No single hotline or database captures the universe of identity theft victims. Some individuals do not even know that they have been victimized until months after the fact, and some known victims may choose not to report to the police, credit bureaus, or established hotlines. Thus, it is difficult to fully or accurately quantify the prevalence of identity theft. Some of the often-quoted estimates of prevalence range from one-quarter to three-quarters of a million victims annually. Usually, these estimates are based on limited hotline reporting or other available data, in combination with various assumptions regarding, for example, the number of victims who do not contact credit bureaus, the FTC, the SSA/OIG, or other authorities. Generally speaking, the higher the estimate of identity theft prevalence, the greater the (1) number of victims who are assumed not to report the crime and (2) number of hotline callers who are assumed to be victims rather than “preventative” callers. We found no information to gauge the extent to which these assumptions are valid. Additionally, there are no readily available statistics on the number of victims who may have contacted their banks or credit card issuers only and not the credit bureaus or other hotlines. Nevertheless, although not specifically or comprehensively quantifiable, the prevalence and cost of identity theft seem to be increasing, according to the available data we reviewed and many officials of the public and private sector entities we contacted.

The following presents summary information for each of the topics that we addressed. More detailed information is presented in appendixes II through V, respectively.

As we reported in 1998, there are no comprehensive statistics on the prevalence of identity theft. Similarly, during our current review, various officials noted that precise, statistical measurement of identity theft trends is difficult due to a number of factors.
Generally, federal law enforcement agencies do not have information systems that facilitate specific tracking of identity theft cases. For example, while the amendments made by the Identity Theft Act are included as subsection (a)(7) of section 1028, Title 18 of the U.S. Code, EOUSA does not have comprehensive statistics on offenses charged specifically under that subsection. EOUSA officials explained that, except for certain firearms statutes, docketing staff are asked to record cases under only the U.S. Code section, not the subsection or the sub-subsection. Also, the FBI and the Secret Service noted that identity theft is not typically a stand-alone crime; rather, identity theft is almost always a component of one or more white-collar or financial crimes, such as bank fraud, credit card or access device fraud, or the use of counterfeit financial instruments.

Nonetheless, while recognizing measurement difficulties, a number of data sources can be used as proxies or indicators for gauging the prevalence of such crime. These sources can include consumer complaints and hotline allegations, as well as law enforcement investigations and prosecutions of identity theft-related crimes such as bank fraud and credit card fraud. Each of these various sources or measures seems to indicate that the prevalence of identity theft is growing:

Consumer reporting agency data. In the view of consumer reporting agency officials, the most reliable indicator of the incidence of identity theft is the number of 7-year fraud alerts placed on consumer credit files. Fraud alerts constitute a warning that someone may be using the consumer’s personal information to fraudulently obtain credit. Thus, a purpose of the alert is to advise credit grantors to conduct additional identity verification or contact the consumer directly before granting credit.
One of the three consumer reporting agencies estimated that its 7-year fraud alerts involving identity theft increased 36 percent over 2 recent years—from about 65,600 in 1999 to 89,000 in 2000. A second agency reported that its 7-year fraud alerts increased about 53 percent in recent comparative 12-month periods; that is, the number increased from 19,347 during one 12-month period (July 1999 through June 2000) to 29,593 during the more recent period (July 2000 through June 2001). The third agency reported about 92,000 fraud alerts for 2000 but was unable to provide information for any earlier year. Also, due largely to increased public awareness about identity theft, the number of inquiries received by the fraud units of consumer reporting agencies is at an all-time high. However, an industry official opined that the number of inquiries is not a reasonable measure of the incidence of identity theft because virtually all individuals whose wallet or purse is lost or stolen will now call the consumer reporting agencies as a precautionary measure.

FTC data. From its establishment in November 1999 through September 2001, FTC’s Identity Theft Data Clearinghouse received a total of 94,100 complaints from victims, including 16,784 complaints transferred to the FTC from the SSA/OIG. In the first month of operation, the Clearinghouse answered an average of 445 calls per week. By March 2001, the average number of calls answered had increased to over 2,000 per week. In December 2001, the weekly average was about 3,000 answered calls. However, FTC officials noted that identity theft-related statistics may, in part, reflect enhanced consumer awareness and reporting.

SSA/OIG data. SSA/OIG has reported a substantial increase in call-ins of identity theft-related allegations to its Fraud Hotline in recent years. Allegations involving SSN misuse, for example, increased more than fivefold, from about 11,000 in fiscal year 1998 to about 65,000 in fiscal year 2001.
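The growth rates cited for the fraud alerts and hotline allegations can be reproduced from the rounded figures in the text. The short calculation below is only a sketch to make the arithmetic explicit:

```python
def pct_increase(old: float, new: float) -> int:
    """Percentage change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# 7-year fraud alerts, first agency: ~65,600 (1999) to 89,000 (2000)
print(pct_increase(65_600, 89_000))   # 36

# 7-year fraud alerts, second agency: 19,347 to 29,593 (comparative 12-month periods)
print(pct_increase(19_347, 29_593))   # 53

# SSA/OIG SSN-misuse allegations: ~11,000 (FY 1998) to ~65,000 (FY 2001)
print(round(65_000 / 11_000, 1))      # 5.9 -- "more than fivefold"
```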
To some extent, the increased number of allegations may be due to additional Fraud Hotline staffing, which increased from 11 to over 50 personnel during this period. However, SSA/OIG officials attributed the trend in allegations partly to a greater incidence of identity theft. Also, irrespective of staffing levels, SSA/OIG data indicate that about 81 percent of all allegations of SSN misuse relate directly to identity theft.

Federal law enforcement data. Generally, although federal law enforcement agencies do not have information systems that facilitate specific tracking of identity theft cases, the agencies provided us case statistics for identity theft-related crimes. Regarding bank fraud, for instance, the FBI reported that its arrests increased from 579 in 1998 to 645 in 2000—and were even higher (691) in 1999. The Secret Service reported that, for recent years, it has redirected its identity theft-related efforts to focus on high-dollar, community-impact cases. Thus, even though the total number of identity theft-related cases closed by the Secret Service decreased from 8,498 in fiscal year 1998 to 7,071 in fiscal year 2000, the amount of fraud losses prevented in these cases increased from a reported average of $73,382 in 1998 to an average of $217,696 in 2000. The Postal Inspection Service, in its fiscal year 2000 annual report, noted that identity theft is a growing trend and that the agency’s investigations of such crime had “increased by 67 percent since last year.” (See app. II.)

We found no comprehensive estimates of the cost of identity theft to the financial services industry. Some data on identity theft-related losses—such as direct fraud losses reported by the American Bankers Association (ABA) and payment card associations—indicated increasing costs. Other data, such as staffing of the fraud departments of banks and consumer reporting agencies, presented a mixed and/or incomplete picture.
For example, one consumer reporting agency reported that staffing of its fraud department had doubled in recent years, whereas another agency reported relatively constant staffing levels. Furthermore, despite concerns about security and privacy, the use of e-commerce has grown steadily in recent years. Such growth may indicate greater consumer confidence but may also have resulted from an increase in the number of people who have access to Internet technology.

Regarding direct fraud losses, in its year 2000 bank industry survey on check fraud, the ABA reported that total check fraud-related losses against commercial bank accounts—considering both actual losses ($679 million) and loss avoidance ($1.5 billion)—reached an estimated $2.2 billion in 1999, which was twice the amount in 1997. Regarding actual losses, the report noted that the 1999 figure ($679 million) was up almost 33 percent from the 1997 estimate ($512 million). However, not all check fraud-related losses were attributed to identity theft, which the ABA defined as account takeovers (or true name fraud). Rather, the ABA reported that, of the total check fraud-related losses in 1999, the percentages attributable to identity theft ranged from 56 percent for community banks (assets under $500 million) to 5 percent for superregional/money center banks (assets of $50 billion or more), and the average for all banks was 29 percent.

The two major payment card associations, MasterCard and Visa, use very similar (although not identical) definitions regarding which categories of fraud constitute identity theft. Generally, the associations consider identity theft to consist of two fraud categories—account takeovers and fraudulent applications. Based on these two categories, the associations’ aggregated identity theft-related losses from domestic (U.S.) operations rose from $79.9 million in 1996 to $114.3 million in 2000, an increase of about 43 percent.
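As a quick check, the loss figures cited above are internally consistent when recomputed from their components. The sketch below uses only the rounded dollar amounts given in the text:

```python
# ABA check fraud, 1999 (dollars in millions)
actual_1999, avoided_1999 = 679, 1_500
total_1999 = actual_1999 + avoided_1999
print(total_1999)   # 2179 -> "an estimated $2.2 billion"

# Growth in actual check fraud losses, 1997 to 1999
actual_1997 = 512
print(round((actual_1999 - actual_1997) / actual_1997 * 100))   # 33

# Growth in the card associations' identity theft-related losses, 1996 to 2000
print(round((114.3 - 79.9) / 79.9 * 100))   # 43
```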
The associations’ definitions of identity theft-related fraud are relatively narrow, in the view of law enforcement, which considers identity theft as encompassing virtually all categories of payment card fraud. Under this broader definition, the associations’ total fraud losses from domestic operations rose from about $700 million in 1996 to about $1.0 billion in 2000, an increase of about 45 percent. However, according to the associations, the annual total fraud losses represented about 1/10th of 1 percent or less of U.S. member banks’ annual sales volume during 1996 through 2000. Generally, the fraud losses are borne by the respective financial institution that issued the payment card.

To reiterate, regarding direct fraud losses involving payment cards, we contacted MasterCard and Visa only. We did not obtain information about losses involving other general-purpose cards (American Express, Diners Club, and Discover), which account for about 25 percent of the market. Also, we did not obtain information about losses involving merchant-specific cards issued by retail stores. Furthermore, we did not obtain information from various other entities, such as insurance companies and securities firms, which may incur identity theft-related costs.

Regarding staffing and cost of fraud departments, in its year 2000 bank industry survey on check fraud, the ABA reported that the amount of resources that banks devoted to check fraud prevention, detection, investigation, and prosecution varied according to bank size. For check fraud-related operating expenses (not including actual losses) in 1999, the ABA reported that over two-thirds of the 446 community banks that responded to the survey each spent less than $10,000, and about one-fourth of the 11 responding superregional/money center banks each spent $10 million or more for such expenses.
One national consumer reporting agency told us that staffing of its Fraud Victim Assistance Department doubled in recent years, increasing from 50 individuals in 1997 to 103 in 2001. The total cost of the department was reported to be $4.3 million for 2000. Although not as specific, a second agency reported that the cost of its fraud assistance staffing was “several million dollars.” The third consumer reporting agency said that the number of fraud operators in its Consumer Services Center had increased in the 1990s but has remained relatively constant at about 30 to 50 individuals since 1997.

Regarding consumer confidence in online commerce, despite concerns about security and privacy, the use of e-commerce by consumers has steadily grown. For example, in the year 2000 holiday season, consumers spent an estimated $10.8 billion online, which represented more than a 50-percent increase over the $7 billion spent during the 1999 holiday season. Furthermore, in 1995, only one bank had a Web site capable of processing financial transactions but, by 2000, a total of 1,850 banks and thrifts had such Web sites. The growth in e-commerce could indicate greater consumer confidence but could also result from the increasing number of people who have access to and are becoming familiar with Internet technology. According to an October 2000 Department of Commerce report, Internet users comprised about 44 percent (approximately 116 million people) of the U.S. population in August 2000. This was an increase of about 38 percent from 20 months prior. According to Commerce’s report, the fastest growing online activity among Internet users was online shopping and bill payment, which grew at a rate of 52 percent in 20 months. (See app. III.)

Identity theft can cause substantial harm to the lives of individual citizens—potentially severe emotional or other nonmonetary harm, as well as economic harm.
Even though financial institutions may not hold victims liable for fraudulent debts, victims nonetheless often feel “personally violated” and have reported spending significant amounts of time trying to resolve the problems caused by identity theft—problems such as bounced checks, loan denials, credit card application rejections, and debt collection harassment. For the 23-month period from its establishment in November 1999 through September 2001, the FTC Identity Theft Data Clearinghouse received 94,100 complaints from victims, including complaint data contributed by SSA/OIG. The leading types of nonmonetary harm cited by consumers were “denied credit or other financial services” (mentioned in over 7,000 complaints) and “time lost to resolve problems” (mentioned in about 3,500 complaints). Also, in nearly 1,300 complaints, identity theft victims alleged that they had been subjected to “criminal investigation, arrest, or conviction.” Regarding monetary harm, FTC Clearinghouse data for the 23-month period indicated that 2,633 victims reported dollar amounts as having been lost or paid as out-of-pocket expenses as a result of identity theft. Of these 2,633 complaints, 207 each alleged losses above $5,000; another 203 each alleged losses above $10,000. From its database of identity theft victims, after obtaining the individuals’ consent, FTC provided us the names and telephone numbers of 10 victims, whom we contacted to obtain an understanding of their experiences. In addition to the types of harm mentioned above, several of the victims expressed feelings of “invaded privacy” and “continuing trauma.” In particular, such “lack of closure” was cited when elements of the crime involved more than one jurisdiction and/or if the victim had no awareness of any arrest being made. 
For instance, some victims reported being able to file a police report in their state of residence but were unable to do so in other states where the perpetrators committed fraudulent activities using the stolen identities. Only 2 of the 10 victims told us they were aware that the perpetrator had been arrested.

In a May 2000 report, two nonprofit advocacy entities—the California Public Interest Research Group (CALPIRG) and the Privacy Rights Clearinghouse—presented findings based on a survey (conducted in the spring of 2000) of 66 identity theft victims who had contacted these organizations. According to the report, the victims spent 175 hours, on average, actively trying to resolve their identity theft-related problems. Also, not counting legal fees, most victims estimated spending $100 for out-of-pocket costs. The May 2000 report stated that these findings may not be representative of the plight of all victims. Rather, the report noted that the findings should be viewed as “preliminary and representative only of those victims who have contacted our organizations for further assistance (other victims may have had simpler cases resolved with only a few calls and felt no need to make further inquiries).” (See app. IV.)

Regarding identity theft and any other type of crime, the federal criminal justice system incurs costs associated with investigations, prosecutions, incarceration, and community supervision. Generally, we found that federal agencies do not separately maintain statistics on the person hours, portions of salary, or other distinct costs that are specifically attributable to cases involving identity theft. As an alternative, some of the agencies provided us with average cost estimates based, for example, on workyear counts for white-collar crime cases—a category that covers financial crimes, including identity theft.
In response to our request, the FBI estimated that the average cost of an investigative matter handled by the agency’s white-collar crime program was approximately $20,000 during fiscal years 1998 to 2000, based on budget and workload data for the 3 years. However, an FBI official cautioned that the average cost figure has no practical significance because it does not capture the wide variance in the scope and costs of white-collar crime investigations. Also, the official cautioned that—while identity theft is frequently an element of bank fraud, wire fraud, and other types of white-collar or financial crimes—some cases (including some high-cost cases) do not involve elements of identity theft. Similarly, Secret Service officials—in responding to our request for an estimate of the average cost of investigating financial crimes that included identity theft as a component—said that cases vary so much in their makeup that to put a figure on average cost is not meaningful. Nonetheless, the agency’s Management and Organization Division made its “best estimate of the average cost” of a financial crimes investigation conducted by the Secret Service in fiscal year 2001. The resulting estimate was approximately $15,000. Secret Service officials noted that this estimate was for a financial crimes investigation and not specifically for an identity theft investigation. Also, the officials emphasized that, in the absence of specific guidelines establishing a standard methodology, average-cost figures provide no basis for making interagency comparisons. SSA/OIG officials responded that the agency’s information systems do not record time spent by function to permit making an accurate estimate of what it costs the OIG to investigate cases of SSN misuse. 
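The officials’ caution about average-cost figures can be illustrated with a small hypothetical example: two caseloads can share the same mean cost per case while having very different cost profiles, so the mean alone says little about any individual investigation. All dollar figures below are invented for illustration:

```python
import statistics

# Hypothetical caseloads: same mean cost per case, very different spread.
routine_cases = [18_000, 19_000, 20_000, 21_000, 22_000]
skewed_cases  = [2_000, 3_000, 4_000, 5_000, 86_000]   # one high-cost case

for label, cases in (("routine", routine_cases), ("skewed", skewed_cases)):
    mean = statistics.mean(cases)
    spread = statistics.stdev(cases)
    print(f"{label}: mean = ${mean:,.0f}, sample stdev = ${spread:,.0f}")
```

Both caseloads average $20,000 per case, yet the second is dominated by a single expensive investigation, which is the kind of variance the FBI and Secret Service said makes interagency or case-level comparisons of averages unreliable.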
Also, in commenting on a draft of this report, the Commissioner, SSA, said that SSA/OIG’s priorities are appropriately targeted to SSA’s program integrity areas and business processes rather than specifically to identity theft, which is investigated by many different federal and state agencies.

Regarding prosecutions, in fiscal year 2000, federal prosecutors dealt with approximately 13,700 white-collar crime cases, at an estimated average cost of about $11,400 per case, according to EOUSA. The total cases included those that were closed in the year, those that were opened in the year, and those that were still pending at yearend. EOUSA noted that the $11,400 figure was an estimate and that the actual cost could be higher or lower.

According to Bureau of Prisons (BOP) officials, federal offenders convicted of white-collar crimes generally are incarcerated in minimum-security facilities. For fiscal year 2000, the officials said that the cost of operating such facilities averaged about $17,400 per inmate. After being released from BOP custody, offenders are typically supervised in the community by federal probation officers for a period of 3 to 5 years. For fiscal year 2000, according to the Administrative Office of the United States Courts, the cost of community supervision averaged about $2,900 per offender—which is an average for “regular supervision” without special conditions, such as community service, electronic monitoring, or substance abuse treatment. (See app. V.)

Since our May 1998 report, various actions—particularly passage of federal and state statutes—have been taken to address identity theft. The federal statute, enacted in October 1998, made identity theft a separate crime against the person whose identity was stolen, broadened the scope of the offense to include the misuse of information as well as documents, and provided punishment—generally, a fine or imprisonment for up to 15 years or both. Under U.S.
Sentencing Commission guidelines—even if (1) there is no monetary loss and (2) the perpetrator has no prior criminal convictions—a sentence as high as 10 to 16 months’ incarceration can be imposed.

Regarding state statutes, at the time of our 1998 report, very few states had specific laws to address identity theft. Now, less than 4 years later, a large majority of states have enacted identity theft statutes. In short, federal and state legislation indicates that identity theft has been widely recognized as a serious crime across the nation. As such, a current focus for policymakers and criminal justice administrators is to ensure that relevant legislation is effectively enforced. Given the frequently cross-jurisdictional nature of identity theft crime, enforcement of the relevant federal and state laws presents various challenges, particularly regarding coordination of efforts. Although we have not evaluated them, initiatives designed to address these challenges include the following:

After enactment of the 1998 Identity Theft Act, the Attorney General’s Council on White Collar Crime established a Subcommittee on Identity Theft. Purposes of the Subcommittee are to foster coordination of investigative and prosecutorial strategies and promote consumer education programs. Subcommittee leadership is vested in the Fraud Section of the Department of Justice’s Criminal Division, and membership includes representatives from various Justice, Treasury, and State Department components; SSA/OIG; the FTC; federal regulatory agencies, such as the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation; and professional organizations, such as the International Association of Chiefs of Police (IACP), the National Association of Attorneys General, and the National District Attorneys Association.
Various identity theft task forces, with multiagency participation (including state and local law enforcement), have been established to investigate and prosecute cases. Such task forces enable law enforcement to more effectively pursue cases that have multijurisdictional elements, such as fraudulent schemes that involve illegal activities in multiple counties or states. At the time of our review, the Secret Service was the lead agency in 37 task forces across the country that were primarily targeting financial and electronic crimes, many of which may include identity theft-related elements.

Also, under the 1998 Identity Theft Act, the FTC established a toll-free number for victims to call and is compiling complaint information in a national Identity Theft Data Clearinghouse. FTC’s Consumer Sentinel Network makes this information available to federal, state, and local law enforcement. According to FTC staff, use of the Consumer Sentinel Network enables law enforcement to coordinate efforts and to pinpoint high-impact or other significant episodes of identity theft.

Furthermore, there is general agreement that, in addition to investigating and prosecuting perpetrators, a multipronged approach to combating identity theft must include prevention efforts, such as limiting access to personal information. In this regard, federal law enacted in 1999, the Gramm-Leach-Bliley Act, directed financial institutions—banks, savings associations, credit unions, broker-dealers, investment companies, investment advisers, and insurance companies—to have policies, procedures, and controls in place to prevent the unauthorized disclosure of customer financial information and to deter fraudulent access to such information.
Prevention efforts by financial institutions are particularly important, given FTC data showing that a large majority of consumer complaints regarding identity theft involve financial services—new credit card accounts opened, existing credit card accounts used, new deposit accounts opened, and newly obtained loans. Finally, given indications that the prevalence and cost of identity theft have increased in recent years, most observers agree that such crime warrants continued attention from law enforcement, industry, and consumers. Also, due partly to the growth of the Internet and other communications technologies, there is general consensus that the opportunities for identity theft are not likely to decline.

On February 5, 2002, we provided a draft of this report for comment to the Departments of Justice and the Treasury, FTC, SSA, and the Postal Inspection Service. The various agencies either expressed agreement with the information presented in the report or provided technical comments and clarifications, which have been incorporated in this report where appropriate. Also, the Commissioner, SSA, offered additional perspectives to clarify that the role of the SSA/OIG is to protect SSA’s programs and operations from fraud, waste, and abuse. That is, the Commissioner noted that the SSA/OIG’s priorities are appropriately targeted to SSA’s program integrity areas and business processes. On the other hand, the Commissioner said that most identity theft allegations referred to SSA/OIG are not related to these areas and processes. The Commissioner commented that identity theft is a serious crime and that many federal and state agencies have a role in investigating such crime.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days after its issue date.
At that time, we will send copies to interested congressional committees and subcommittees; the Attorney General; the Secretary of the Treasury; the Chief Postal Inspector, U.S. Postal Inspection Service; the Commissioner, SSA; and the Chairman, FTC. We will also make copies available to others on request. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or Danny R. Burton at (214) 777-5600. Other key contributors are acknowledged in appendix VII.

In response to a request from Senator Dianne Feinstein, Chairwoman, and Senator Jon Kyl, Ranking Minority Member, Subcommittee on Technology, Terrorism and Government Information, Senate Committee on the Judiciary, and Senator Charles E. Grassley, we developed information on the extent or prevalence of identity theft; the cost of identity theft to the financial services industry, including direct fraud losses, staffing of fraud departments, and effect on consumer confidence in online commerce; the cost of identity theft to victims, including victim productivity losses, out-of-pocket expenses, and cost of being denied credit; and the cost of identity theft to the federal criminal justice system. The following sections discuss the scope and methodology of our work.

To obtain information on the extent or prevalence of identity theft, we contacted private and public sector entities that could provide broad or national perspectives. For example, we contacted entities that operate call-in centers for receiving consumer complaints and hotline allegations, as well as federal law enforcement agencies responsible for investigating and prosecuting identity theft-related crimes. We did not canvass state and local law enforcement agencies.
In contacting each of the following entities, we obtained relevant statistics and discussed with responsible officials any qualifications or caveats associated with the data:

The three national consumer reporting agencies—Equifax, Inc.; Experian Information Solutions, Inc.; and Trans Union, LLC. Each agency has a call-in center that receives complaints or allegations from consumers. In obtaining statistics from the three agencies, we agreed to report the information in a manner not specifically identifiable to the respective agency.

The Federal Trade Commission (FTC), which operates a toll-free telephone hotline for consumers to report identity theft.

The Social Security Administration’s Office of the Inspector General, which operates a hotline to receive allegations of Social Security number misuse and program fraud.

Two Department of Justice law enforcement components—the Executive Office for U.S. Attorneys (EOUSA) and the Federal Bureau of Investigation (FBI).

Three Department of the Treasury law enforcement components—the Internal Revenue Service (IRS), the Secret Service, and the Financial Crimes Enforcement Network (FinCEN).

The Postal Inspection Service, a leading federal law enforcement agency that investigates the theft of mail or use of the mail to defraud individuals or financial institutions.

In obtaining information on the cost of identity theft to the financial services industry, we focused on three categories—(1) direct fraud losses, (2) staffing and operating cost of fraud departments, and (3) consumer confidence in online commerce. The scope of our work focused primarily on obtaining information from banks, two payment card associations (MasterCard and Visa), and the national consumer reporting agencies. We did not obtain information about fraud losses involving other general-purpose cards (American Express, Diners Club, and Discover), nor losses involving merchant-specific cards issued by retail stores.
Furthermore, we did not obtain information from various other entities, such as insurance companies and securities firms, which may incur identity theft-related costs.

Regarding direct fraud losses, we reviewed recent surveys of banks conducted by the American Bankers Association (ABA). For instance, one survey—Deposit Account Fraud Survey Report 2000—provided information about the percentages of total check fraud-related losses attributable to identity theft in 1999. However, we believe that the results from the ABA’s Report 2000 should be interpreted with caution. Although the ABA surveyed a national probability sample of all commercial and savings banks, the overall response rate—that is, the number of completed questionnaires divided by the number of sent questionnaires—was only 11 percent. The response rates stratified by bank size were as follows:

10 percent for community banks (assets under $500 million), the large majority of all banks.

16 percent for mid-size banks (assets of $500 million to under $5 billion).

27 percent for regional banks (assets of $5 billion to under $50 billion).

65 percent for superregional/money center banks (assets of $50 billion or more).

Surveys with a low level of responses—particularly surveys with response rates lower than 50 percent—could be affected by nonresponse bias. In other words, if a survey has a low response rate, and if respondents differ in important ways from those who did not respond, the survey results could be biased. For instance, if banks with little or no fraud losses tend not to respond, then survey estimates about the percentage of banks nationwide that regard identity theft as a problem could be overstated. ABA staff did not conduct any follow-up analyses to find out whether the banks that responded were different from the banks that did not respond.
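The nonresponse-bias concern can be made concrete with a small hypothetical example: if banks that experience fraud losses are more likely to return the questionnaire than banks that do not, the survey’s estimate of how widespread the problem is will overstate the true share. All numbers below are invented for illustration:

```python
# Hypothetical population of 1,000 surveyed banks.
population = 1_000
affected = 300        # banks that regard fraud as a problem (true share: 30%)
unaffected = population - affected

# Hypothetical response counts: affected banks respond at 20%, others at 10%.
resp_affected = affected * 20 // 100      # 60 responses
resp_unaffected = unaffected * 10 // 100  # 70 responses

survey_estimate = resp_affected / (resp_affected + resp_unaffected)
print(f"true share:      {affected / population:.0%}")   # 30%
print(f"survey estimate: {survey_estimate:.0%}")         # 46% -- biased upward
```

Because who responds is correlated with the quantity being measured, the estimate is biased even though every returned questionnaire is answered truthfully; this is why a follow-up comparison of respondents and nonrespondents matters at low response rates.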
ABA staff said that they were not concerned about the survey’s response rate because they believed that the survey had adequate coverage of banking industry assets and losses by virtue of having a good representation of large banks (i.e., regional banks and superregional/money center banks). The ABA staff noted, for instance, that most assets and dollar losses in the banking industry are with larger banks. Furthermore, regarding direct fraud losses, two major payment card associations (MasterCard and Visa) provided us with information on their identity theft-related fraud losses. As mentioned previously, we did not obtain information about direct fraud losses involving other general- purpose cards (American Express, Diners Club, and Discover), nor losses involving merchant-specific cards issued by retail stores. However, to obtain additional perspectives on direct fraud losses, we contacted the top 14 credit-card issuing banks. Six of the banks provided us with information. Generally, the other eight banks (1) chose not to respond, partly because of concerns about the release and use of proprietary information, or (2) asked that we seek to obtain the information from the Consumer Bankers Association. However, citing definitional differences among financial institutions, the Consumer Bankers Association was unable to provide us with information on identity theft-related fraud losses. Regarding staffing and cost of fraud departments, we obtained information from the ABA’s 2000 survey report and from the six banks, mentioned previously. Also, we contacted each of the three national consumer reporting agencies to discuss the staffing levels and the costs associated with the respective entity’s fraud or victim assistance department. 
Furthermore, regarding consumer confidence in online commerce, we conducted a literature search and reviewed relevant congressional hearings and testimony statements made by officials from FTC, the Department of Justice, and a major credit card issuer. Also, officials at five of the six banks we contacted offered comments about the impact of identity theft on consumer confidence in using e-commerce. In response to our inquiry, FTC staff provided us with statistical information on the types of nonmonetary harm (e.g., denied credit or other financial services) and monetary harm (e.g., out-of-pocket expenses) reported by identity theft victims. This information was based on complaints reported to the FTC’s Identity Theft Data Clearinghouse during the period November 1999 through June 2001. Furthermore, at our request and after obtaining the individuals’ consent, FTC staff provided us with the names and telephone numbers of a small cross section of identity theft victims (10 total) to interview. According to FTC staff, the 10 victims were selected to illustrate the range in the types of identity theft activities reported by victims. The experiences of these 10 victims are not statistically representative of all identity theft victims. Also, we reviewed and summarized information from a May 2000 report prepared by two nonprofit advocacy entities—the California Public Interest Research Group (CALPIRG) and the Privacy Rights Clearinghouse. The report presented findings based on a survey (conducted in the spring of 2000) of 66 identity theft victims who had contacted these organizations. As agreed with the requesters’ offices, to obtain estimates of the cost of identity theft to the criminal justice system, we focused on federal agencies only and did not attempt to quantify the cost of state and local law enforcement activities. 
Thus, our efforts focused on obtaining information about the cost associated with federal investigations, prosecutions, incarceration, and community supervision. Generally, we found that federal agencies do not maintain cost data specifically attributable to cases involving identity theft. As an alternative, we asked the agencies to provide us with average cost estimates based, for example, on white-collar crime cases—a category that covers financial crimes, including identity theft. Specifically, we contacted the following federal agencies:

- The FBI and the Secret Service were asked to provide data on the respective agency’s average cost of investigating white-collar crimes.
- The SSA/OIG was asked to provide an estimate for investigating cases involving SSN misuse.
- EOUSA was asked to provide data on the average cost of prosecuting white-collar crimes.
- The federal Bureau of Prisons was asked to provide data on the average cost of incarcerating felons convicted of white-collar crimes.
- The Administrative Office of the United States Courts was asked to provide data on the average cost of supervising white-collar crime offenders in the community.

This appendix presents information about the prevalence of identity theft, that is, the extent or incidence of such theft. Some individuals do not even know that they have been victimized until months after the fact, and some known victims may choose not to report to the police, credit bureaus, or established hotlines. Thus, it is difficult to fully or accurately quantify the prevalence of identity theft. Some of the often-quoted estimates of prevalence range from one-quarter to three-quarters of a million victims annually. Usually, these estimates are based on limited hotline reporting or other available data, in combination with various assumptions regarding, for example, the number of victims who do not contact credit bureaus, the FTC, the SSA/OIG, or other authorities. 
Generally speaking, the higher the estimate of identity theft prevalence, the greater the (1) number of victims who are assumed not to report the crime and (2) number of hotline callers who are assumed to be victims rather than “preventative” callers. We found no information to gauge the extent to which these assumptions are valid. Additionally, there are no readily available statistics on the number of victims who may have contacted their banks or credit card issuers only and not the credit bureaus or other hotlines. As we reported in 1998, there are no comprehensive statistics on the prevalence of identity theft. Similarly, during our current review, various officials noted that precise, statistical measurement of identity theft trends is difficult due to a number of factors. The Secret Service noted, for instance, that identity theft is not typically a stand-alone crime; rather, identity theft is almost always a component of one or more crimes, such as bank fraud, credit card or access device fraud, or the use of counterfeit financial instruments. Nonetheless, while recognizing these measurement difficulties, officials pointed to a number of data sources that can be used as proxies or indicators for gauging the prevalence of such crime. These sources include consumer complaints and hotline allegations as well as law enforcement investigations and prosecutions. Each of these various sources or measures seems to indicate that the prevalence of identity theft is growing. 
This appendix summarizes statistical and related information we obtained from the three national consumer reporting agencies (CRAs) that have call-in centers for reporting identity fraud or theft; the Federal Trade Commission (FTC), which maintains a database of complaints concerning identity theft; the Social Security Administration’s Office of the Inspector General (SSA/OIG), which operates a hotline to receive allegations of SSN misuse and program fraud; and federal law enforcement agencies—Department of Justice components, Department of the Treasury components, and the Postal Inspection Service—responsible for investigating and prosecuting identity theft-related cases. Statistics provided to us by the three national CRAs included the number and types of fraud alerts placed on consumers’ credit files, as well as the number of inquiries (call volume) received by the fraud units of the CRAs. Generally, fraud alerts constitute a warning that someone may be using the consumer’s personal information to fraudulently obtain credit. Thus, a purpose of the alert is to advise credit grantors to conduct additional identity verification or contact the consumer directly before granting credit. Due largely to increased public awareness about identity fraud, the number of inquiries received by the fraud units of CRAs is at an all-time high. For instance, a senior official of one CRA told us that his agency’s fraud unit experienced an 84-percent increase in inquiries from 1998 to 2000. Now, the CRA official opined, virtually all individuals whose wallet or purse is lost or stolen will call a CRA as a precautionary measure. According to industry officials, individuals who suspect that they have been the victims of fraud will generally contact all three national CRAs rather than just one or two. Thus, industry officials told us that there probably is a high degree of overlap in each CRA’s respective fraud statistics. 
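This overlap point has a practical consequence for interpreting the figures in this appendix: because a victim typically contacts all three CRAs, summing the three agencies' counts triple-counts many individuals, and the unique-victim count is the size of the union, not the sum. A minimal illustration with made-up identifiers (not actual CRA data):

```python
# Hypothetical victim identifiers reported to each CRA; most victims
# appear on all three lists, so the per-agency totals overlap heavily.
agency_a = {"v01", "v02", "v03", "v04", "v05"}
agency_b = {"v01", "v02", "v03", "v04", "v06"}
agency_c = {"v01", "v02", "v03", "v05", "v06"}

naive_total = len(agency_a) + len(agency_b) + len(agency_c)   # sums to 15
unique_victims = len(agency_a | agency_b | agency_c)          # union has only 6

print(naive_total, unique_victims)
```

In this sketch the naive sum (15) is two and a half times the number of distinct victims (6), which is why the per-agency statistics below cannot simply be added together.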
Also, the officials said that any large variations in reported statistics among the national CRAs are generally the result of different methods for classifying fraud-related inquiries. In obtaining statistics from the three national CRAs, we agreed to report the information in a manner not specifically identifiable to the respective agency. Thus, in the following sections, we refer to the three sources as “Agency A,” “Agency B,” and “Agency C.”

Agency A: Number of Files with Fraud Alerts

Agency A officials provided us with trend statistics on the number of individual credit files that had a 7-year fraud alert posted by the agency’s fraud victim assistance division. Regarding the total number of consumers helped by this division, the officials said that the number of fraud alert postings is a better indicator than the number of consumer contacts with the division. The officials explained that the number of consumer contacts may include some double counting. For instance, the same consumer may call or write the fraud victim assistance division more than once. In contrast, for any given time period, the agency will post a fraud alert only once to an individual consumer’s file. Thus, there is no double counting in these statistics. Furthermore, the officials noted that, based on the agency’s best judgment and years of experience with 7-year fraud alert postings, the reasons for such postings can be grouped into three categories:

- About 50 percent of the postings are based on preventative calls from consumers rather than actual or verified instances of fraud. Generally, these consumers request a fraud alert from the standpoint of being “safe rather than sorry”—a preventative approach.
- Another 25 percent of the postings are based on credit card account takeovers. The agency does not define or consider these postings as involving “identity fraud.”
- The remaining 25 percent of the postings are based on identity fraud. Most of these instances involve fraudulent credit card applications.

Using these groupings and estimated percentages, Agency A officials developed the 7-year fraud alert data presented in table 1. As indicated, the estimated number of consumers who had their credit files impacted by identity fraud increased about threefold in recent years—from an estimated 27,800 for calendar year 1995 to an estimated 89,000 for calendar year 2000. The most recent year’s estimate (89,000 consumer files in 2000) represents an increase of about 36 percent over the 1999 estimate (65,600).

Agency B provides its customers with two types of fraud alerts—a temporary, 90-day security alert and a 7-year victim statement. A security alert requests that a creditor ask for proof of identification before granting credit in that person’s name. A victim statement provides telephone numbers supplied by the consumer and requests that creditors call the consumer before issuing credit in that person’s name. The officials explained that, if a consumer suspects a fraud-related problem, the individual is to initially call the agency’s automated voice response system, which generates a 90-day security alert on the respective credit file. Agency B officials emphasized to us that most of these initial calls are not indicators that the individuals have been actual victims of fraud. Rather, the officials noted that consumers may take action to generate a 90-day security alert for a variety of reasons, such as reaction to a media story on identity fraud; a desire for added protection from identity fraud; suspicion of a relative, coworker, neighbor, or other person; an effort to get out of a legitimate debt or financial obligation; or a host of other reasons not related to fraud. Also, after the 90-day security alert is generated, Agency B’s policy is to provide the consumer a free copy of his or her credit file. 
This policy, according to Agency B officials, is to help ensure that the consumer has a better-informed basis for considering his or her situation and the need for any further action or assistance. Upon receiving and reviewing the credit file copy, the consumer may then follow up with the agency’s call center and speak to a fraud specialist to discuss any suspicious entries on the file. In so doing, the consumer can choose to make a “victim statement,” which will have the effect of extending the fraud alert from 90 days to 7 years. Agency B officials told us that the most reliable indicator of the true incidence of identity fraud that the agency could provide is the number of 7-year victim statements placed on consumer credit files. Relevant statistics (see table 2) provided to us by Agency B indicate that the number of 7-year victim statements increased about 53 percent in recent comparative 12-month periods; that is, the number increased from 19,347 during one 12-month period (July 1999 through June 2000) to 29,593 during the more recent period (July 2000 through June 2001). Agency B officials pointed out that these numbers are relatively small compared with the numbers of initial calls that generated the 90-day security alerts. For the more recent 12-month period, for example, the number of 7-year victim statements (29,593) equates to about 2.5 percent of the initial calls that generated 90-day security alerts. Consumers contacting Agency C can place an initial fraud alert on their credit files either by (1) using an automated voice response system and choosing the fraud option or (2) directly calling the fraud hotline and speaking with an operator at the agency’s Consumer Services Center. Then, after the consumers have had the opportunity to receive and review a copy of their files, they have the option of requesting that a longer-term fraud alert be placed on their files. The duration of such an alert can range from 2 to 7 years, at the discretion of the individual consumer. 
An Agency C official told us that the most reliable metric of fraud, including identity theft, is the number of files with the longer-term (2- to 7-year) fraud alerts. The official said that, in 2000, approximately 92,000 consumers called Agency C to place longer-term fraud alerts on their files. However, the official said that Agency C had no comparative statistics available for earlier years and, thus, could not make any observations about trends in the number of such fraud alerts. The official noted that many consumers who took action to have the longer-term fraud alerts placed on their files generally had some information—such as documentation from a credit grantor, a police report, or an affidavit—indicating that they were the victims of fraud. On the other hand, the official also noted that some consumers had no direct evidence that they were victims but were uncomfortable enough with the information on their credit files to request an extended (2- to 7-year) fraud alert. The official explained that Agency C does not require consumers to submit any particular type of evidence or information in order to have these longer-term fraud alerts placed on their files. The Identity Theft and Assumption Deterrence Act of 1998 requires the FTC to “log and acknowledge the receipt of complaints by individuals who certify that they have a reasonable belief” that one or more of their means of identification have been assumed, stolen, or otherwise unlawfully acquired. In response to this requirement, in November 1999, FTC established the Identity Theft Data Clearinghouse (the FTC Clearinghouse) to gather information from any consumer who wishes to file a complaint or pose an inquiry concerning identity theft. In November 1999, the first month of operation, the FTC Clearinghouse answered an average of 445 calls per week. By March 2001, the average number of calls answered had increased to over 2,000 per week. 
In December 2001, the weekly average was about 3,000 answered calls. At a congressional hearing in September 2000, an FTC official testified that Clearinghouse data demonstrate that identity theft is a “serious and growing problem.” Recently, during our review, FTC staff cautioned that the trend of increased calls to FTC perhaps could be attributed to a number of factors, including increased consumer awareness, and may not be due solely or primarily to an increase in the incidence of identity theft. From its establishment in November 1999 through September 2001, the Clearinghouse received a total of 94,100 complaints from identity theft victims. As table 3 shows, five states accounted for about 44 percent of the total complaints. Furthermore, the FTC data for November 1999 through September 2001 showed that FTC received 500 or more identity theft complaints from each of 13 cities. Of these, New York City had the highest number of complaints (3,916), followed by Chicago (1,620), Los Angeles (1,487), Houston (1,282), Miami (941), Philadelphia (695), San Francisco (621), Las Vegas (572), Phoenix (570), District of Columbia (542), San Diego (539), Dallas (537), and Atlanta (517). As table 4 shows, of the total identity theft complaints (94,100) reported to the FTC during November 1999 through September 2001, the majority of the victims (about 62 percent of the complaints) were unaware of the methods that the suspects had used to obtain the victims’ personal information, and in another 18 percent of the cases, this type of information was not collected. Of the remaining 19,241 complaints, or about 20 percent of the 94,100 total complaints reported to the FTC for the 23-month period, the victims provided the FTC information about the various methods used by suspects. 
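As a quick arithmetic check, the complaint shares cited above follow directly from the figures in the text:

```python
# Figures from the FTC Clearinghouse data cited above (Nov. 1999 - Sept. 2001).
total_complaints = 94_100
method_reported = 19_241   # complaints in which the victim reported one or more methods

share = method_reported / total_complaints
print(f"share with method reported: {share:.1%}")   # about 20 percent, as stated
# The remainder (~80 percent) matches the two categories above:
# ~62 percent where the victim was unaware plus ~18 percent not collected.
```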
FTC data indicated that in cases where the identity theft victim knew how the identity theft had occurred, “access through relationship with victim” (e.g., family member, neighbor, or coworker) was the most prevalent method used by suspects to obtain personal information. Specifically, this method accounted for 10,101 complaints for which the victim reported one or more methods used to obtain his or her personal information. Additional information about the 10,101 cases involving “access through relationship with victim” is presented in table 5. As shown, in 4,629 of the 10,101 cases where the victim knew the suspect, the victim and the suspect were family members. However, table 5 further indicates that the 10,101 cases represent less than 11 percent of the total 94,100 complaints received by the FTC during November 1999 through September 2001. SSA/OIG operates a Hotline to receive allegations of fraud, waste, and abuse. According to SSA/OIG officials, until about mid-February 2001, Hotline staff had no procedures for specifically categorizing any incoming calls as involving identity theft allegations. Rather, in recent years, the allegations most likely to involve identity theft were recorded by Hotline staff as either (1) SSN misuse or (2) program fraud, which may contain elements of SSN misuse potential. SSA/OIG officials explained these two categories of allegations as follows: Allegations of “SSN misuse” included, for example, incidents wherein a criminal used the SSN of another individual for the purpose of fraudulently obtaining credit, establishing utility services, or acquiring goods. Generally, this category of allegations does not directly involve SSA program benefits. On the other hand, allegations of fraud in SSA programs for the aged or disabled often entailed some element of SSN misuse. For example, a criminal may have used the victim’s SSN or other identifying information for the purpose of obtaining Social Security benefits. 
When Hotline staff received this type of allegation, it was to be classified in the appropriate program fraud category, which may also have SSN misuse potential. As shown in table 6, the number of Fraud Hotline allegations in both of these categories increased substantially in recent years. That is, the number of SSN misuse allegations increased more than fivefold, from 11,058 in fiscal year 1998 to 65,220 in fiscal year 2001, and the number of allegations of program fraud with SSN misuse potential more than doubled, from 14,542 in 1998 to 38,883 in 2001. To some extent, the increased number of allegations may be due to additional Fraud Hotline staffing, which increased from 11 to over 50 personnel during this period. However, SSA/OIG officials attributed the trend in allegations partly to a greater incidence of identity fraud. As mentioned previously, for most of the years shown in table 6, SSA/OIG had no procedures for specifically categorizing incoming calls as involving identity theft allegations. However, in 1999, SSA’s Office of the Inspector General analyzed a sample of SSN misuse allegations and determined that 81.5 percent of such allegations related directly to identity theft. The analysis covered a statistical sample of 400 allegations from a universe of 16,375 SSN misuse allegations received by the SSA/OIG Fraud Hotline from October 1997 through March 1999. The analysis did not cover the other category presented in table 6, that is, allegations of program fraud with SSN misuse potential. Recently, in about mid-February 2001, SSA/OIG implemented procedures to routinely and specifically determine which Fraud Hotline allegations of SSN misuse involve identity theft. For example, as table 7 shows, for 7 months (Mar. through Sept.) in 2001, the Fraud Hotline received 25,991 identity theft allegations, which are arrayed among 16 categories. 
As shown, the most prevalent identity theft category involved credit cards, which accounted for 9,488 allegations, or almost 37 percent of the total identity theft allegations. The next highest category—about 4,600 employment-related allegations—usually involved illegal aliens, according to SSA/OIG officials. During this 7-month period, the number of identity theft allegations per month increased about 40 percent, from 3,028 in March 2001 to 4,258 in September 2001.

Department of Justice Law Enforcement Components

Regarding Department of Justice law enforcement actions (e.g., number of investigations, arrests, and prosecutions), we obtained identity theft-related statistics from the Executive Office for U.S. Attorneys (EOUSA) and the Federal Bureau of Investigation (FBI). For fiscal years 1996 through 2000, EOUSA provided us with statistics on the number of cases filed under federal statutes related to identity fraud. As indicated in table 8:

- The number of cases filed under 18 U.S.C. § 1028 reflects year-to-year increases and more than doubled, from 314 cases in 1996 to 775 cases in 2000.
- The number of cases filed under 18 U.S.C. § 1029 reflects a general decrease, and the most recent figure—703 cases in 2000—is considerably lower than the 924 cases filed in 1996.
- The number of cases filed under 42 U.S.C. § 408 reflects a general increase. The number of cases filed increased substantially in 1998, when compared with the previous 2 years, and the number filed in 2000 was more than double the number filed in 1996.

Also, in reference to table 8, EOUSA staff made the following clarifying comments: A given case may be counted under more than one of the three U.S. Code sections because a defendant could have been charged with multiple offenses. However, in table 8’s statistics for case filings, there is no double counting of multiple charges of the same Code section, nor of filings under the subsections of that section. 
For instance, if a defendant was charged with two counts of violations under 18 U.S.C. § 1028(a)(7) in one case, the relevant statistics would still appear as only one case under the 18 U.S.C. § 1028 column in table 8. EOUSA has only limited statistical information available at the subsection level or the sub-subsection level for offenses charged under title 18 of the U.S. Code. Except for certain firearms statutes, the case management system requests that cases be recorded under the U.S. Code section only, not under the subsection or the sub-subsection, although this additional information sometimes is provided. Thus, these “subsection-level or sub-subsection-level statistics” have great potential for underreporting. Also, cases involving identity theft or identity fraud are charged under a variety of different statutes, and many criminals who commit identity theft are charged under statutes relating to these defendants’ other crimes. With these significant limitations or caveats in mind, EOUSA data indicated that, of the 568 cases filed under 18 U.S.C. § 1028 in fiscal year 1999, the number of cases with at least one charge of a violation of subsection (a)(7) recorded in the EOUSA data base was 24 cases. And, for fiscal year 2000, of the 775 cases filed under 18 U.S.C. § 1028, the number of cases with at least one charge of a violation of subsection (a)(7) recorded in the EOUSA data base was 68 cases. At the time of our review, FBI officials told us that the agency did not have the capability to determine the number of statistical accomplishments (e.g., arrests and convictions) that have resulted from 18 U.S.C. § 1028(a)(7). The officials noted, however, that the agency was in the process of developing a system to track the number of cases that included identity theft as a component. 
Moreover, regarding case statistics that were presently available, the FBI officials offered the following contextual considerations: Even if accomplishments from investigative cases could be isolated or tracked to the 1998 act, these cases would not necessarily be an accurate reflection on this law. For instance, an open issue would be to determine whether these cases would have been prosecuted using other equally beneficial statutes or not at all. Cases involving identity theft or identity fraud typically are classified by the crimes committed using the stolen or fraudulent identity—classified, for example, as bank fraud, wire fraud, or mail fraud. In other words, an individual may not always be charged with identity theft but instead be charged with the substantive violations carried out using the stolen identity. As other possibilities, a prosecutor may allow an individual who was charged with identity theft to plead guilty to other criminal conduct charges. With these considerations in mind, the FBI provided us with statistics showing the agency’s accomplishments under identity theft-related statutes. Table 9 summarizes the statistics for fiscal years 1996-2001. As indicated, much of the FBI’s enforcement activity involved bank fraud cases—an area of longstanding responsibility for the FBI. Regarding Department of the Treasury law enforcement actions, we obtained identity theft-related statistics from the Internal Revenue Service (IRS), the Secret Service, and the Financial Crimes Enforcement Network (FinCEN). According to the IRS, many questionable refund schemes involve an element of identity theft or identity fraud. However, IRS emphasized that not all questionable refund schemes involve this element. For instance, IRS noted that many false returns are filed by the true taxpayer using false income documents (e.g., W-2s, W-2Gs, and Forms 4852 and 1099) with inflated income and/or withholding. 
IRS-Criminal Investigation does not routinely keep statistics as to how many questionable refund schemes and questionable returns involve some element of identity theft or identity fraud. Thus, IRS told us that it is difficult to determine the specific number of schemes, refunds, claims, and dollar losses that are solely attributable to identity theft or fraud. With these caveats in mind and in response to our request, IRS-Criminal Investigation’s Office of Refund Crimes developed statistics to reflect its “best effort to show the prevalence of identity fraud.” That is, for calendar years 1996 through 2000, IRS provided us with statistics covering all questionable refund schemes that IRS classified as involving a “high frequency” of identity theft or identity fraud—schemes very likely to have elements of this type of crime (see table 10). In 2000, for example, IRS detected a total of 3,085 such schemes, consisting of 35,185 questionable tax returns that claimed a total of $783 million in refunds. According to IRS officials, the agency’s detection efforts in that year prevented payment of $757 million. According to the Secret Service, the vast majority of financial crimes involve the use of some sort of false identification, the use of another individual’s personal or financial identifiers, or the assumption of a false or fictitious identity. In explanation, Secret Service officials noted the following: Broadly speaking, from the perspective of law enforcement, identity theft can involve either “account takeover” or “identity takeover.” That is, such theft involves the use of personal information to (1) make unauthorized use of existing credit or other financial accounts or (2) establish new accounts, apply for loans, etc. Generally, the personal information often sought by criminals is information required to obtain goods and services on credit. Primary types of this information include names, dates of birth, and SSNs. 
With the proliferation of computers and increased use of the Internet, many identity thieves have used information obtained from company databases and Web sites. Identity theft is not typically a “stand alone” crime. Rather, identity theft is almost always a component of one or more crimes, such as bank fraud, credit card or access device fraud, or the use of counterfeit financial instruments. In many instances, an identity theft case encompasses several different types of fraud. In further response to our inquiry, Secret Service officials said that they believe that identity theft continues to occur at a seemingly increasing pace. The officials cautioned, however, that the incidence of identity theft is difficult to measure on the basis of available statistics (such as number of investigations or arrests) for a variety of reasons. Among others, the reasons cited were lack of reporting by victims, classification of identity theft in other crime categories (e.g., theft or forgery) or perhaps as a civil matter, and different levels of law enforcement (federal, state, and local) having concurrent jurisdiction with respect to many aspects of identity theft. Given these limitations, the officials suggested that any assessment of overall trends regarding identity theft perhaps should be based on statistics from FTC—the agency designated to be the primary point of contact for victims. Nonetheless, we obtained available statistics from the Secret Service regarding its identity theft-related cases for fiscal years 1998-2000 (see table 11). In interpreting these data, Secret Service officials noted that, in recent years, the agency has moved away from investigating “street crime” level offenders in the identity theft spectrum to targeting individuals and groups engaged in the systematic, large-scale pursuit of profits through the commission of various types of identity theft. 
That is, the agency is now focusing on high-dollar, community-impact cases that merit federal interest. Case statistics for fiscal years 1998-2000 reflect this shift in focus, according to Secret Service officials, who noted the following:

- The number of arrests decreased 28 percent from 1998 to 2000, and the number of cases closed dropped 37 percent.
- On the other hand, the average actual losses to victims in closed cases rose 71 percent from 1998 to 2000.
- The average fraud losses prevented rose 48 percent from 1998 to 1999 and rose an additional 101 percent from 1999 to 2000.

In April 1996, financial institutions were required to begin filing suspicious activity reports (SARs) to assist law enforcement in detecting and prosecuting violations of money laundering and other financial crimes. Recently, to “provide insights into the patterns of criminal financial activity associated with identity theft,” FinCEN analyzed SARs filed during the period April 1996 through November 2000—a total of 490,595 filings. Of this total, FinCEN’s analysis indicated that 1,030 SARs reported identity theft. Analysis of these 1,030 SARs, according to FinCEN’s June 2001 report, confirms “industry perceptions of increases in both the incidence of identity theft-based fraud and SAR reporting about the phenomenon.” Specifically, FinCEN noted the following:

- During January through December 1997, the first full year of required SAR reporting, 44 instances of identity theft—fewer than 4 per month—were reported.
- Recently, during January through November 2000, there were 617 SARs filed that reported identity theft, an average of 56 SARs per month.

Also, in its report, FinCEN noted—but did not elaborate or provide related statistics—that advanced technology (particularly the Internet) is proving to be a “powerful facilitator” of identity theft. According to the Postal Inspection Service: “Inspection Service identity theft investigations increased by 67 percent since last year. 
Identity theft occurs when mail is stolen for the personal information it contains, which criminals use to fraudulently order credit cards, checks or other financial instruments. Mail theft may go unreported—the thief looks for mail containing items such as a credit card payment, copies personal identifiers and credit card and bank account information, and reseals the envelope and returns it to the mailstream, often undetected. Checks and credit cards may then be ordered in the victim’s name. Private mailboxes at commercial receiving agencies … are often rented so the crook can receive the fraudulently obtained cards and checks anonymously.” “Credit card theft and identity theft are becoming increasingly intertwined as crimes involving the U.S. Mail. The U.S. Postal Inspection Service’s Credit Card Mail Security Initiative has brought various federal law enforcement agencies and credit card industry representatives together since 1992 to discuss loss and theft issues and develop solutions. Many of the identity theft issues related to credit card losses are currently being addressed by members of the initiative. … “On November 6, 1999, President Clinton announced the Know Fraud initiative, a partnership of several leading private and government agencies, including the U.S. Postal Inspection Service, to educate consumers about how to protect themselves from telemarketing and mail fraud. … Although work continues on the first Know Fraud initiative, plans are underway for a second one to launch in early 2001. 
Focusing on identity theft, the goal of the new effort is to deliver to every home in America prevention information that will raise awareness of this growing trend and provide consumers with protective tactics.” According to the Postal Inspection Service, the “Know Fraud” initiative is “the largest consumer protection effort ever undertaken, with postcards sent to 123 million addresses across America, arming consumers with common sense tips and guidelines …” Postal Inspection Service arrest statistics indicate that the agency has increased its focus on identity theft-related crime in recent years (see table 12). For instance, whereas the annual number of arrests was relatively constant during fiscal years 1996 through 1999, the year 2000 total (1,722 arrests) represents an increase of about 36 percent over the previous year. Furthermore, the total for partial-year 2001 (9 months) is higher than the year 2000 total. According to industry data, the dollar value of goods and services purchased by consumers in the United States was $6.8 trillion in the year 2000. General purpose credit cards—American Express, Diners Club, Discover, MasterCard, and Visa—were used to pay for 20.4 percent of these consumption expenditures. MasterCard and Visa accounted for about 76 percent of the U.S. card market share, based on first quarter 2001 data. Also, as members of the MasterCard and Visa associations, much of the banking industry is engaged in issuing credit cards, as well as offering checking accounts. This appendix discusses identity theft and the financial services industry in reference to three categories or aspects of cost—direct fraud losses, staffing and operating cost of fraud departments, and consumer confidence in online commerce (i.e., e-commerce through the Internet).
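The consumption and market-share figures above imply a rough dollar volume of card payments. The short calculation below is purely illustrative; all inputs are the figures cited in the text, and the derived totals are approximations:

```python
# Illustrative arithmetic from the industry figures cited above.
us_consumption_2000 = 6.8e12  # U.S. consumer purchases of goods/services, 2000
card_share = 0.204            # share paid with general purpose credit cards
mc_visa_share = 0.76          # MasterCard/Visa share of U.S. card market, Q1 2001

card_volume = us_consumption_2000 * card_share          # ~ $1.39 trillion
mc_visa_volume = card_volume * mc_visa_share            # ~ $1.05 trillion

print(f"Implied card volume:          ${card_volume / 1e12:.2f} trillion")
print(f"Implied MasterCard/Visa share: ${mc_visa_volume / 1e12:.2f} trillion")
```

These magnitudes help put the later fraud-loss figures, measured in the hundreds of millions of dollars, into perspective.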
Regarding identity theft-related direct fraud losses incurred by the financial services industry, we obtained information from (1) the American Bankers Association (ABA); (2) the two leading payment card associations, MasterCard and Visa; and (3) six credit card-issuing banks. In its 2000 bank industry survey on check fraud, the ABA reported that total check fraud-related losses in 1999—considering both actual losses ($679 million) and loss avoidance ($1.5 billion)—against commercial bank accounts reached $2.2 billion, which was twice the amount in 1997. Regarding actual losses, the report noted that the 1999 figure ($679 million) was up almost 33 percent from the 1997 estimate ($512 million). In 1999, according to ABA data shown in table 13, the percentages of total check fraud-related losses attributable to identity theft ranged from 56 percent at community banks to 5 percent at superregional/money center banks. To restate, at the high end of this range, community banks reported that 56 percent of their check fraud-related losses could be attributed to identity theft; and at the low end of the range, superregional/money center banks reported that 5 percent of their check fraud-related losses could be attributed to identity theft. As previously mentioned, the ABA reported that check fraud-related losses totaled $2.2 billion in 1999. However, the ABA’s report did not specifically disaggregate this total among the bank-size categories shown in table 13. In the same report, banks surveyed by the ABA between February and June 2000 identified the leading threats against deposit accounts anticipated in the next 12 months. The leading threat category cited by the surveyed banks involved counterfeit checks, and this category was closely followed by concerns regarding debit cards, identity theft (true name fraud), and the Internet.
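The ABA loss figures cited above are internally consistent, as a quick illustrative calculation shows (the dollar amounts are those reported in the survey; nothing here is new data):

```python
# Reported ABA check-fraud figures, as cited in the text above.
actual_losses_1999 = 679e6   # actual check fraud losses, 1999
loss_avoidance_1999 = 1.5e9  # losses avoided, 1999
actual_losses_1997 = 512e6   # actual losses, 1997 estimate

# Actual losses plus loss avoidance give the ~$2.2 billion total cited.
total_1999 = actual_losses_1999 + loss_avoidance_1999

# Growth in actual losses, 1997 to 1999 ("up almost 33 percent").
growth_actual = (actual_losses_1999 - actual_losses_1997) / actual_losses_1997

print(f"Total 1999 exposure: ${total_1999 / 1e9:.1f} billion")
print(f"Growth in actual losses, 1997-1999: {growth_actual:.1%}")
```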
The percentages of surveyed banks that ranked identity theft among the top three threats against deposit accounts, as shown in table 14, ranged from a low of 48.4 percent of community banks to a high of 75.8 percent of regional banks. MasterCard and Visa are separate associations owned by numerous financial institutions that issue payment cards (credit cards and debit cards) bearing the MasterCard name and the Visa name, respectively. As such, MasterCard and Visa rarely receive complaints of fraud directly from consumers. Rather, the fraud-related statistics that MasterCard and Visa report represent an aggregation of data reported by each association’s members. Association members report fraud-related statistics in various categories, such as account takeovers, fraudulent applications, lost cards, stolen cards, never-received cards, counterfeit cards, and mail order/telephone order fraud. Regarding these various categories, MasterCard and Visa use very similar (although not identical) definitions regarding which of these categories constitute identity theft, as opposed to other types of fraud. According to a MasterCard official, the identity theft-related categories are account takeovers and some portion of fraudulent applications. A Visa official said that two categories—account takeovers and fraudulent applications—are considered by Visa to be identity theft because the other forms of fraud do not necessarily require the “stealing” of another person’s identifying information. In response to our inquiry, MasterCard and Visa officials provided us with information on their respective association’s fraud-related dollar losses for calendar years 1996 through 2000. However, the officials considered this information to be proprietary and requested that we aggregate the data in our reporting rather than present association-specific data. We agreed. The associations’ aggregated data are presented in table 15. As indicated, for domestic (U.S.) 
operations, the associations’ identity theft-related fraud losses—defined as involving account takeovers and fraudulent applications—rose from $79.9 million in 1996 to $114.3 million in 2000, an increase of about 43 percent. Much of this increase is reflected in the account-takeover losses, which increased more than twofold, from $33.0 million in 1996 to $68.2 million in 2000. An official of one association said that this increase probably could be attributed to “inconsistencies in reporting among member banks.” The official added that consumers are not really at risk because a zero liability policy protects them from financial loss. Furthermore, table 15 shows that the associations’ identity theft-related losses as a percentage of total fraud losses were relatively constant at about 9 to 10 percent during 1996 through 2000. In further perspective, for most of these years, table 15 shows that the associations’ total fraud losses represented less than 1/10th of 1 percent of U.S. member banks’ sales volume. Generally, the fraud losses are borne by the financial institution that issued the payment card. In some instances, although reportedly rare, retail merchants may bear such losses if the merchants do not follow proper procedures for verifying use of the card. To reiterate, regarding direct fraud losses involving payment cards, we contacted MasterCard and Visa only. We did not obtain information about losses involving other general-purpose cards (American Express, Diners Club, and Discover), which account for about 25 percent of the market. Also, we did not obtain information about losses involving merchant-specific cards issued by retail stores. Furthermore, we did not obtain information from various entities, such as insurance companies and securities firms, which may incur identity theft-related costs. An official of one of the associations told us that identity theft is not perceived to be one of the biggest fraud-related problems faced by member banks.
The official said that many banks have experience in dealing with identity fraud, including using new technology to detect where such fraud may be taking place. Additionally, to help reduce the incidence of fraud, the official noted that the association provides guidance or recommendations for member banks and merchants to follow, as well as a number of specific computer models and authorization and verification systems that help reduce fraud and identity theft. Officials of six credit card-issuing banks that we contacted said their financial institutions track fraud in several categories. But, we found some inconsistency among these institutions on the definition of credit card fraud associated with identity theft. For example, some financial institutions did not consider “friendly fraud” or “family fraud” in their fraud losses to be related to identity theft. However, two categories of identity theft-related fraud used by all six banks were (1) fraudulent applications and (2) account takeovers. Five of the six banks had data on identity theft losses involving fraudulent applications and account takeovers. These losses ranged from 18 percent to 42 percent of the respective bank’s overall fraud losses. However, bank officials acknowledged that identity theft could also be associated with lost or stolen payment cards or other categories of losses—and, thus, the reporting of losses for only two categories (fraudulent applications and account takeovers) may understate total identity theft-related losses. Officials from one of the six banks said that the amount of losses is not large, and the bank considered these losses to be within an acceptable level of risk. Also, the officials noted that the bank experienced more fraud from unauthorized use—that is, use of lost or stolen cards and forged checks—than from account takeovers and fraudulent applications. 
Officials from a second bank said that their bank’s largest source of credit card fraud was from lost or stolen credit cards. The officials added that the next most common form of fraud involved counterfeit credit cards—a type of fraudulent activity that occurred worldwide and often was perpetrated by organized crime rings. The third most common form of fraud—and more difficult to detect—was account takeover. The root cause of identity theft associated with account takeover, according to these bank officials, involved the misuse of SSNs acquired from another source. Also, this bank reported having experienced an increase in the number of cases of friendly fraud—that is, incidents whereby a victim’s family member or acquaintances obtained or tried to obtain credit in the victim’s name. For example, in a divorce situation, a spouse may have opened an account in his or her partner’s name without consent. Officials from a third bank said that the growth of fraud losses was correlated to business growth. However, the officials noted that the bank’s losses associated with identity theft had remained relatively constant during the last few years. Officials at a fourth bank said that the bank does not normally track identity theft. Rather, the bank tracked the number of fraudulent applications denied due to the suspicion of fraud. Regarding this category, the bank officials did not consider the number of incidents to be significant in relationship to the bank’s overall customer base; however, the officials noted that cases often occurred in “waves.” Moreover, the officials said that they were concerned with larger losses, which resulted from fraudulent activities perpetrated by organized crime rings. At a fifth bank, officials said that roughly 90 percent of the bank’s identity theft cases involved fraudulent applications, and the remainder represented account takeovers. 
The officials explained that, when the bank focuses on combating one form of fraudulent activity, other or replacement manifestations often begin to appear. For instance, the officials noted that fraud had increased from credit cards not received in the mail. In addition, the officials said they believed that fraudulent activity associated with organized crime rings was on the rise. At the sixth bank, officials provided no additional information about the institution’s fraud losses. The following sections discuss the staffing and cost of the fraud departments of banks and CRAs. The sections present information based on (1) ABA’s 2000 bank industry survey on check fraud, (2) responses from officials of various banks we contacted, and (3) our interviews with officials of the three national CRAs. In its 2000 bank industry survey on check fraud, the ABA reported that the amount of resources that banks devoted to check fraud prevention, detection, investigation, and prosecution varied as a direct function of bank size. For instance, as table 16 shows for check fraud-related operating expenses (not including actual losses) in 1999, over two-thirds (69.5 percent) of the 446 community banks that responded to ABA’s survey each incurred less than $10,000 for such expenses; about one-third (32.0 percent) of the 103 responding mid-size banks each incurred such expenses ranging from $50,000 to $249,999; about one-fourth (24.2 percent) of the 33 responding regional banks each incurred such expenses ranging from $500,000 to $999,999, and another one-fourth of the regional banks each incurred such expenses ranging from $1 million to $4.9 million; and about one-fourth (27.3 percent) of the 11 responding superregional/money center banks each incurred more than $10 million for such expenses. The six banks discussed earlier also responded to our questions about fraud department staffing.
Bank officials expressed concern about the growing sophistication of identity thieves, and the officials indicated that their respective banks had taken a number of proprietary steps for preventing, detecting, and responding to fraud. The officials told us that fraud department staffing had increased over the last few years, both in relationship to the growth in business portfolios and to address increasing fraud losses. However, the officials said that they could not specifically quantify the fraud department costs associated with identity theft. Rather, the information provided to us can be summarized as follows: At four of the six banks, officials reported that fraud department staffing had expanded, with designated or specialized staff devoted to dealing with fraud prevention. The officials noted that their respective bank’s fraud prevention procedures were dynamic and proprietary. At a fifth bank, officials told us that about 30 percent of the fraud unit’s employees were associated with addressing identity theft. The officials added that the unit’s staffing had increased over the last 5 years, in line with the bank’s portfolio growth. However, the officials also said they had witnessed an increase in fraudulent applications—concurrent with an increase in Web site usage—and had taken additional preventative steps to address such applications. At the sixth bank, officials told us that fraud department staffing had remained relatively stable over the last 5 years. Moreover, in addition to fraud department staffing, various bank officials indicated that there were other indirect costs associated with addressing identity theft. Examples of such costs included the following: To assist in correcting credit bureau files, banks devote resources to communicating with customers and CRAs. Banks use resources in cooperating with law enforcement agents who investigate identity theft crimes. 
And, expenses are incurred in attempts to locate perpetrators, bill them, and collect owed amounts. Banks may incur lost opportunity costs in not being able to extend credit to legitimate customers. Officials from each of the three national CRAs told us that the number of fraud-assistance staff—that is, staff to answer telephone calls and correspondence from individuals who believed that they may have been the victims of fraud—had increased in recent years. In obtaining staffing information from the three national CRAs, we agreed to report the information in a manner not specifically identifiable to the respective agency. Thus, in the following sections, we refer to these sources as “Agency A,” “Agency B,” and “Agency C.” Of the three, Agency A and Agency C had a call center devoted specifically to fraud assistance. Agency B’s call center handled both fraud-related and nonfraud-related matters, such as various types of consumer inquiries and disputes. An Agency A official said that the number of staff in the agency’s fraud assistance department doubled in recent years, increasing from 50 in 1997 to 103 in 2001. In discussing the reasons for this increase, the official explained that greater public awareness of identity theft has resulted in a much larger volume of calls from consumers to the CRA. Now, the official opined, virtually any person who has a wallet or purse stolen will call a CRA as a protective measure against becoming a fraud victim. Moreover, the official said that Agency A’s operating policy is to have a sufficient number of fraud-assistance staff available so that consumers will be able to speak with someone when they first telephone. In contrast, the official noted that the other two CRAs have an automated response system for handling the initial telephone inquiries from consumers. Thus, the official said that Agency A has a greater number of fraud-assistance staff than the other two CRAs. 
According to this official, Agency A’s staffing costs for the fraud assistance department were about $3.3 million in 2000. Adding administrative costs to the staffing costs, the official said that the department’s total operating costs for the year exceeded $4 million. Agency B officials provided us with information that was more general or less specific than that provided by Agency A. That is, the officials said that: Agency B’s fraud-assistance staffing has increased in recent years and remained relatively steady at 30 to 40 fraud specialists in 2000 and 2001. The annual cost of maintaining a staff of fraud-assistance specialists is in the range of “several million dollars.” Also, in discussing Agency B’s automated response system for handling initial inquiries, the officials said that the system has the advantage of being available to consumers 24 hours a day, 7 days a week. The officials explained Agency B’s system as follows: When a consumer telephones the CRA, the automated system gives a menu of various options, one of which is a fraud-assistance option. If a consumer selects this option, Agency B automatically places a 90-day security alert on the consumer’s file. In addition to being provided a credit file report, the consumer is given a toll-free telephone number that the consumer can call to discuss—with Agency B fraud-assistance staff—the report and any related fraud concerns. In calling and discussing his or her situation, the consumer may choose to make a “victim statement,” which will have the effect of extending the fraud alert to a period of 7 years. Upon adding the victim statement, an updated credit report will be sent to the consumer, and two more reports will be provided at 45-day intervals. 
According to these officials, another advantage of Agency B’s automated response system for handling a consumer’s initial inquiry is that the credit file reports give the consumer a basis for subsequently having a more informed discussion with the agency’s fraud-assistance staff. Finally, the officials noted that the free reports—which total over 1 million annually— represent a significant but easily overlooked cost of identity fraud to CRAs. An Agency C official provided us with information on the approximate costs and hotline staffing levels for the fraud component of the agency’s Consumer Services Center. The official told us that the number of fraud operators at the Consumer Services Center had increased in the 1990’s but has remained relatively constant at about 30 to 50 individuals since 1997. The official said that the cost of salaries for these operators has been approximately $900,000 per year, with annual adjustments to reflect inflation and merit increases. Also, the official noted that other administrative expenses—such as computer costs, rent payments, etc.— would raise the cost higher. However, the official did not quantify these expenses. In describing Agency C’s inquiry process, the official explained that consumers could place temporary or 6-month fraud alerts on their credit files by (1) using the agency’s main automated toll free number and choosing the fraud option or (2) directly calling the fraud hotline and speaking with a fraud operator. According to this official: After temporary fraud alerts have been initiated, the consumers are automatically opted out of preapproved offers of credit. Additionally, the consumers receive free copies of their credit files. Upon reviewing their credit files, the consumers can contact a fraud operator and place a longer-term (2- to 7-year) fraud alert on their files. 
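The fraud-alert steps just described can be summarized as a simple state model. The sketch below is purely illustrative: the class, method, and field names are our own invention, not any CRA's actual system, and the durations (a temporary 6-month alert, then an optional 2- to 7-year alert) follow the text above:

```python
from dataclasses import dataclass

# Hypothetical model of the consumer fraud-alert steps described above.
@dataclass
class CreditFile:
    alert_months: int = 0              # current fraud-alert duration, in months
    opted_out_of_offers: bool = False  # preapproved credit offers blocked?
    free_reports_sent: int = 0         # free credit file copies provided

    def place_temporary_alert(self) -> None:
        """Consumer selects the fraud option on the automated hotline."""
        self.alert_months = 6              # temporary 6-month alert
        self.opted_out_of_offers = True    # automatic opt-out, per the text
        self.free_reports_sent += 1        # free copy of the credit file

    def place_long_term_alert(self, years: int) -> None:
        """After reviewing the file, the consumer asks a fraud operator
        to place a longer-term alert."""
        if not 2 <= years <= 7:
            raise ValueError("longer-term alerts run 2 to 7 years")
        self.alert_months = years * 12

# Walk one consumer through both steps.
f = CreditFile()
f.place_temporary_alert()
f.place_long_term_alert(7)
assert f.alert_months == 84 and f.opted_out_of_offers
```

Modeling the process this way simply makes explicit the two-stage structure (automated temporary alert, then operator-assisted long-term alert) that both agencies' descriptions share.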
Consumer Confidence in Online or E-Commerce

The following sections present (1) overview information about Internet fraud, (2) credit industry views regarding identity theft and consumer confidence in using e-commerce, and (3) statistical data showing continued growth in e-commerce. “While scams online are both new and old, free standing and combinations, the Internet itself creates a whole new set of problems and opportunities for law enforcement and for criminals. There are millions of people online, with thousands of new users every day. … There are now more e-mails sent every day than regular mail, including junk mail. Once a consumer goes online, he or she is bombarded with unsolicited commercial e-mail (spam) advertising everything from legitimate services to fraudulent investment schemes. Web sites abound offering both legitimate and fraudulent products and services.” “The Internet has dramatically altered the potential occurrence and impact of identity theft. First, the Internet provides access to identifying information through both illicit and legal means. The global publication of identifying details that previously were available only to a select few increases the potential for misuse of that information. Second, the ability of the identity thief to purchase goods and services from innumerable e-merchants expands the potential harm to the victim through numerous purchases. The explosion of financial services offered on-line, such as mortgages, credit cards, bank accounts and loans, provides a sense of anonymity to those potential identity thieves who would not risk committing identity theft in a face-to-face transaction.” “Internet fraud, in all of its forms, is one of the fastest-growing and most pervasive forms of white-collar crime. … Regrettably, criminal exploitation of the Internet now encompasses a wide variety of securities and other investment schemes, online auction schemes, credit-card fraud, financial institution fraud, and identity theft.
… “A January 2001 study by Meridien Research … reports that with the continuing growth of e-commerce, payment-card fraud on the Internet will increase worldwide from $1.6 billion in 2000 to $15.5 billion by 2005. The Securities and Exchange Commission staff reports that it receives 200 to 300 online complaints a day about Internet-related securities fraud. Foreign law enforcement authorities also regard Internet fraud as a growing problem. Earlier this year, the European Commission reported that in 2000, payment-card fraud in the European Union rose by 50 percent to $553 million in fraudulent transactions, and noted that fraud was increasing most in relation to remote payment transactions, especially on the Internet. Similarly, the International Chamber of Commerce’s Commercial Crime Service reported that nearly two-thirds of all cases it handled in 2000 involved online fraud.” “Electronic commerce is vital to the U.S. economy and to the prospects for our continued economic growth. … There is no doubt that electronic commerce is a large, growing and permanent new channel for the sale of goods and services to consumers. The Department of Commerce estimates, for example, that online retail sales grew from less than $5.2 billion in the fourth quarter of 1999 to almost $8.7 billion in the same quarter one year later. Sales projections for the electronic commerce market range from $35 billion to $76 billion by the year 2002. By any measure, this counts as explosive growth … “Visa has taken steps to promote consumer confidence in this new channel of commerce. These steps include … zero liability policy for unauthorized use of our payment cards. … This zero liability policy applies to online transactions as well as offline transactions. Customers are protected online in exactly the same way as when they are using their cards at a store, ordering from a catalog by mail, or placing an order over the phone.
In case of a problem, Visa provides 100 percent protection against unauthorized card use, theft, or loss. If someone steals a payment card number from one of our cardholders while the cardholder is shopping, online or offline, our customers are fully protected—they pay nothing for the thief’s fraudulent activity.” During our review, of the six credit card-issuing banks we contacted, five responded to our questions about the impact of identity theft on consumer confidence in using e-commerce. These responses can be summarized as follows: One of the five banks had recently conducted a focus group to assess the issue of consumer confidence in using e-commerce. Bank officials told us that most of the focus group participants expressed no concern about identity theft or fraud in conducting online banking or e-commerce transactions. In the credit card issuer’s experience, individuals over age 55 were more leery of online banking and e-commerce and were not as familiar with the technology. A second bank’s officials told us that many of the bank’s customers had an irrational fear of using e-commerce, or using credit cards for Internet transactions. The officials explained that, when fraud occurs, many customers were absolutely convinced the Internet was the root cause of the compromised information and the subsequent fraud, regardless of whether or not the Internet was actually used in the fraudulent transaction. A third bank had conducted focus groups on fraud and found that the largest concern voiced was identity theft. However, according to bank officials, this concern was not a major barrier to using e-commerce. At the fourth and fifth banks, officials did not have any information about consumers’ fears of identity theft from using online banking services or engaging in e-commerce transactions. However, officials from one of these banks noted that there was little basis in fact for such concerns. 
The officials explained that information transmitted to and from financial institutions for banking and other online transactions is encrypted; and, while there have been instances in which such information has been compromised, its misuse for identity theft purposes has been rare. Despite concerns about security and privacy, the use of e-commerce by consumers has steadily grown. For example, in the 2000 holiday season, consumers spent an estimated $10.8 billion online, which represented more than a 50-percent increase over the $7 billion spent during the 1999 holiday season. Furthermore, in 1995, only 130 banks and thrifts had a Web site; but, the number had grown to 4,600 by 2000. Similarly, in 1995, only one bank had a Web site capable of processing financial transactions; but, by 2000, a total of 1,850 banks and thrifts had Web sites capable of processing financial transactions. The growth in e-commerce could indicate greater consumer confidence but could also result from the increasing number of people who have access to and are becoming familiar with Internet technology. According to an October 2000 Department of Commerce report, Internet users comprised about 44 percent (approximately 116 million people) of the U.S. population in August 2000. This was an increase of about 38 percent from 20 months prior. According to Commerce’s report, the fastest growing online activity among Internet users was online shopping and bill payment, which grew at a rate of 52 percent in 20 months. In short, as more consumers become familiar with online products and services, e-commerce is likely to gain greater acceptance as a channel of commerce, and usage can be expected to increase further. Victims of identity theft may experience a range of costs that encompass nonmonetary harm as well as monetary losses. This appendix presents information about both of these cost categories.
As mentioned previously, from its establishment in November 1999 through September 2001, the FTC Clearinghouse received a total of 94,100 complaints from identity theft victims. In response to our request, FTC staff provided us with information about the nonmonetary harm and the monetary losses (out-of-pocket expenses) reported by the complainants. The extent of the harm reported to the FTC depends upon the victims’ knowledge at the time that they call the FTC. Victims call the FTC at all stages of their experience with identity theft. Some victims call shortly after they discover the theft of their identities, while others may not hear about the FTC’s hotline and not call until months after they discover the crime. In addition, some victims discover the misuse of their identity soon after the misuse begins, while others do not discover it until years later. Moreover, the thieves may continue to misuse identities long after victims contact the FTC. For these reasons, the amount of harm that the victims are aware of and report at the time that they call the FTC may not be the full extent of the harm they have experienced or will experience. As table 17 shows, of the 94,100 identity theft complaints reported to the FTC during November 1999 through September 2001, about 14 percent involved reports of nonmonetary harm. By far the most prevalent type of nonmonetary harm cited by consumers—mentioned in over 7,000 complaints—was “denied credit or other financial services.” The second leading type of nonmonetary harm—cited in about 3,500 complaints—was “time lost to resolve problems.” In nearly 1,300 complaints, identity theft victims alleged that they had been subjected to “criminal investigation, arrest, or conviction.” As table 18 shows, FTC data indicated that 2,633 complaints received from November 1999 through September 2001 involved dollar amounts that victims reported as having been lost or paid as out-of-pocket expenses as a result of identity theft. 
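The Clearinghouse complaint counts reported in this appendix can be cross-checked against one another. The tally below is illustrative only; every input is a count cited in the surrounding text:

```python
# Cross-check of FTC Clearinghouse complaint counts cited in this
# appendix (November 1999 through September 2001).
total_complaints = 94_100
no_loss_data = 77_063  # complaints with no out-of-pocket loss data
zero_loss = 14_404     # complaints reporting zero out-of-pocket losses
some_loss = 2_633      # complaints reporting some out-of-pocket expenses

# The three loss categories account for every complaint received.
assert no_loss_data + zero_loss + some_loss == total_complaints

print(f"No loss data: {no_loss_data / total_complaints:.1%}")  # about 82%
print(f"Some loss:    {some_loss / total_complaints:.1%}")     # about 2.8%
```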
While most financial institutions do not hold victims liable for fraudulent debts, victims may incur significant expenses in trying to restore their good names and financial health. According to FTC staff, for example, victims routinely incur costs for document copies, notary fees, certified mail, and long-distance calls. Some consumers have tax refunds or other benefits withheld pending resolution of the identity theft crime. In addition, some consumers have hired attorneys. Other consumers reported that they chose to pay the fraudulent debt because of difficulties encountered in trying to have the debt absolved. The FTC Clearinghouse had no data regarding direct out-of-pocket monetary losses (if any) for 77,063 (about 82 percent) of the 94,100 complaints received during November 1999 through September 2001. Also, for another 14,404 complaints, FTC data indicated that the individual victims reported zero dollar losses, that is, no out-of-pocket expenses. On the other hand, the data indicated that hundreds of complaints—2,633 in total during the 23-month period—reported at least some out-of-pocket expenses, with 207 of the complaints each alleging losses above $5,000 and another 203 complaints each alleging losses above $10,000. Out-of-pocket expenses may increase after victims report to the FTC and take further steps to resolve identity theft-related problems. From its database of identity theft victims, after obtaining permission from the individuals, FTC staff provided us with the names and telephone numbers of 10 victims, whom we contacted to gain a better understanding of their experiences. As presented in table 19: In all 10 cases, the perpetrator used the victim’s personal information to engage in identity takeover activities. Varying by case, such fraudulent activities ranged from the opening of new charge accounts and cellphone accounts to obtaining employment and filing tax returns in the victim’s name. 
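The three complaint categories reported above partition the 94,100-complaint total exactly. A quick arithmetic check, using only the counts given in the text:

```python
# FTC Clearinghouse complaint counts, November 1999 through September 2001 (from the text).
total_complaints = 94_100
no_loss_data = 77_063   # no data on out-of-pocket losses
zero_loss = 14_404      # reported zero out-of-pocket expenses
some_loss = 2_633       # reported at least some out-of-pocket expenses

# The three categories account for every complaint received.
assert no_loss_data + zero_loss + some_loss == total_complaints

share_no_data = no_loss_data / total_complaints * 100
print(f"Complaints with no loss data: {share_no_data:.0f}%")  # about 82 percent
```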
Also, in 2 of the 10 cases, the perpetrator engaged in account takeover activities; that is, the perpetrator made charges on existing accounts. Nine of the 10 victims reported experiencing both nonmonetary and monetary harms. Regarding nonmonetary harm, various victims reported being harassed by collection agencies, expending time to clear their names, having difficulty obtaining credit, and losing productivity at work. Furthermore, one victim reportedly was the subject of an arrest warrant, based on speeding tickets issued to the perpetrator, and another victim was taken into police custody for a drug-related search stemming from the perpetrator’s activities. Regarding monetary harm, the victims generally reported that out-of-pocket expenses were relatively low. However, two victims reported losing a job and wages (with losses of about $6,000 and $2,500 per victim, respectively), and two victims reported an inability to obtain tax refunds ($1,000 and $814, respectively). In addition to the types of harm presented in table 19, several of the victims expressed to us feelings of “invaded privacy” and “continuing trauma” that likely would affect their lives for quite some time. In particular, such “lack of closure” was cited if elements of the crime involved more than one jurisdiction and/or if the victim had no awareness of any arrest being made. For instance, two victims reported being able to file a police report in their state of residence but were unable to do so in other states where the perpetrators committed fraudulent activities using the stolen identities. Also, 2 of the 10 victims told us they were aware that the perpetrator had been arrested. In a May 2000 report, two nonprofit advocacy entities—the California Public Interest Research Group (CALPIRG) and the Privacy Rights Clearinghouse—presented findings based on a survey (conducted in the spring of 2000) of 66 identity theft victims who had contacted these organizations. 
The May 2000 report noted that victims of identity theft “face extreme difficulties attempting to clear the damaged credit, or even criminal record, caused by the thief.” According to the report, the following findings illustrate the obstacles that victims encounter when trying to resolve their identity theft cases: The victims spent 175 hours, on average, actively trying to resolve their identity theft-related problems. Less than half (45 percent) of the respondents believed that their cases had been fully resolved; these respondents reported an average of 23 months to reach resolution. The other survey respondents (55 percent) reported that their unresolved cases had already been open, on average, for 44 months. Not counting legal fees, victims reported spending between $30 and $2,000 on costs related to their identity theft. The average reported loss was $808, but most victims estimated spending $100 for out-of-pocket costs. The majority (76 percent) of the surveyed cases involved “true name fraud”—which occurred, for instance, when the imposter opened new credit accounts in the name of the victim. The number of fraudulent new accounts opened per victim ranged from 1 to 30, and the average was 6 new accounts. The May 2000 report stated that these findings may not be representative of the plight of all victims. Rather, the report noted that the findings should be viewed as “preliminary and representative only of those victims who have contacted our organizations for further assistance (other victims may have had simpler cases resolved with only a few calls and felt no need to make further inquiries).” Later, at a national conference, the Director of Privacy Rights Clearinghouse expanded on the results of the May 2000 report. For instance, regarding the 66 victims surveyed, the Director noted that one in six (about 15 percent) said that they had been the subject of a criminal record because of the actions of an imposter. 
Furthermore, the Director provided additional comments substantially as follows: Unlike checking for credit report inaccuracies, there is no easy way for consumers to determine if they have become the subject of a criminal record. Indeed, victims of identity theft may not discover that they have been burdened with a criminal record until, for example, they are stopped for a traffic violation and are then arrested because the officer’s checking of the driver’s license number indicated that an arrest warrant was outstanding. “This growing crime has a devastating effect on financial institution customers and a detrimental impact on the banks. Four of the top five consumer complaints regarding identity theft involve financial services—new credit card accounts opened, existing credit card accounts used, new deposit accounts opened, and newly obtained loans. Banks absorb much of the economic losses from bank fraud associated with the theft of their customers’ identities. Individuals who become victims of identity theft also pay, at a minimum, out-of-pocket expenses to clear their names and may spend numerous hours trying to rectify their credit records.” “Over the past five years, there has been a significant increase in crimes where criminals compromise personal identification data of victims, in order to commit identity theft. The information that falls into criminal hands includes name, date of birth, Social Security Number, banking account number, and other personal and financial information. “Victims of identity theft, like other crime victims, are made to feel personally violated. This is especially true in light of the vicious cycle of events that typically follows the perpetration of this crime. Imagine for a moment, a recently married couple just starting out in their life together. 
They work hard and save enough money to make a down payment on their first new home only to be denied a mortgage because of a negative payment history reflected in a credit report—information that they knew nothing about. The trauma that this type of fraud causes its innocent victims is unimaginable. Moreover, once the crime is discovered and reported, victims are left to fend for themselves in attempting to clear their credit history and good name. “Our unit has successfully conducted numerous investigations where perpetrators have used the personal information to not only obtain credit cards and personal loans, but also to purchase cars and homes. Although we in law enforcement garner some sense of satisfaction when we make arrests for these crimes, it is not enough when compared to the amount of time and energy a victim spends trying to undo the work of these criminals.” This appendix presents information about the cost of identity theft to the federal criminal justice system—that is, the cost associated with investigations, prosecutions, incarceration, and community supervision. Generally, we found that federal agencies do not separately maintain statistics on the person hours, portions of salary, or other distinct costs that are specifically attributable to cases involving 18 U.S.C. §1028(a)(7) and other criminal statutes that may be applicable to identity theft and fraud. Thus, as an alternative, some of the agencies provided us with average cost estimates based, for example, on white-collar crime cases—a category that covers financial crimes, including identity theft. Various Justice Department law enforcement agencies (e.g., the FBI), Treasury Department agencies (e.g., the Secret Service), and the Postal Inspection Service are responsible for investigating possible federal criminal violations in which identity theft or fraud is a factor. 
Also, the SSA’s Office of the Inspector General (OIG) may investigate possible identity theft and fraud cases where misuse or abuse of Social Security numbers (SSNs) is involved. Three of these agencies—the FBI, the Secret Service, and SSA/OIG—responded to our request for cost-related information, as discussed in the following sections. In response to our inquiry regarding the cost of investigating identity theft crimes, the FBI provided us with an estimate based on budget and workload data for the agency’s white-collar crime program for fiscal years 1998 to 2000. For this 3-year period, the FBI estimated that approximately $20,000 was the average cost of an investigative matter handled by the agency’s white-collar crime program. However, an FBI official noted that the agency does not have cost data related specifically to identity theft cases, and the official told us that the average-cost figure ($20,000) was not very meaningful given the following caveats: Using available data, the average cost of an investigative matter can be calculated in a number of different ways, none of which is perfect. Due to such imperfections, the validity of the $20,000 figure is highly questionable. For instance, the average cost figure does not capture the wide variance in the scope and costs of white-collar crime investigations. Some cases can be of short duration and involve only one FBI agent, whereas other cases can be very complicated, be ongoing for several years, and involve many agents. Also, it is questionable methodology for the FBI to apply the average cost of its white-collar crime investigations in general to identity theft cases specifically. Identity theft is rarely a stand-alone crime; that is, identity theft is frequently an element of bank fraud, wire fraud, and other types of white-collar or financial crimes. On the other hand, some white-collar or financial crimes, including some high-cost cases, may not involve elements of identity theft. 
However, the FBI’s information systems are not sufficiently coded to isolate identity theft-related budget and workload data within the white-collar crime program. We asked the Secret Service for an estimate of the average cost of investigating financial crimes that included identity theft as a component. The Secret Service responded that the agency does not track costs on a per-case basis and noted that the nature and variety of factors regularly present in common investigative scenarios do not lend themselves to accurate “average cost” tracking. The agency explained that variants affecting cost include, but are not limited to, the number of personnel assigned, the use of technical and surveillance assets, transcription and translation services, case-related travel (domestic and foreign), task force expenses, expenditures for investigative information and evidence, expenditures associated with undercover activities, and trial preparation. In summary, the Secret Service responded that its cases vary so much in their makeup that to put a figure on average cost is not meaningful. Nonetheless, recognizing these caveats, the Secret Service’s Management and Organization Division made its “best estimate of the average cost” of a financial crimes investigation conducted by the Secret Service in fiscal year 2001. The resulting estimate was approximately $15,000. Secret Service officials noted that this estimate was for a financial crimes investigation and not specifically for an identity theft investigation. Also, the officials emphasized that, in the absence of specific guidelines establishing a standard methodology, average-cost figures provide no basis for making interagency comparisons. We asked SSA/OIG for an estimate of the average cost of investigating cases involving SSN misuse. SSA/OIG officials responded that the agency’s information systems do not record time spent by function to permit making an accurate estimate of what it costs to work these types of cases. 
Furthermore, the officials commented substantially as follows: Identity theft poses greater costs to the public and to financial institutions than to law enforcement. The cost of identity theft to law enforcement is a moving target. The cost can be small or large, depending on what priority SSN misuse is given in any law enforcement organization. In fact, SSA/OIG probably could dedicate its entire workforce to SSN misuse cases and still not scratch the surface of this issue. Finally, the SSA/OIG officials noted that the SSA/OIG’s appropriations for fiscal year 2001 totaled about $69 million; however, the officials reiterated the impracticality of estimating how much of this amount was used for investigating cases of SSN misuse. Executive Office for U.S. Attorneys (EOUSA) officials said that the agency’s timekeeping system could not specifically isolate the cost of prosecuting identity theft cases. The officials noted, however, that such cases generally are categorized as white-collar crimes, as are other types of financial crimes. According to EOUSA: U.S. Attorney Offices handled a total of 13,720 white-collar crime cases in fiscal year 2000. This total includes all white-collar crime cases that U.S. Attorney Offices dealt with in any manner during the year. That is, the total includes cases that were closed in the year, cases that were opened in the year, and cases that were still pending at yearend. The total cost associated with the 13,720 white-collar crime cases handled was $157 million in fiscal year 2000. Thus, the estimated average annual cost of prosecuting a white-collar crime case was $11,443. EOUSA emphasized that this figure was derived using a broad, inexact methodology. Furthermore, EOUSA emphasized that the figure was only an estimate and that the actual cost could be higher or lower. 
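EOUSA's $11,443 figure follows directly from dividing the total cost by the caseload. A sketch of that calculation, using the numbers above:

```python
# EOUSA white-collar crime figures, fiscal year 2000 (from the text).
total_cost_fy2000 = 157_000_000  # dollars, across all white-collar crime cases handled
cases_handled = 13_720           # cases opened, closed, or pending during the year

avg_cost = total_cost_fy2000 / cases_handled
print(f"Average cost per white-collar crime case: ${avg_cost:,.0f}")  # about $11,443
```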
According to Bureau of Prisons (BOP) officials, federal offenders convicted of white-collar crimes generally are incarcerated in minimum-security correctional facilities. For fiscal year 2000, BOP officials told us that the cost of operating such facilities averaged $47.68 daily per inmate. Thus, on a monthly (30 days per month) and an annual basis (365 days per year), the respective cost figures would be $1,430 per inmate and $17,403 per inmate. Federal probation officers are responsible for the community supervision of federal offenders released from prison, as well as those placed on probation in lieu of a prison sentence. Each offender under supervision is assigned to a designated probation officer, whose responsibilities include (1) enforcing the conditions of supervision; (2) reducing the risk the offender poses to the community; and (3) providing the offender with access to treatment, such as substance abuse aftercare and mental health services. Offenders are typically supervised in the community for a period of 3 to 5 years. In response to our inquiry, the Administrative Office of the U.S. Courts (AOUSC) provided us with average daily cost data covering all federal offenders under supervision. The average daily cost reported for fiscal year 2000 ranged from $8.02 for regular supervision to $31.46 for supervision that involved electronic monitoring and substance abuse treatment. An AOUSC official told us that white-collar offenders— including those who committed identity theft and do not need contract services—probably would fall into the regular supervision category. For this category, the average daily cost of $8.02 equates to about $2,900 annually per offender. According to AOUSC, regular supervision cost is based on the national average salary and benefits of a U.S. probation officer, plus additional costs associated with management, administrative support, training, and overhead (e.g., automation, space, telephone service, and travel). 
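The per-inmate and per-offender figures above are straightforward extrapolations of the reported daily rates. A sketch (daily rates taken from the text; the 30-day month and 365-day year follow the conventions stated above):

```python
def daily_to_periods(daily_rate: float) -> tuple[float, float]:
    """Extrapolate a daily cost to monthly (30-day) and annual (365-day) totals."""
    return daily_rate * 30, daily_rate * 365

# BOP minimum-security incarceration, fiscal year 2000.
monthly, annual = daily_to_periods(47.68)
print(f"Incarceration: ${monthly:,.0f}/month, ${annual:,.0f}/year")  # $1,430 and $17,403

# AOUSC regular community supervision, fiscal year 2000.
_, supervision_annual = daily_to_periods(8.02)
print(f"Regular supervision: ${supervision_annual:,.0f}/year")  # ~$2,927, i.e., "about $2,900"
```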
To report identity theft, follow the steps below, as listed on the FTC’s identity theft Web site (www.ftc.gov/opa/2002/02/idtheft.htm).
1. Contact the fraud departments of each of the three credit bureaus and report the thefts.
2. For fraudulently accessed accounts, contact the security department of the appropriate creditor or financial institution.
3. File a report with your local police or the police in the community where the identity theft took place. Get the report number or copy of the report in case the bank, credit card company, or others need proof of the crime.
4. Call the ID Theft Clearinghouse toll free at 1-877-438-4338 to report the theft. The Identity Theft Hotline and the ID Theft Web site (www.consumer.gov/idtheft) give you one place to report the theft to the federal government and receive helpful information.
In addition to the above, David P. Alexander, Kay E. Brown, Heather T. Dignan, Nancy M. Eibeck, William Falsey, Debra R. Johnson, Shirley A. Jones, Harry Medina, Robert J. Rivas, Ronald J. Salo, and Donovan Wilson made key contributions to this report.

Identity theft involves "stealing" another person's personal identifying information, such as their Social Security number (SSN), date of birth, or mother's maiden name, and using that information to fraudulently establish credit, run up debt, or take over existing financial accounts. Precise, statistical measurement of identity theft trends is difficult for several reasons. Federal law enforcement agencies lack information systems to track identity theft cases. Also, identity theft is almost always a component of one or more white-collar or financial crimes, such as bank fraud, credit card or access device fraud, or the use of counterfeit financial instruments. Data sources, such as consumer complaints and hotline allegations, can be used as proxies for gauging the prevalence of identity theft. Law enforcement investigations and prosecutions of bank and credit card fraud also provide data. 
GAO found no comprehensive estimates of the cost of identity theft to the financial services industry. Some data on identity theft-related losses indicated increasing costs. Other data, such as staffing of the fraud departments of banks and consumer reporting agencies, presented a mixed or incomplete picture. Identity theft can cause victims severe emotional and economic harm, including bounced checks, loan denials, and debt collection harassment. The federal criminal justice system incurs costs associated with investigations, prosecutions, incarceration, and community supervision. 
DOD operates six geographic combatant commands, each with an assigned area of responsibility. Each geographic combatant command carries out a variety of missions and activities, including humanitarian assistance and combat operations, and assigns functions to subordinate commanders. Each command is supported by a service component command from each of the services, as well as a theater special operations command. The Departments of the Army, Navy, and Air Force have key roles in making decisions on where to locate their forces when they are not otherwise employed or deployed by order of the Secretary of Defense or assigned to a combatant command. In addition, the military departments allocate budgetary resources to construct, maintain, and repair buildings, structures, and utilities and to acquire the real property or interests in real property necessary to carry out their responsibilities. All of these entities play significant roles in preparing the detailed plans and providing the resources that the combatant commands need to execute operations in support of their missions and goals. EUCOM’s area of responsibility covers all of Europe, large portions of Asia, parts of the Middle East, and the Arctic and Atlantic Oceans. The command is responsible for U.S. military relations with NATO and 51 countries. EUCOM also supports the activities of more than 100,000 military and civilian personnel across 10.7 million square miles of land and 13 million square miles of ocean (see fig. 1). DOD’s facilities are located in a variety of sites that vary widely in size and complexity; some sites are large complexes containing many facilities to support military operations, housing, and other support facilities while other sites can be as small as a single radar site. DOD also organizes multiple sites under a single installation. 
For example, the Air Force base in Kaiserslautern, Germany, comprises 45 sites that vary in terms of the number of personnel, number of buildings, and square footage. This base includes large sites like Ramstein Air Base and smaller sites like the Breitenbach radar site. To develop common terminology for posture planning, DOD has identified three types of installations that reflect the large-to-small scale of DOD’s enduring overseas posture: main operating bases, forward operating sites, and cooperative security locations. Main operating bases are defined as overseas installations with relatively large numbers of permanently stationed operating forces and robust infrastructure that provide enduring family support facilities. DOD defines forward operating sites as scaleable installations intended for rotational use by operating forces, rather than supporting permanently stationed forces. Because they are scaleable, they may have a large capacity that can be adapted to provide support for combat operations, and therefore, DOD populations at these locations can vary greatly, depending on how they are used at any given time. Cooperative security locations are overseas installations with little or no permanent U.S. military presence, maintained with periodic service, contractor, or host-nation support. As with forward operating sites, DOD populations at these locations can vary greatly, depending on how they are used at any given time. The number of sites located in EUCOM’s area of responsibility has decreased as the post-Cold War security environment has changed; in 1990, the Army alone had over 850 sites throughout Europe. In the past decade, the total number of sites in EUCOM’s area of responsibility continued to decline, falling to 350 for all services in 2009. A hierarchy of national and defense guidance informs the development of DOD’s global posture. 
The National Security Strategy, issued by the President at the beginning of each new Administration and annually thereafter, describes and discusses the worldwide interests, goals, and objectives of the United States that are vital to its national security, among other topics. The Secretary of Defense then provides corresponding strategic direction through the National Defense Strategy. Furthermore, the Chairman of the Joint Chiefs of Staff provides guidance to the military through the National Military Strategy. On specific matters, such as global defense posture, DOD has developed new guidance in numerous documents, principally the 2008 Guidance for Employment of the Force and the 2008 Joint Strategic Capabilities Plan. The Guidance for Employment of the Force consolidates and integrates planning guidance related to operations and other military activities, while the Joint Strategic Capabilities Plan implements the strategic policy direction provided in the Guidance for Employment of the Force and tasks combatant commanders with developing theater campaign, contingency, and posture plans that are consistent with the Guidance for Employment of the Force. The theater campaign plan translates strategic objectives to facilitate the development of operational and contingency plans, while the theater posture plan provides an overview of posture requirements to support those plans and identifies major ongoing and new posture initiatives, including current and planned military construction requirements. Figure 2 illustrates the relationships between these national and DOD strategic guidance documents. DOD guidance does not require EUCOM to include comprehensive information on posture costs in its theater posture plan and, as a result, DOD lacks critical information that could be used by decision makers and congressional committees as they deliberate posture requirements and the associated allocation of resources. 
DOD guidance requires that the theater posture plans prepared by each combatant command provide information on the inventory of installations in a combatant commander’s area of responsibility and estimates of the funding required for military construction projects in their theater posture plans, such as the $1.2 billion in military construction funding projected to build a new hospital in Landstuhl, Germany. However, this guidance does not specifically require, and therefore EUCOM does not report, the total cost to operate and maintain DOD’s posture in Europe. Our analysis shows that operation and maintenance costs are significant. Of the approximately $17.2 billion obligated by the services to support DOD’s posture in Europe from 2006 through 2009, approximately $13 billion (78 percent) was for operation and maintenance costs. The military services project that operation and maintenance funding requirements will continue at about $3.2 billion annually for fiscal years 2011-2015. However, DOD has several efforts underway in areas such as planning for missile defense sites and determining the number and composition of Army brigades in Europe that could impact estimates of these future costs. DOD is drafting guidance to require more comprehensive cost estimates for ongoing, current, or planned initiatives and rough order of magnitude costs for newly proposed posture initiatives. These proposed revisions, however, will not require commanders to report operation and maintenance costs unrelated to posture initiatives at existing installations in the theater posture plan. Our prior work has demonstrated that comprehensive cost information— including accurate cost estimates—is a key component that enables decision makers to make funding decisions, develop annual budget requests, and to evaluate resource requirements at key decision points. 
Until DOD requires the combatant commands to compile and report comprehensive cost data, DOD and Congress will have limited visibility into the cost of posture in Europe, which may impact their ability to make fully informed funding and affordability decisions. The 2008 Joint Strategic Capabilities Plan requires that theater posture plans prepared by each combatant command provide information on each installation in a combatant commander’s area of responsibility, to include identifying the service responsible for each installation, the number of military personnel at the installation, and estimates of the funding required for military construction projects. In accordance with these reporting requirements, EUCOM’s 2009 and 2010 theater posture plans provided personnel numbers, identified service responsibilities, and specified posture initiatives on installations within EUCOM’s area of responsibility, and estimated the funding required for proposed military construction projects for the current year and projected military construction costs over the next 5 years. However, the Joint Strategic Capabilities Plan does not specifically require the combatant commands to report estimates for other types of costs, such as costs associated with the operation and maintenance of DOD installations, in the theater posture plan. DOD’s operation and maintenance funding provides for a large number of expenses. With respect to DOD installations it provides for base operations support and sustainment, restoration, and modernization of DOD’s buildings and infrastructure. Base operations support funding can be used to pay for expenses such as recurring maintenance and repair, utilities, and janitorial expenses. Sustainment, restoration, and modernization funding is used to provide resources for maintenance and repair activities necessary to keep facilities in good working order. 
According to EUCOM officials, since operation and maintenance costs are not required to be reported by the Joint Strategic Capabilities Plan, EUCOM’s 2009 and 2010 theater posture plans do not contain estimates for the funding required to operate and maintain DOD’s installations or the approximately 310 other sites that comprise the services’ posture in EUCOM’s area of responsibility. To obtain a more comprehensive estimate of the cost of DOD’s posture in Europe, we gathered obligations data from the Army, Navy, and Air Force related to military construction, family housing, and operation and maintenance appropriations for installations in the EUCOM area of responsibility and found that military construction and family housing obligations accounted for about one-fifth of the services’ total obligations against those appropriations from fiscal years 2006 through 2009. In total, over the period, the military services obligated about $17.2 billion to build, operate, and maintain installations in Europe, of which $3.8 billion (22 percent) was for military construction and $13.4 billion (78 percent) was for operation and maintenance of these installations. Of this $13.4 billion, more than 50 percent was obligated for base operations support services which include hiring security forces to protect Army bases and obtaining utilities and janitorial services for installations (for a more detailed breakdown of costs at installations in Europe see fig. 6 in app. II). On average, the services reported they obligated approximately $4.3 billion annually for installations in EUCOM’s area of responsibility (see fig. 3). Our analysis of the data provided by the military services projects that operation and maintenance funding requirements will continue at about $3.2 billion annually for fiscal years 2011-2015. 
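The obligation shares and the annual average cited above can be recomputed from the component totals. A sketch (dollar figures in billions, taken from the text; fiscal years 2006 through 2009 span four years):

```python
# Service obligations for installations in EUCOM's area of responsibility,
# fiscal years 2006-2009, in billions of dollars (from the text).
milcon_and_housing = 3.8  # military construction and family housing
operation_maint = 13.4    # operation and maintenance
years = 4

total = milcon_and_housing + operation_maint  # $17.2 billion
print(f"Total obligations: ${total:.1f} billion")
print(f"MilCon/housing share: {milcon_and_housing / total:.0%}")  # 22%, about one-fifth
print(f"O&M share: {operation_maint / total:.0%}")                # 78%
print(f"Average per year: ${total / years:.1f} billion")          # about $4.3 billion
```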
However, DOD has several efforts underway—in areas such as reviewing posture requirements and reducing overhead costs, planning for missile defense sites, and determining the number and composition of Army brigades in Europe— that may affect the precision of these projections. Reviewing Posture and Other Initiatives: DOD is reviewing its posture worldwide and has begun a series of efficiency initiatives focused on reducing overhead costs. These efforts include an examination of headquarters like those in Europe. Specifically, the Secretary of Defense has questioned why the Army, Navy, and Air Force service components in EUCOM are commanded by four-star general or flag officers, which can increase costs, given the support generally required for a four-star command. Also, the Army is continuing its efforts to consolidate its posture in Europe, including an estimated $240 million requested for further upgrades to its facilities in Wiesbaden, Germany. Depending on the results of the DOD-wide global posture study and efficiency reviews, EUCOM and the services may have to revise their posture plans. Planning for European Ballistic Missile Defense: DOD has altered its plan to build missile defense sites in Poland and the Czech Republic in favor of a phased approach that relies on a combination of land- and sea-based defenses. DOD anticipates implementing this approach through 2020; however, DOD has not estimated the life-cycle cost of the phased adaptive approach for Europe. Keeping Army Brigades in Europe: In September 2010, we reported that delays in decisions associated with the number and composition of U.S. Army forces in Europe will impact posture costs. Prior to the 2010 Quadrennial Defense Review, the Army had planned to return two of four brigade combat teams stationed in Europe to the United States in fiscal years 2012 and 2013, which would have saved millions annually in overseas stationing costs by allowing the closure of two installations in Germany. 
However, these plans are on hold pending the results of ongoing DOD assessments of defense posture. Army analysis has concluded that the long-term incremental costs of keeping four brigades in Europe (rather than two) will be between $1 billion and $2 billion for fiscal years 2012 through 2021, depending on the assumptions used. To improve DOD’s reporting on global posture costs, we recommended in July 2009 that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to develop a requirement and appropriate guidance for constructing an estimate of total global defense posture costs that reflects the basic characteristics of a credible cost estimate as defined in GAO’s Cost Estimating and Assessment Guide. In response to our recommendation, DOD officials told us they are revising the Joint Strategic Capabilities Plan to require additional cost information in future theater posture plans. According to officials in the Office of the Under Secretary of Defense (Comptroller) and the Joint Staff, the revised guidance would require the combatant commands to provide (1) current and projected full posture project costs for the next 5 years for planned posture initiatives (including construction, furniture, fixtures, equipment, and any operation and maintenance costs) and (2) the rough order-of-magnitude cost (including one-time and recurring costs, and cost to complete) for posture change proposals. As of November 2010, the revisions to this guidance had not been completed or approved within DOD. Although these proposed revisions would provide more comprehensive information on the cost to complete posture initiatives, they do not fully address our recommendation to compile total costs, because they will not require the combatant commands to report, for each installation, the operation and maintenance costs unrelated to posture initiatives alongside military construction costs.
These operation and maintenance costs comprise much of the financial obligations needed to support DOD’s overseas installations. By focusing this new guidance only on posture initiatives, DOD is overlooking the operation and maintenance costs of installations and is not considering them when making posture decisions. However, these costs have been substantial; DOD has obligated about $3.4 billion annually for operation and maintenance in EUCOM’s area of responsibility, as shown in figure 3. Our prior work has demonstrated that comprehensive cost information is a key component in enabling decision makers to set funding priorities, develop annual budget requests, and evaluate resource requirements at key decision points. We have developed a cost estimation process that, when followed, should result in reliable and valid cost estimates that management can use to make informed decisions. Furthermore, guidance from the Office of Management and Budget has highlighted the importance of developing accurate cost estimates for all agencies, including DOD. DOD and EUCOM officials acknowledge that providing more comprehensive cost data in the theater posture plans could be beneficial; EUCOM officials told us that having more comprehensive cost information would provide a better context for evaluating posture requirements. However, EUCOM officials said that they would have to rely on the service component commands to provide this information for inclusion in future theater posture plans. Until DOD requires the combatant commands to compile and report comprehensive costs for established locations, DOD and Congress will be limited in their ability to weigh the costs and benefits of existing posture and posture initiatives and to make fully informed decisions on funding DOD’s posture in Europe.
EUCOM has developed an approach to compile posture requirements and prepare annual theater posture plans, but it does not have clearly defined methods for evaluating posture alternatives or routinely incorporating the views of interagency stakeholders. To support posture planning, EUCOM assigned primary responsibility for developing its theater posture plan to its Strategy Implementation Branch and established an Executive Council and supporting Integration Team. The council and integration team provide a forum for discussing posture issues that may cross service lines, such as issues concerning sites that are used by multiple services but supported by funding from a single service. In addition, EUCOM has undertaken a series of actions to work with the service component commands in developing its theater posture plan, such as holding a posture planning conference. Although the approach EUCOM has taken to determine posture requirements has fostered greater communication between key stakeholders and improved its ability to resolve conflicting views on posture issues, the approach has not been clearly defined and codified in command guidance. Nor does it specifically provide for comprehensive analysis of costs and benefits, because the combatant commander has not been required to include such analysis in developing the theater posture plan. In addition, the Interagency Partnering Directorate, which was established by the EUCOM commander to improve interagency coordination for the command, did not fully participate in developing the 2010 posture plan, because its role in posture planning has not been defined. As a result of these weaknesses in EUCOM’s posture planning approach, the command is limited in its ability to weigh the cost of posture against the strategic benefits it provides and may not be fully leveraging interagency perspectives as it defines future posture requirements.
In response to planning guidance established in the 2008 Guidance for Employment of the Force, EUCOM assigned primary responsibility to its Strategy Implementation Branch for developing the command’s theater posture plan and for coordinating outreach to the service components. In January 2009, the command also established the European Posture Executive Council—which includes one-star general or flag officer representatives from the command directorates, the service component commands, and the services’ installation management organizations—to focus on posture issues, including assessing strategy, prioritizing posture requirements, and determining the feasibility of implementing planned posture. According to EUCOM officials, the European Posture Executive Council has provided a forum for coordinating input from the service component commands and for discussing and adjudicating posture issues that may cross service lines, such as issues concerning sites that are used by multiple services but supported by funding from a single service. To support the Executive Council, EUCOM established the European Posture Integration Team, a group of action officers that functions as a steering group for the council. The EUCOM Deputy Commander has also requested that the Strategy Implementation Branch develop a process to provide the component commands with a long-term vision for the sites and functional capabilities needed to build partner capacity and meet other operational requirements over the next 10-15 years. According to EUCOM officials, the Deputy Commander wanted a method to provide the military services and the service component commands with a foundation for developing specific military construction programs and projects and to assist the service components’ long-term plans to gain efficiencies by consolidating existing sites.
The Strategy Implementation Branch identified the development of the theater posture plan as the best vehicle through which EUCOM’s vision for its posture could be communicated and coordinated with the service component commands. According to EUCOM officials, the development of the 2010 Theater Posture Plan began with a February 2010 EUCOM Posture Conference, which provided a forum for the EUCOM staff, the service components, and DOD organizations outside of EUCOM (such as other combatant commands, the Office of the Secretary of Defense, and the Joint Staff) to discuss EUCOM posture and EUCOM’s role in supporting national and regional strategic objectives. This conference was followed by a meeting of the European Posture Integration Team, discussions with other DOD organizations, and small group meetings among EUCOM staff. These meetings culminated in a Long Term Theater Posture Strategy conference, chaired by the EUCOM Deputy Commander, which included the EUCOM staff and service component deputy commanders. This conference included discussions of EUCOM’s posture planning assumptions—such as the status of the defense budget—and posture planning tenets—such as the need to develop posture plans in collaboration with other geographic combatant commands. Additional steps taken to refine the posture plan included discussions with the Executive Council and reviews by various directorates within the command. The resulting 2010 EUCOM theater posture plan presents a long-term posture view, which will facilitate near-term posture discussions among the EUCOM staff and service components. Specifically, it details the force structure and infrastructure capabilities and requirements EUCOM needs to accomplish the programs, activities, and tasks outlined in the Theater, Regional, and Functional Campaign Plans; Contingency Plans; and EUCOM Directorate, Component, and Special Operations Command Europe supporting plans.
Included in the plan are overarching posture planning assumptions and tenets, which are to be used as the basis for discussions held by the EUCOM Posture Executive Council. The theater posture plan also describes the current strategic context and conveys how EUCOM posture is linked to and supports strategic objectives. The theater posture plan informs the development of military service plans, the budgeting process, and DOD’s internal global defense posture planning efforts, as well as external reports on DOD’s posture. Although EUCOM and the service components have taken these positive steps to identify posture requirements and develop the theater posture plan, the process being used to develop the plan has been ad hoc, and EUCOM officials stated they have not yet codified this process in command guidance. In addition, the roles of the Executive Council and Integration Team have not been clearly laid out in guidance. To provide some clarity regarding these roles, the command is currently drafting an instruction that would assign the European Posture Executive Council primary responsibility for facilitating consensus on posture issues among EUCOM and the service components. We were provided an early draft of this instruction and found that it included criteria for selecting posture issues that should be deliberated within the Executive Council and established a process for service components to submit posture issues to the European Posture Executive Council and the European Posture Integration Team. While these are positive steps, the draft instruction did not provide comprehensive guidance on the process or steps involved in developing the theater posture plan.
During the course of our work, EUCOM officials acknowledged that more comprehensive guidance describing the planning process, key steps involved, and roles and responsibilities of stakeholders would be necessary to institutionalize and ensure consistency in annual planning activities. They stated they were considering expanding the draft guidance to address these issues. As of December 2010, however, the instruction was still in draft and had not been approved. While EUCOM’s steps to date have improved its ability to obtain service component command input to the theater posture plan and provided a forum to consider posture issues, it has not yet developed a method to routinely analyze the costs and benefits of posture alternatives at the combatant command level as posture requirements are developed. As discussed earlier, current DOD guidance on theater posture plans does not require EUCOM to collect or report the total costs associated with DOD installations in Europe. Furthermore, this guidance does not require the combatant commands to analyze the costs of alternative courses of action when developing the theater posture plan or provide guidance on the types of cost analysis that should be completed. As for benefit analysis, the EUCOM theater posture plan makes reference to benefits gained from existing posture or those that may result from implementing proposed posture requirements. However, these benefits are often based on qualitative judgments on how requirements may assure allies, build partner capacity, or support operations in neighboring commands. The theater posture plan does not identify quantitatively comparable benefits or ways to measure those benefits, such as logistical improvements or shorter flying distances, nor does it apply operational metrics, such as specific measures of EUCOM’s ability to move forces through the region. 
Without comprehensive cost data and an objective way to measure benefits, EUCOM does not yet have what it needs to routinely analyze the costs and benefits of posture alternatives, and the command may therefore be missing opportunities to gain efficiencies in DOD’s posture. For example, U.S. Navy Europe officials told us they had identified excess capacity in some of their current posture locations and were considering alternative courses of action to reduce posture costs. However, before they took steps to reduce their posture to gain greater efficiencies, Navy officials wanted to determine whether other military services could use that excess Navy capacity to meet another service’s posture requirements. Only through their specific outreach efforts to other services were they able to identify a potential Air Force requirement that could be satisfied with the Navy’s location. These Navy officials commented that evaluations of posture at the combatant command level could potentially lead to further opportunities to gain greater efficiencies in posture investments made by the military services. Our work has shown that decision makers should complete comparative analyses of competing options that consider not only life-cycle costs but also quantifiable and unquantifiable benefits. This evaluative information helps them make decisions about the programs they oversee—information that tells them whether, and in what important ways, a program is working or not working, and why. In addition, DOD and Army guidance related to economic analyses to support military construction projects or the acquisition of real property indicates that reasonable alternatives should be considered when contemplating such new projects.
For example, DOD Instruction 7041.3, which applies to decisions about acquisition of real property, indicates that such analyses should address alternatives that consider the availability of existing facilities and the estimated costs and benefits of the alternatives, among other factors. Officials from EUCOM’s Strategy Implementation Branch stated that the theater posture planning process is a new and emerging process driven by recent changes to the Guidance for Employment of the Force and the Joint Strategic Capabilities Plan. While they agreed with our assessment that total posture costs should be part of any analysis of alternative courses of action, they stated that the EUCOM command staff would have to rely on the service component commands to complete this type of analysis. EUCOM officials stated that, unless the Joint Strategic Capabilities Plan were to require this additional cost information, EUCOM would have difficulty obtaining it from the military services. Although the EUCOM Commander has identified building partner capacity as his top priority—a mission that generally requires close coordination with other U.S. government agencies—the command has not clearly defined specific steps to obtain input from interagency stakeholders as posture plans are developed. The 2010 Quadrennial Defense Review suggests that building partner capacity, with efforts to improve the collective capabilities and performance of DOD and its partners, will be a key mission area supporting the objective of rebalancing the force. In March 2010, the EUCOM commander, in written testimony provided to the House and Senate Armed Services Committees, indicated that building partnership and partner capacity is the command’s highest priority.
DOD recognizes that building partner capacities and developing global defense posture require close collaboration with allies and partners abroad and with key counterparts at home, principally the civilian departments responsible for diplomacy and development. In addition, our prior work demonstrates that leading organizations involve stakeholders as they develop plans and requirements. Including stakeholders early and often can test and provide critical feedback on the validity of the assumptions made during a planning process. To enhance EUCOM’s ability to coordinate with other government agencies, the EUCOM commander established the Interagency Partnering Directorate in October 2009. As of November 2010, the directorate comprised approximately 30 staff, 6 of whom were representatives from the Departments of State, Energy, and the Treasury; Immigration and Customs Enforcement; Customs and Border Protection; and the Drug Enforcement Administration. According to the Deputy Director, discussions are underway to add representatives from the Department of Justice and the U.S. Agency for International Development. Despite the priority given to building partner capacity, and the recognized need to collaborate closely with non-DOD agencies and organizations to plan for and conduct those missions, the Interagency Partnering Directorate was not integral to the development of the 2010 EUCOM Theater Posture Plan. According to a senior directorate official, the directorate was not fully involved in the development of the theater posture plan because the organization was relatively new and was still trying to determine how it could best plug into the various planning and other functions within the command.
Similarly, a Strategy Implementation Branch official involved in the development of the theater posture plans commented that although EUCOM has been successful in bringing interagency officials into the command and has included the Interagency Partnering Directorate on the Executive Council, it has not defined how the interagency representatives can best participate in ongoing posture planning activities. According to that official, EUCOM has not defined how it will routinely coordinate with the interagency community or how the interagency representatives can best support ongoing posture planning efforts. As a result, EUCOM officials involved in posture planning may not have full visibility into the activities of non-DOD agencies and organizations that could utilize DOD infrastructure, and the interagency community may not be fully aware of opportunities to leverage existing DOD facilities. The EUCOM Commander has not issued guidance that clarifies the roles and responsibilities of the Interagency Partnering Directorate in posture planning or establishes a process through which interagency perspectives can be routinely obtained as posture plans are developed. Without such guidance, EUCOM is limited in its ability to draw on the expertise of DOD’s interagency partners when developing its posture plans and may miss opportunities to fully leverage its posture investments to support a whole-of-government approach to missions and activities for building partner capacity. The nation’s long-term fiscal challenges have led DOD to examine the cost of its operations, including costs associated with its overseas network of infrastructure and facilities. DOD and EUCOM officials are taking positive steps to improve their posture planning efforts, but actions to date do not fully address posture cost and interagency issues.
DOD is in the process of revising its Joint Strategic Capabilities Plan, but the draft revisions do not require combatant commanders to include comprehensive information on the cost to maintain existing locations, or to address the need for analyzing the cost and benefits of posture alternatives. Without further revisions to the Joint Strategic Capabilities Plan to address this lack of focus on the total cost of posture and analysis of alternatives, DOD’s posture planning process and reports will continue to lack complete information on the financial commitments and funding liabilities associated with DOD’s posture, and potential opportunities to obtain greater cost efficiencies may not be identified. In addition, since EUCOM is taking steps to address posture matters and is developing guidance for identifying and resolving posture issues within the command, it has an opportunity to use this guidance to clearly define and codify a process for how the theater posture plan is to be drafted, to establish approaches to collect and analyze comprehensive cost information and address affordability issues, and to regularly obtain the perspectives of relevant agencies throughout the posture planning process. Without such guidance, EUCOM will remain limited in its ability to analyze posture alternatives and collaborate with interagency partners when developing its posture requirements. Such guidance would allow EUCOM to develop a more informed understanding of the potential impacts of posture requirements and to set priorities among competing investments before asking the department to expend resources or Congress to appropriate needed funds. 
To provide for more comprehensive information on the cost of posture and analysis of posture alternatives as future theater posture plans are developed, we recommend that the Secretary of Defense direct the Chairman, Joint Chiefs of Staff, to revise the Joint Strategic Capabilities Plan to (1) require theater posture plans to include the cost of operating and maintaining existing installations and to estimate the costs associated with initiatives that would alter future posture and (2) provide guidance on how the combatant commands should analyze the costs and benefits of alternative courses of action when considering proposed changes to posture. To ensure that EUCOM clearly defines a process for developing its theater posture plan, including compiling posture costs, considering affordability, and regularly obtaining the perspectives of relevant agencies throughout the posture planning process, we recommend that the Secretary of Defense, through the Chairman of the Joint Chiefs of Staff, direct the EUCOM Commander to take the following three actions:

Define the roles and responsibilities of the European Posture Executive Council and Integration Team in the posture planning process and development of the theater posture plan.

Develop a process through which interagency perspectives can be obtained throughout the posture planning process and the development of the theater posture plan.

Issue guidance to codify the EUCOM posture planning process once the above steps have been taken.

In written comments on a draft of this report, DOD generally agreed with all of our recommendations. DOD’s response appeared to acknowledge that understanding the full cost of posture is an important consideration as DOD deliberates decisions on current and future posture requirements, and the actions it has taken or plans to take should provide a greater understanding of DOD posture costs. However, we believe some additional steps are warranted to fully address our recommendations.
Technical comments were provided separately and incorporated as appropriate. The department’s written comments are reprinted in appendix III. DOD agreed with our recommendation that the Joint Strategic Capabilities Plan be revised to require the combatant commanders to include the cost of operating and maintaining existing installations and estimate the costs of its initiatives in future theater posture plans, but its proposed actions are not fully responsive to our recommendation. In its response, the department stated that it recognizes that the costs associated with operating and maintaining overseas facilities are an important consideration in the decision-making process, and that the current draft 2011 Joint Strategic Capabilities Plan requires that theater posture plans include operation and maintenance costs for current and planned posture initiatives. The department also indicated that the combatant command should include in the theater posture plan operation and maintenance costs when they are known. In instances where operation and maintenance costs are not known but required for oversight and decision making, DOD stated that it will require the military services to provide the needed data. While the proposed actions would be positive steps, the department’s plan to include operation and maintenance costs when they are known—or require additional data only when needed for decision making—could result in fragmented cost information. Therefore, we continue to believe that DOD should revise the Joint Strategic Capabilities Plan to require posture plans to include the cost of operating and maintaining existing installations, even when those costs are unrelated to a specific posture initiative. 
In response to our recommendation that the Joint Strategic Capabilities Plan be revised to provide guidance on how the combatant commands should analyze the costs and benefits of alternative courses of action when considering proposed changes to posture, DOD agreed, stating that the department uses four cost/benefit criteria in evaluating posture change proposals and that these four criteria should be used by the combatant commanders in analyzing alternative courses of action. Identifying the criteria that should be used in analyzing alternative courses of action is important, but the absence of detailed guidance within the Joint Strategic Capabilities Plan itself on how those criteria should be applied by the combatant commanders could lead to inconsistent application of the criteria, making it difficult for decision makers to evaluate alternatives. Therefore, we believe that DOD needs to take the additional step to revise the Joint Strategic Capabilities Plan to provide guidance on how the combatant commands should apply the criteria to analyze the costs and benefits of alternative courses of action. Regarding our recommendation that the EUCOM Commander define the roles and responsibilities of the European Posture Executive Council and European Posture Integration Team in the posture planning process and development of the theater posture plan, DOD agreed, stating that EUCOM’s Theater Posture Plan defines the roles and responsibilities of the Posture Executive Council and Posture Integration Team, and provided additional specifics, which were consistent with the information contained in our report. In addition, in response to our fifth recommendation, DOD agreed to incorporate those roles and responsibilities in command guidance. Therefore, if fully implemented, we believe DOD’s actions should meet the intent of our recommendation. 
DOD agreed with our recommendation that the Secretary of Defense, through the Chairman of the Joint Chiefs of Staff, direct the EUCOM Commander to develop a process to obtain interagency perspectives throughout the posture planning process and development of the theater posture plan. DOD stated that such a process currently exists and is documented in EUCOM’s Theater Posture Plan, which states that the EUCOM Posture Executive Council coordinates with interagency partners through the Interagency Partnering Directorate and the EUCOM Civilian Deputy to the Commander. As discussed in this report, we acknowledge the EUCOM initiative to establish the Interagency Partnering Directorate, and we modified the report to clarify that the EUCOM Posture Executive Council includes the Interagency Partnering Directorate. However, as we also point out in the report, this coordination approach was not fully effective in the development of the 2010 Theater Posture Plan; directorate officials were still trying to determine how best to participate in various planning activities; and a EUCOM Strategy Implementation Branch official believed the command had not defined how the interagency representatives can best support ongoing posture planning efforts or routinely coordinate with the interagency community. Therefore, we believe that EUCOM needs to take the additional steps to establish a process through which interagency perspectives can be routinely obtained throughout the posture planning process, and to institutionalize that approach in the posture planning guidance that, as of December 2010, was still in draft form. DOD also agreed with our recommendation that the Secretary of Defense, through the Chairman of the Joint Chiefs of Staff, direct the EUCOM Commander to issue guidance to codify the EUCOM posture planning process.
In its comments, DOD noted that the EUCOM Theater Posture Plan and draft command directive provide roles, responsibilities, and guidance for posture development while also identifying EUCOM-specific procedures that enable EUCOM to complete a variety of tasks. As we reported, we reviewed EUCOM’s 2010 Theater Posture Plan as well as an early draft of EUCOM’s Directive 56-24 and found that they included criteria for selecting posture issues that should be deliberated within the executive council and established a process for service components to submit posture issues to the executive council and the integration team. However, neither document provided comprehensive guidance on the process or steps involved in developing the theater posture plan and EUCOM’s posture requirements, or a process through which interagency perspectives can be routinely obtained. Consequently, we believe that EUCOM needs to take the additional steps of finalizing this guidance and modifying its contents so that it addresses these weaknesses. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 19 days from the date of this letter. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1816 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the extent to which U.S.
European Command (EUCOM) estimates and reports the total cost of the Department of Defense’s (DOD) installations in its theater posture plan, we collected information by interviewing and communicating with officials in the Office of the Under Secretary of Defense for Policy, the Under Secretary of Defense (Comptroller), the Deputy Under Secretary of Defense (Installations and Environment), and the Joint Staff; the Departments of the Army, the Navy, and the Air Force; EUCOM; and the Army, Navy, and Air Force component commands and the installation management entities for the Army and Navy service components within EUCOM. Additionally, we reviewed documentation, including the 2009 and 2010 DOD Global Defense Posture Reports to Congress, including the sections addressing posture costs; sections of the 2008 EUCOM Theater Campaign Plan; sections of the 2009 and 2010 EUCOM Theater Posture Plans; and departmental guidance and directives on command functions, campaign planning, overseas force structure changes, and global defense posture management. We also reviewed budget documentation, including the military construction appropriations component of the President’s Budget request for fiscal years 2006-2011. Furthermore, we issued three separate data requests asking for obligations and requirements data on military construction appropriations and operation and maintenance appropriations for fiscal years 2006-2015. We submitted the first data request to each of the three military services (Army, Navy, and Air Force) and the second and third data requests to the three service component commands in EUCOM’s area of responsibility, asking them to review and validate the data received through prior data requests. The first and second data requests were transmitted prior to the release of the Fiscal Year 2011 President’s Budget request, and the third was transmitted following the release of the Fiscal Year 2011 budget.
When we received these data, we aggregated and assessed them. To assess the reliability of received cost data, we reviewed data system documentation and obtained written responses to questions regarding the internal controls on the systems. We determined that the cost data we received were sufficiently reliable for the purposes of this report. To ensure the accuracy of our analysis, we used Statistical Analysis Software (SAS) when analyzing the data and had the programming code used to complete those analyses verified for logic and accuracy by an independent reviewer. Furthermore, we reviewed previous GAO reporting on overseas basing, military construction, the uses of cost information when making decisions about programs, and guidance on cost estimating and the basic characteristics of credible cost estimates. To determine the extent to which EUCOM has clearly defined methods for evaluating posture alternatives and including the views of interagency stakeholders, we reviewed departmental guidance and directives on command functions, campaign planning, overseas force structure changes, and global defense posture management. Additionally, we reviewed the 2008 EUCOM Theater Campaign Plan; the 2009 and 2010 EUCOM Theater Posture Plans; and the section of the 2010 Quadrennial Defense Review Report that addresses global defense posture matters. We also reviewed management practices established by the GAO Cost Estimating and Assessment Guide and DOD and military service guidance to inform our audit. 
Furthermore, we collected information by interviewing officials in the Office of the Under Secretary of Defense for Policy, the Under Secretary of Defense (Comptroller), the Deputy Under Secretary of Defense (Installations and Environment), and the Joint Staff; Department of the Army, Department of the Navy, and Department of the Air Force; EUCOM; the Army, Navy, and Air Force component commands and installation management entities for the Army and Navy service components within EUCOM; and Department of State. We reviewed DOD and service guidance on completing economic analyses and analyses of alternatives, and DOD guidance on collaborating with other government agencies. We also reviewed previous GAO reporting related to performance measurement and evaluation and challenges to interagency collaboration. We conducted this performance audit from October 2009 through December 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To obtain a more comprehensive estimate of the cost of the Department of Defense’s (DOD) posture in Europe, we requested information from the Army, Navy, and Air Force on military construction, family housing, and operation and maintenance appropriations for installations under their responsibility. The three service components responded with obligation figures for the three appropriation categories for the period fiscal year 2006 through fiscal year 2009. Additionally, the three service components provided estimated requirements for the three appropriation categories for the period fiscal year 2011 through fiscal year 2015. 
There are limitations associated with our data call including (1) the omission of supplementary funding provided to support ongoing operations; (2) the omission of costs reimbursed by tenant organizations, such as the Defense Logistics Agency, at installations in EUCOM’s area of responsibility; (3) the omission of personnel costs for troops stationed at installations in EUCOM’s area of responsibility; and (4) the omission of costs stemming from the presence of U.S. Africa Command. However, we discussed these limitations with officials in the Office of the Under Secretary of Defense for Policy, the Office of the Under Secretary of Defense (Comptroller), and the Office of the Under Secretary of Defense (Installations and Environment) and EUCOM officials and determined that the cost data we received were sufficiently reliable for the purposes of this report. Our analysis of obligations data indicates the Army constituted 52.2 percent of all obligations for the period fiscal year 2006 through fiscal year 2009, the largest proportion of the three service components. However, the Army has been faced with a significant challenge to meet the facility needs associated with several recent initiatives, such as the transformation of the Army’s force structure, the permanent relocation of thousands of overseas military personnel back to the United States, the implementation of base realignment and closure actions, and the planned increase in the Army’s active-duty end strength. Taken together, the Army estimated that these initiatives would result in a threefold increase in the Army’s military construction program for fiscal years 2006 through 2009. The Air Force and Navy comprised 38.1 percent and 9.8 percent of obligations, respectively. (See fig. 4.) 
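The share-of-obligations figures above reduce to a simple percentage calculation. The sketch below illustrates it in Python; the dollar amounts are hypothetical placeholders, chosen only so that the computed shares match the reported 52.2, 38.1, and 9.8 percent, and are not figures taken from the report.

```python
def obligation_shares(obligations):
    """Return each service's percentage share of total obligations,
    rounded to one decimal place as in the report's figures."""
    total = sum(obligations.values())
    return {svc: round(100 * amt / total, 1) for svc, amt in obligations.items()}

# Hypothetical military construction obligations, in billions of dollars.
milcon = {"Army": 2.0, "Air Force": 1.46, "Navy": 0.375}
print(obligation_shares(milcon))  # {'Army': 52.2, 'Air Force': 38.1, 'Navy': 9.8}
```

Note that rounded shares need not sum to exactly 100 percent (here they sum to 100.1), which is a common caveat when reporting rounded percentages.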
Furthermore, our analysis shows that Army operation and maintenance obligations for the same period totaled $6.5 billion, or 48.1 percent, of the approximately $13.4 billion in total operation and maintenance obligations. The Air Force and Navy comprised 42.3 percent and 9.6 percent of obligations, respectively. (See fig. 5.) In addition to the contact named above, Robert L. Repasky, Assistant Director; Brian Hackney; Joanne Landesman; Ying Long; Greg Marchand; Charles Perdue; Terry Richardson; Ophelia Robinson; Michael Shaughnessy; Amie Steele; Alex Winograd; and Ricardo Marquez made key contributions to this report. Defense Management: Improved Planning, Training, and Interagency Collaboration Could Strengthen DOD’s Efforts in Africa. GAO-10-794. Washington, D.C.: July 28, 2010. Defense Management: U.S. Southern Command Demonstrates Interagency Collaboration, but Its Haiti Disaster Response Revealed Challenges Conducting a Large Military Operation. GAO-10-801. Washington, D.C.: July 28, 2010. Defense Planning: DOD Needs to Review the Costs and Benefits of Basing Alternatives for Army Forces in Europe. GAO-10-745R. Washington, D.C.: September 13, 2010. National Security: Interagency Collaboration Practices and Challenges at DOD’s Southern and Africa Commands. GAO-10-962T. Washington, D.C.: July 28, 2010. Defense Infrastructure: Planning Challenges Could Increase Risks for DOD in Providing Utility Services When Needed to Support the Military Buildup on Guam. GAO-09-653. Washington, D.C.: June 30, 2009. Force Structure: Actions Needed to Improve DOD’s Ability to Manage, Assess, and Report on Global Defense Posture Initiatives. GAO-09-706R. Washington, D.C.: July 2, 2009. Military Operations: Actions Needed to Improve DOD’s Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007. Defense Management: Comprehensive Strategy and Annual Reporting Are Needed to Measure Progress and Costs of DOD’s Global Posture Restructuring. GAO-06-852. 
Washington, D.C.: September 13, 2006. Defense Infrastructure: Guam Needs Timely Information from DOD to Meet Challenges in Planning and Financing Off-Base Projects and Programs to Support a Larger Military Presence. GAO-10-90R. Washington, D.C.: November 13, 2009. Defense Infrastructure: DOD Needs to Provide Updated Labor Requirements to Help Guam Adequately Develop Its Labor Force for the Military Buildup. GAO-10-72. Washington, D.C.: October 14, 2009. Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009. Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009. Defense Infrastructure: Opportunity to Improve the Timeliness of Future Overseas Planning Reports and Factors Affecting the Master Planning Effort for the Military Buildup on Guam. GAO-08-1005. Washington, D.C.: September 17, 2008. Force Structure: Preliminary Observations on the Progress and Challenges Associated with Establishing the U.S. Africa Command. GAO-08-947T. Washington, D.C.: July 15, 2008. Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008. Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007. In 2004, the Department of Defense (DOD) announced sweeping changes to restructure U.S. military presence overseas and reduce military posture in Europe. 
In August 2010, the Secretary of Defense called for a review of DOD operations and activities to identify opportunities to decrease costs in order to free funds to support other DOD priorities. The Senate Appropriations Subcommittee on Military Construction and Veterans Affairs asked GAO to determine the extent to which the European Command (EUCOM) (1) estimates and reports the total cost of DOD's installations in Europe and (2) has defined methods for evaluating posture alternatives and including the views of interagency stakeholders in its posture planning process. To address these objectives, GAO assessed DOD plans and guidance, reviewed planning efforts in EUCOM, and collected obligations data from the military services for the military construction, family housing, and operation and maintenance appropriations. DOD posture planning guidance does not require EUCOM to include comprehensive cost data in its theater posture plan and, as a result, DOD lacks critical information that could be used by decision makers as they deliberate posture requirements. DOD guidance requires that theater posture plans provide specific information on, and estimate the military construction costs for, installations in a combatant commander's area of responsibility. However, this guidance does not require EUCOM to report the total cost to operate and maintain installations in Europe. GAO analysis shows that of the approximately $17.2 billion obligated by the services to support installations in Europe from 2006 through 2009, approximately $13 billion (78 percent) was for operation and maintenance costs. Several factors--such as the possibility of keeping four Army brigades in Europe instead of two--could affect future costs. DOD is drafting guidance to require more comprehensive cost estimates for posture initiatives; however, these revisions will not require commanders to report costs, unrelated to posture initiatives, for DOD installations. 
GAO's prior work has demonstrated that comprehensive cost information is critical to support decisions on funding and affordability. Until DOD requires the combatant commands to compile and report comprehensive cost data in their posture plans, DOD and Congress will be limited in their abilities to make fully informed decisions regarding DOD's posture in Europe. EUCOM has developed an approach to compile posture requirements, but it does not have clearly defined methods for evaluating posture alternatives or routinely incorporating the views of interagency stakeholders. EUCOM has taken several steps to assign responsibilities for developing its posture plan and established an Executive Council to deliberate posture issues and work with the service component commands, but the process of developing a theater posture plan is relatively new and is not yet clearly defined and codified in command guidance. While EUCOM's steps to date have improved its ability to communicate with stakeholders and resolve conflicting views on posture issues, this process has not yet been codified in command guidance. Furthermore, the process does not provide for the analysis of costs and benefits, because the combatant commander has not been required to include such analysis in developing the theater posture plan. In addition, the Interagency Partnering Directorate--which was established by the EUCOM commander to improve interagency coordination within the command--has been included in the Executive Council, but EUCOM has not defined how interagency representatives can regularly participate in ongoing posture planning activities. As a result of these weaknesses in EUCOM's posture planning approach, the command is limited in its ability to consider and evaluate the cost of posture in conjunction with the strategic benefits it provides, and it may not be fully leveraging interagency perspectives as it defines future posture requirements. 
GAO recommends that DOD revise posture planning guidance to require comprehensive estimates of posture costs and provide for consistent analysis of posture alternatives, and that EUCOM clarify its posture planning process and methods to regularly obtain interagency perspectives. DOD agreed with GAO's recommendations and identified corrective actions, but additional steps are needed to fully address the recommendations.
The Small Business Jobs Act of 2010 (the act) aims to address the ongoing effects of the 2007-2009 financial crisis on small businesses and stimulate job growth by establishing the SSBCI program, among other things. SSBCI is designed to strengthen state programs that support private financing to small businesses and small manufacturers that, according to Treasury, are not getting the loans or investments they need to expand and to create jobs. The act did not require a specific number of jobs to be created or retained as a result of SSBCI funds. The act appropriated $1.5 billion to be used by Treasury to provide direct support to states for use in programs designed to increase access to credit for small businesses. Using a formula contained in the act, Treasury calculated the amount of SSBCI funding for which each of the 50 states, as well as the District of Columbia, the Commonwealth of Puerto Rico, the Commonwealth of the Northern Mariana Islands, Guam, American Samoa, and the United States Virgin Islands were eligible to apply. This formula takes into account a state’s job losses in proportion to the aggregate job losses of all states. (See app. III for more information on available funding by location). In addition to states, the act granted permission to municipalities to apply directly for funding under SSBCI in the event that a state either failed to file a Notice of Intent to Apply for its allocation of program funds by November 26, 2010, or, after filing a Notice of Intent, failed to submit an application to Treasury by June 27, 2011. Treasury officials stated that municipalities granted permission to submit an application for program funds were generally subject to the same approval criteria and program requirements as states. Municipalities were eligible to apply for up to the total amount of their state’s SSBCI allocation, but the final approved amounts were to be apportioned based on their pro rata share by population of all applicants. 
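To make the pro rata idea described above concrete, here is a minimal Python sketch of an allocation driven by each state's share of aggregate job losses. It is a simplification: the act's actual formula contains additional provisions not modeled here, and the state names and job-loss counts below are hypothetical.

```python
# Simplified sketch of the job-loss-based allocation described above.
# The act's actual formula includes provisions not modeled here, and the
# job-loss figures are hypothetical.
APPROPRIATION = 1_500_000_000  # $1.5 billion appropriated under the act

def allocate(job_losses):
    """Split the appropriation in proportion to each state's job losses."""
    total = sum(job_losses.values())
    return {state: APPROPRIATION * losses / total
            for state, losses in job_losses.items()}

hypothetical = {"State A": 120_000, "State B": 60_000, "State C": 20_000}
for state, amount in allocate(hypothetical).items():
    print(f"{state}: ${amount:,.0f}")
```

A pro rata scheme like this guarantees that the individual allocations sum to the full appropriation, which is why the act could commit the entire $1.5 billion up front without knowing each state's loss figures in advance.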
Figure 1 provides a timeline of major SSBCI milestones. The act allowed Treasury to provide SSBCI funding for two state program categories: capital access programs (CAP) and other credit support programs (OCSP). A CAP is a loan portfolio insurance program wherein the borrower and lender, such as a small business owner and a bank, contribute to a reserve fund held by the lender. Under the act, approved CAPs are eligible to receive federal contributions to the reserve funds held by each participating financial institution in an amount equal to the total amount of the insurance premiums paid by the borrower and the lender on a loan-by-loan basis. Amounts in the lender’s reserve fund are then available to cover any losses incurred in its portfolio of CAP loans. For an SSBCI loan to be eligible for enrollment in a state’s approved CAP, the borrower must have 500 or fewer employees and the loan amount cannot exceed $5 million. In addition, the following types of OCSPs are eligible to receive SSBCI funds under the act: Collateral support programs: These programs supply cash collateral accounts to lenders to enhance the collateral coverage of borrowers. The accounts will cover all or a portion of the collateral shortfall identified by a lending institution. These programs can be designed to target certain regions or industries, such as equipment lending, in which a lender may be willing to fund at 80 percent loan-to-value, but a borrower may not be able to bridge the difference in cash at closing. Loan participation programs: These programs enable small businesses to obtain medium- to long-term financing, usually in the form of term loans, to help them expand their businesses. 
States may structure a loan participation program in two ways: (1) purchase transactions, also known as purchase participation, in which the state purchases a portion of a loan originated by a lender and (2) companion loans, also known as co-lending participation or parallel loans, in which a lender originates one loan and the state originates a second (usually subordinate) loan to the same borrower. This program enables the state to act as a lender, in partnership with a financial institution, to provide small business loans at attractive terms. Direct loan programs: Although Treasury does not consider these programs to be a separate SSBCI program type, it acknowledges that some states may identify programs that they plan to support with SSBCI funds as direct loan programs. The programs that some states label as direct loan programs are viewed by Treasury as co-lending programs categorized as loan participation programs, which have lending structures that are allowable under the statute. Loan guarantee programs: These programs enable small businesses to obtain term loans or lines of credit to help them grow and expand their businesses by providing a lender with the necessary security, in the form of a partial guarantee, for them to approve a loan or line of credit. In most cases, a state sets aside funds in a dedicated reserve or account to guarantee a specified percentage of each approved loan. Venture capital programs: These programs provide investment capital to create and grow start-ups and early-stage businesses, often in one of two forms: (1) a state-run venture capital fund (which may include other private investors) that invests directly in businesses or (2) a fund of funds, which is a fund that invests in other venture capital funds that in turn invest in individual businesses. Many factors, particularly resources and available talent, inform a state’s decision on which form to choose. 
For example, a state may choose to invest in a large venture fund that agrees to reinvest in that state an amount equal to that invested by the state, as opposed to trying to attract that same talent to a smaller fund capitalized with state money. Qualified loan or swap funding facilities: States may enter into qualifying loan or swap funding transactions under which SSBCI funds are pledged as collateral for private loans or credit lines. The private financing proceeds must, however, be used exclusively for the reserve or other accounts that back the credit support obligations of a borrowing CAP or OCSP. Presumably, fees paid by borrowers and lenders will provide a return to the providers of private capital. Other OCSPs: States were also able to submit an application to Treasury outlining their plans to support OCSPs that, though not able to be categorized in any of the above OCSP types, feature combinations of aspects of these eligible types. OCSPs approved to receive SSBCI funds are required to target borrowers with an average size of 500 or fewer employees and to target support towards loans with average principal amounts of $5 million or less. In addition, these programs cannot lend to borrowers with more than 750 employees or make any loans in excess of $20 million. In applying for funding, applicants had to demonstrate that their CAPs and OCSPs could satisfy SSBCI criteria. For example, applicant states had to demonstrate that all legal actions had been taken at the state level to accept SSBCI funds and implement the state programs. States were also required to demonstrate that the state possessed the operational capacity, skills, and financial and management capacity to meet the objectives set forth in the act. 
In addition, each applicant was required to demonstrate a “reasonable expectation” that its participating programs, taken together, would generate an amount of private financing and investment at least 10 times its SSBCI funding (that is, a leverage ratio of 10:1) by the program’s end in December 2016. Furthermore, each application had to include a report detailing how the state would use its SSBCI allocation to provide access to capital for small businesses in low- and moderate-income, minority, and other underserved communities, including women- and minority-owned small businesses. The act requires that each state receive its SSBCI funds in three disbursements of approximately one-third of its approved allocation. Prior to receipt of the second and third disbursements, a state must certify that it has expended, transferred, or obligated 80 percent or more of the previous disbursement to or for the account of one or more approved state programs. Treasury may terminate any portion of a state’s allocation that Treasury has not yet disbursed within 2 years of the date on which its SSBCI Allocation Agreement was signed. Treasury may also terminate, reduce, or withhold a state’s allocation at any time during the term of the Allocation Agreement upon an event of default under the agreement. Following the execution of the Allocation Agreement, states are required to submit quarterly and annual reports on their use of SSBCI funds. All SSBCI Allocation Agreements, the primary tool signed by Treasury and each participating state, which outline how recipients are to comply with program requirements, will expire on March 31, 2017. The program’s reporting requirements are detailed in section 4.8 of the SSBCI allocation agreement. The obligations of participating states and territories to perform and report on progress will expire as outlined in the terms of the agreement. Nearly all of the states eligible for SSBCI funds submitted applications to Treasury. 
Fifty-four of the 56 states and territories that were eligible to apply for program funds submitted an application prior to the June 27, 2011, deadline, although one state later withdrew its application. In total, states requested more than $1.4 billion in SSBCI funds—95 percent of the program’s appropriation—and only one applied for less than its maximum allocation. Following the application deadline for states, Treasury received five additional applications from municipalities in three states—Alaska, North Dakota, and Wyoming—by the September 27, 2011, deadline requesting a total of $39.5 million in program funds. Figure 2 illustrates the distribution of SSBCI funds applied for by states and territories. Participating states indicated that they are planning to support various new, existing, and dormant (that is, previously suspended) lending programs with their respective SSBCI allocations. According to our survey results, states are planning to support 153 different lending programs, 69 of which are new programs that were created to be supported by SSBCI funds (see fig. 3). Forty-one states indicated they are planning to support more than one program with their allocation. For example, Alabama plans to support a CAP, four loan participation programs, and a loan guarantee program, and New Jersey plans to support a loan participation program, four loan guarantee programs, five direct loan programs, and a venture capital program. According to our survey results, states are planning to support CAPs and all types of eligible OCSPs except loan and swap funding facilities (see fig. 4). Venture capital programs are to receive the largest amount of SSBCI funds of any program type. According to Treasury officials, states submitted their respective applications with plans for developing programs in response to unique gaps in local markets or the specific expertise of their staff. Consequently, there is variation in program design across states. 
For example, Treasury officials stated that Michigan plans to use its funds to support a collateral support program because of difficulties that manufacturing companies in the state were experiencing in obtaining credit. Specifically, Treasury officials noted that as these manufacturers’ real estate and equipment declined in value, they were facing difficulties in obtaining credit due to collateral shortfalls (see app. IV for more information on planned uses of funds by location). States indicated that they expect SSBCI funds to result in a total of $18.7 billion in new private financing and investment throughout the life of the program. In responding to our survey, officials from 39 of the states that applied for SSBCI funds indicated that they expect to achieve a private leverage ratio between 10:1 and 15:1, and 14 projected a ratio of 15:1 or greater. However, each participating state’s generation of an amount of private financing and investment at least 10 times its SSBCI allocation by December 2016 is not a requirement, and some states indicated that they believe reaching a 10:1 private leverage ratio could prove challenging. For example, officials from one state expressed some concern that the state’s final leverage ratio may ultimately fall short of the estimate included in its approved application because the state was creating a new program and, therefore, did not have prior experience operating a similar program. Treasury officials noted that a state’s mix of programs, as well as the design of each individual program, drives the leverage estimates. For example, Treasury officials stated that private leverage ratios for CAPs tend to be the highest among program types and are evident immediately because the program design is such that the SSBCI subsidy per loan is quite small and is not dependent on subsequent private financing. 
However, the officials noted that OCSPs tend to have lower leverage ratios initially but may see those grow in later years as program funds are recycled for additional lending over time. With the enactment of the Small Business Jobs Act of 2010 on September 27, 2010, Treasury was tasked with quickly starting up an SSBCI program office and developing processes and guidance to implement this new program. After accepting Notices of Intent to Apply from states and territories by the end of November 2010, Treasury issued an initial set of policy guidelines and application materials via its website on December 21, 2010. According to Treasury officials, Treasury received a few applications shortly thereafter and was able to review and approve them and to obtain signed Allocation Agreements with and distribute first installments of funds to two states in January 2011. In response to feedback from states, discussions with other federal agencies, such as the Small Business Administration, and current trends in the small business banking arena, Treasury determined that it needed to revise its guidelines and application paperwork to better articulate what documentation was required for both the application and review processes. As a result, Treasury issued revised guidance materials and Allocation Agreements for applicants in April 2011 as well as a reviewers’ manual for its review staff in May 2011. According to our survey of SSBCI applicants, five states submitted the final version of their application to Treasury before these documents were finalized. Treasury officials told us that although they took steps to help ensure consistent treatment of applicants, Treasury did not revisit previously approved applications once review procedures were finalized. Treasury officials said they were confident that no additional review was required, as those early applications were from states with well-established programs. 
However, as a result of the revisions to the Allocation Agreement made in April 2011, Treasury asked the two states that had signed the previous versions to sign an amended Allocation Agreement that incorporated the new terms. Some states reported that they delayed submitting their applications until Treasury’s final application guidance was issued. According to our survey results, 37 states did not submit their final applications for SSBCI funds until June 2011, the month that applications were due. Despite the delay in providing application guidance, applicants generally viewed Treasury officials as helpful throughout the application process—providing answers to most questions immediately and determining answers as soon as possible when not readily available. Treasury officials stated that they also hosted multiple webinars and conference calls to field questions about the application process that were highly attended by states and territories. In our review of the eight applications reviewed and approved before June 30, 2011, we found that Treasury considered each aspect of the application. Although only one of the applications we reviewed was processed under the revised application and review guidelines, we found that each application was subject to five stages of review: an initial review, a subsequent review by a quality assurance reviewer, review by the application review committee, a legal review, and final approval by the designated Treasury official. Our reviews of the applications and the experiences of the states suggest that applications were scrutinized in terms of their completeness as well as the eligibility of the programs for which states intended to use SSBCI funds. For example, Treasury reviewers noted that in one state’s application, the state proposed several modifications to its existing CAP, thereby bringing it under compliance with SSBCI requirements. Similarly, SSBCI applicants reported that Treasury scrutinized their applications. 
According to our survey results, 50 of the 54 applicants reported they were required to resubmit at least parts of their applications for further review after their original submissions. For example, one state noted that Treasury wanted significant changes in its application, mainly in the areas of internal controls, mix of programs, and contractor oversight. Another state noted that Treasury determined that the state failed to specify that it was to match the borrower and lender premium between 2 percent and 3.5 percent; Treasury officials asked the state to revise its application to reflect this information and submit an amended application. As required under the act, Treasury is distributing SSBCI funds to recipients in three installments. As of October 31, 2011, Treasury had provided first installments to 46 states and territories, totaling about $424 million. However, Treasury did not begin processing state requests for their second installment of funds until November 2011. According to Treasury officials, Treasury had not previously acted on these requests because it wanted to ensure that proper procedures were in place to adequately substantiate all certifications made as part of each request. Specifically, Treasury had to resolve how to determine whether 80 percent of a state’s initial disbursement of funds has been expended, transferred, or obligated as required under the act. Treasury finalized its disbursement procedures for second and third installments of SSBCI funds at the beginning of November 2011. According to Treasury officials, as of that date, no state had yet expended 80 percent of its initial disbursement to support loans or investment to small businesses. While Treasury was working to finalize these procedures, states were potentially delayed in receiving their remaining SSBCI funding. 
For example, officials from one state we contacted told us they were ready for their second installment after their first installment was transferred to the accounts of their designated SSBCI lending programs, but they were told by Treasury officials that they would have to wait until the disbursement procedures were finalized. Consequently, the officials told us their state faced additional interest expenses as a result of the delay. Treasury is implementing a multistep plan to monitor recipient compliance with SSBCI program requirements. These steps include (1) collecting and reviewing quarterly and annual reports, as well as quarterly use-of-funds certifications, from recipients, (2) evaluating the accuracy of recipient-level reporting on an annual basis by sampling transaction-level data, (3) monitoring recipient requests for second and third installments of SSBCI funds, and (4) contacting recipients on a quarterly basis to inquire about their adherence to the plans outlined in their respective SSBCI applications, as well as to monitoring requirements. Treasury has developed a secure, online system for states to report on those data fields included in the Allocation Agreements signed by states, including (1) total amount of principal loaned and, of that amount, the portion from nonprivate sources; (2) estimated number of jobs created or retained as a result of the loan; and (3) amount of additional private financing occurring after the loan closing. States are to provide these data to Treasury on an annual basis beginning in March 2012. Treasury officials told us they plan to sample states’ transaction-level data to help ensure the accuracy of state reporting. Specifically, the SSBCI compliance manager is to take samples of transaction-level data from all recipients in order to determine whether states are entering these data accurately, including verifying that transactions listed match the underlying loan or investment documents.
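The report describes Treasury's transaction-level checks only at a high level. As a purely hypothetical illustration (not Treasury's actual system, and with invented field names), one such automated check, testing that a recorded CAP premium falls within the 2 to 3.5 percent range cited earlier, could be sketched as:

```python
# Hypothetical sketch: flag transactions whose recorded CAP premium falls
# outside the 2-3.5 percent range. The data layout is invented for
# illustration; Treasury's actual reporting system is not described here.
def flag_noncompliant(transactions, low=2.0, high=3.5):
    """Return the transactions whose premium_pct lies outside [low, high]."""
    return [t for t in transactions if not low <= t["premium_pct"] <= high]

sample = [
    {"loan_id": "A1", "premium_pct": 2.5},   # within range
    {"loan_id": "B2", "premium_pct": 4.0},   # out of range; flagged
]
print([t["loan_id"] for t in flag_noncompliant(sample)])  # ['B2']
```

A real system would layer many such rules, one per program requirement, over each recipient's submitted transaction data.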
Treasury officials noted that the system is to automatically flag any loans for which the data entered do not comply with program requirements. Treasury officials told us they have also assigned three relationship managers to serve as the primary Treasury contacts for the SSBCI program. These managers, who have each been assigned 15 to 20 recipients, are to hold quarterly phone conversations with recipients. During these calls, the managers are to ask a series of generic questions, as well as recipient-specific questions regarding plans the states described in their applications, such as hiring staff and monitoring the use of program funds. The Treasury Inspector General recently made recommendations to further enhance Treasury’s oversight of SSBCI recipients. In August 2011, the Inspector General issued a report describing the results of its review of SSBCI policy guidance and other key program documents, including allocation agreements. The report made nine recommendations to improve Treasury’s compliance and oversight framework, including that Treasury’s guidance should clearly define the oversight obligations of recipients and specify minimum standards for determining whether recipients have fulfilled their oversight responsibilities. Treasury concurred with eight of the recommendations and has begun to take action to address them. Treasury disagreed with the Inspector General’s recommendation to make additional provisions for states to certify their allocation agreements, stating that states certify that they are implementing their programs in compliance with SSBCI procedures as part of their quarterly reporting to Treasury. Treasury officials told us that they have not yet established performance measures for the SSBCI program. Although Treasury plans to rely primarily on the department’s overall performance measures in evaluating the SSBCI program, officials noted they are considering several draft performance measures to assess the efficiency of the program.
Treasury officials described to us some of the potential measures they are considering, but we are not including them in this report because they have not yet been finalized. Treasury officials told us that they have not finalized the program’s performance measures because they have been focused on starting up the program quickly to meet statutorily required deadlines. Furthermore, officials noted that because SSBCI is a multilayered program that is implemented at the state level and dependent upon private sector entities, Treasury’s ability to influence program outcomes will be limited. Therefore, Treasury officials have been trying to develop measures that focus on the aspects of the program under Treasury’s control. According to Treasury officials, they do not have a time frame for fully developing and finalizing SSBCI-specific performance measures. The potential performance measures described by Treasury do not currently include measures related to the number of jobs created or retained as a result of the SSBCI program. As required in their allocation agreements with Treasury, states are to report information on estimated jobs resulting from SSBCI programs on a per-loan or per-investment basis. According to Treasury officials, gathering this information from the states serves two purposes: (1) it allows Treasury to track the progress of the states against the anticipated benefits articulated for their programs in their SSBCI applications and (2) it provides Treasury with a potential data point that may be useful when measuring overall program performance over time. However, Treasury’s ability to use this information moving forward could be limited, as the jobs data will be based on estimates rather than actual job counts.
In particular, as part of the SSBCI loan and investment application process, borrowers and investors are required to provide in their application paperwork estimates of the number of jobs to be created and retained as a result of participating in SSBCI programs. States then provide these estimates in their annual reports to Treasury. However, the states are not required to validate these jobs estimates, and they are not required to follow up with borrowers and investors to determine whether the actual number of jobs they were able to create or retain matched their original estimates. According to one lending official we spoke with, validating these estimates would be difficult, and lenders could be discouraged from participating in the SSBCI program if they were required to track actual jobs created and retained. Concerned about the burden that reporting on actual jobs created and retained would place on the small businesses receiving SSBCI funds, Treasury officials told us that they elected instead to capture estimated jobs data at the closing of the loan or investment. Treasury officials noted they are currently consulting with officials from the Small Business Administration to learn what methods that agency uses to measure jobs with estimated data. The importance of performance measures for gauging the progress of programs and projects is well recognized. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers crucial information on which to base their organizational and management decisions. Leading organizations recognize that performance measures can create powerful incentives to influence organizational and individual behavior. In addition, the Government Performance and Results Act of 1993 (GPRA) incorporates performance measurement as one of its most important features.
Under GPRA, executive branch agencies are required to develop annual performance plans that use performance measurement to reinforce the connection between the long-term strategic goals outlined in their strategic plans and the day-to-day activities of their managers and staff. The Office of Management and Budget (OMB) has also directed agencies to define and select meaningful outcome-based performance measures that indicate the intended result of carrying out a program or activity. Additionally, we have previously reported that aligning performance metrics with goals can help to measure progress toward those goals, emphasizing the quality of the services an agency provides or the resulting benefits to users. We have also previously identified criteria to evaluate an agency’s performance measures. While GPRA focuses on the agency level, performance measures are important management tools for all levels of an agency—such as the bureau, program, project, or activity level—and these criteria are applicable at those levels as well. Among other criteria, we have identified nine key attributes of successful performance measures:

(1) Linkage. Measure is aligned with division- and agency-wide goals and mission and clearly communicated throughout the organization.
(2) Clarity. Measure is clearly stated, and the name and definition are consistent with the methodology used to calculate it.
(3) Measurable target. Measure has a numerical goal.
(4) Objectivity. Measure is reasonably free from significant bias or manipulation.
(5) Reliability. Measure produces the same result under similar conditions.
(6) Core program activities. Measures cover the activities that an entity is expected to perform to support the intent of the program.
(7) Limited overlap. Measure should provide new information beyond that provided by other measures.
(8) Balance. Balance exists when a suite of measures ensures that an organization’s various priorities are covered.
(9) Governmentwide priorities. Each measure should cover a priority such as quality, timeliness, and cost of service.

Given the preliminary nature of Treasury’s potential performance measures, assessing whether the measures will reflect the attributes of successful performance measures would be premature. Nevertheless, considering these attributes as it works to finalize SSBCI-specific performance measures could help Treasury to develop robust measures. Until such measures are developed and implemented, Treasury will not be able to determine whether the program is achieving its goals. In response to SSBCI’s short time frame, Treasury was able to design, implement, and execute an application process for the program in a matter of months. Appropriately, Treasury’s early efforts were focused on establishing the application process and the process for disbursing initial installments of funds to recipients as quickly as possible. Treasury is still in the process of developing performance measures for the SSBCI program. Measuring performance allows organizations to track progress toward their goals and gives managers crucial information on which to base decisions. At the program level, agencies can create a set of performance measures that addresses important dimensions of program performance and balances competing priorities. Performance measures that successfully address important and varied aspects of program performance are key elements of an orientation toward results. Effective performance measures can provide a balanced perspective on the intended performance of a program’s multiple priorities. While Treasury is considering potential draft performance measures, it has not fully developed or finalized a set of measures for the SSBCI program.
Until such measures are developed and implemented, Treasury will not be in a position to determine whether the SSBCI program is effective in achieving its goals. We are making one recommendation to Treasury to improve its implementation and oversight of the SSBCI program as follows: To help ensure that the performance measures for the SSBCI program are as robust and meaningful as possible, we recommend that the Secretary of the Treasury direct the SSBCI Program Manager to consider key attributes of successful performance measures as the program’s measures are developed and finalized. We provided a draft of this report to Treasury for review and comment. Treasury provided written comments that we have reprinted in appendix V. Treasury also provided technical comments, which we have incorporated, as appropriate. In its written comments, Treasury agreed with our recommendation. Treasury noted that it will consider the key attributes of successful performance measures as it works to finalize measures for the SSBCI program. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at clowersa@gao.gov or (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To determine which states applied for and received State Small Business Credit Initiative (SSBCI) funds and the planned uses of the funds, we developed a Web-based questionnaire to collect information from the 54 states and territories that filed a Notice of Intent to Apply for SSBCI funds with the Department of the Treasury (Treasury) by the November 26, 2010, deadline.
The questionnaire included questions on the timing of applications for SSBCI funds, the receipt of funds to date, the intended uses of funds, and the potential impacts of program funds. See appendix II for a copy of the questionnaire. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with officials in three states, both in person and over the telephone. To help ensure that we obtained a variety of perspectives on our questionnaire, we selected officials from states planning to support various types of programs with SSBCI funds. Based on feedback from these pretests, we revised the questionnaire in order to improve response quality. For instance, in response to one state official’s comment that it would be difficult for respondents to answer with confidence how many capital access programs (CAP) and other credit support programs (OCSP) have recently been in operation across all municipalities in a state, we removed the historical and specific program budget questions and clarified our focus on the planned uses of SSBCI funds. We conducted two additional pretests with other state officials to ensure that the updated questionnaire was understandable. After completing the pretests, we administered the survey. On August 4, 2011, we began sending e-mail announcements of the questionnaire to the state and territory officials that had been identified as points of contact in a list provided to us by Treasury, notifying them that our online questionnaire would be activated in approximately 1 week. On August 15, 2011, we sent a second e-mail message to officials in which we informed them that the questionnaire was available online and provided them with unique passwords and usernames. 
On August 26, 2011, we began making telephone calls to officials and sent them follow-up e-mail messages, as necessary, to ensure their participation as well as to clarify and gain a contextual understanding of their responses. By September 14, 2011, we had received completed questionnaires from 54 states and territories, for a 100 percent response rate. We used standard descriptive statistics to analyze responses to the questionnaire. Because this was not a sample survey, there are no sampling errors. To minimize other types of errors, commonly referred to as nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. For instance, as previously mentioned, we pretested the questionnaire with state officials to minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same. In addition, during survey development, we reviewed the survey to ensure the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. We also received feedback from survey experts who we asked to review the survey instrument. To reduce nonresponse, another source of nonsampling error, we sent out e-mail reminder messages to encourage officials to complete the survey. In reviewing the survey data, we performed automated checks to identify inappropriate answers. We further reviewed the data for missing or ambiguous responses and followed up with respondents when necessary to clarify their responses. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes. 
In addition to the survey, we conducted interviews with Treasury officials, as well as selected state officials and financial institutions within those states, either via teleconference or site visits, to collect documentation that informed our understanding of states’ planned uses of SSBCI funds. We limited our selection of states to interview to those states whose SSBCI applications had been reviewed, approved, and for which the applicant had signed an allocation agreement by June 30, 2011: California, Hawaii, Indiana, Kansas, Maryland, Missouri, North Carolina, and Vermont. To evaluate Treasury’s implementation of the SSBCI program, we compared and contrasted Treasury’s SSBCI procedures and planned control activities with GAO’s internal control standards, including Standards for Internal Control in the Federal Government. We interviewed Treasury officials about the types of training the agency provided its staff to help ensure compliance with its procedures. We also used data obtained through our questionnaire to identify the dates on which states submitted their SSBCI applications and whether Treasury required resubmission. Additionally, we reviewed a nonprobability sample of SSBCI applications, consisting of the applications of all eight states that had signed an SSBCI allocation agreement by June 30, 2011, to determine whether all aspects of these states’ applications were considered. We assessed whether Treasury followed its procedures and appropriately documented its decisions by analyzing the documentation of the application reviews. Because we used a nongeneralizable sample to select the applications to review, our findings cannot be used to make inferences about SSBCI applications of states that signed allocation agreements after June 30, 2011. However, we determined that the sample would be useful in providing illustrative examples of the procedures and documentation practices applied by Treasury.
Furthermore, we conducted interviews with Treasury officials about the type of testing the agency plans to perform on its controls to ensure compliance with SSBCI procedures, lessons learned about the review process, how they addressed problems, and their plans to follow up with states to ensure that SSBCI funds are used for the intended purposes outlined in approved applications for program funds. To review Treasury’s efforts to measure whether the SSBCI program achieves its goals of increasing small business investment and creating jobs, we discussed with Treasury its proposed performance metrics for the SSBCI program. We also interviewed Treasury officials, as well as officials from the eight states that had signed an SSBCI allocation agreement with Treasury by June 30, 2011, to collect documentation that informed our understanding of SSBCI program performance and Treasury’s metrics. We conducted this performance audit from February 2011 to December 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 1 below contains the amounts of SSBCI funds that have been applied for, approved, and disbursed as of October 31, 2011. This information was provided by state and territory officials who responded to a GAO survey between August 15 and September 14, 2011, and by the U.S. Treasury on October 31, 2011. Table 2 below contains information on states and territories’ plans for the distribution of SSBCI funds among eligible program types, provided by officials between August 15 and September 14, 2011.
In addition to the individual named above, Paul Schmidt, Assistant Director; Pamela Davidson; Jill Lacey; Marc Molino; Patricia Moye; Deena Richart; Christine San; Jennifer Schwartz; and Chad Williams made key contributions to this report.

Congress enacted the Small Business Jobs Act of 2010 in September 2010 in response to concerns that small businesses have been unable to access capital that would allow them to create jobs. Among other things, the act aims to stimulate job growth by establishing the $1.5 billion State Small Business Credit Initiative (SSBCI) within the Department of the Treasury (Treasury) to strengthen state and territory (state) programs that support lending to small businesses and small manufacturers. Participating states are expected to leverage the SSBCI funds to generate an amount of private financing and investment at least 10 times the amount of their SSBCI funds (that is, a leverage ratio of 10:1). The act also requires GAO to audit SSBCI annually. Accordingly, this report examines (1) which states applied for SSBCI funds and the planned uses of those funds; (2) Treasury's implementation of SSBCI; and (3) Treasury's efforts to measure whether SSBCI achieves its goals. GAO surveyed state SSBCI applicants (for a 100 percent response rate), analyzed data from Treasury case files, and interviewed officials from Treasury and eight participating states. Fifty-four of the 56 eligible states and territories submitted applications requesting a total of about $1.4 billion in SSBCI funds. According to GAO's survey of SSBCI applicants, states plan to support 153 lending programs nationwide with SSBCI funds, 69 of which are new programs being created because of the SSBCI program. These lending programs include a variety of capital access programs and other credit support programs, with venture capital programs receiving the largest amount of funds among eligible program types.
SSBCI applicants anticipate that their SSBCI funds will allow them to leverage up to $18.7 billion in new private financing and investment. Some applicants, however, expressed concern that achieving a 10:1 leverage ratio of private financing and investment to program funds could ultimately prove challenging, especially for states creating new programs. Treasury's procedures for SSBCI have evolved throughout its implementation of the program. Treasury began approving applications for SSBCI funds in January 2011 in accordance with guidance it issued in December 2010. However, Treasury did not finalize its application guidance and review procedures until April and May 2011, respectively. Some states indicated they delayed submitting their applications until Treasury's guidance was finalized, with 37 states not submitting an application until June 2011—the deadline for applications. In addition, Treasury did not finalize its procedures for disbursing subsequent installments of funds to states until November 2011, citing potential different legal interpretations of the act's disbursement requirements as the cause for the delay. Treasury is implementing a plan to monitor states' compliance with program requirements, which will include sampling transaction-level data to evaluate the accuracy of the states' annual reports. The Treasury Inspector General made recommendations in August 2011 to improve the tools Treasury will use to monitor state compliance. Treasury has not yet established performance measures for the SSBCI program. Treasury officials noted they are considering several draft performance measures to assess the efficiency of the program. However, Treasury has not finalized its plans for measuring the SSBCI program's performance. GAO and others have recognized the importance of using performance measures to gauge the progress of programs. GAO has also identified key attributes of successful performance measures.
Given the preliminary nature of Treasury's potential performance measures, assessing whether the measures reflect the attributes of successful performance measures is premature. Nonetheless, considering these attributes as it works to finalize the SSBCI-specific performance measures could help Treasury to develop robust measures. Until such measures are developed and implemented, Treasury will not be able to determine whether the program is achieving its goals. GAO recommends that Treasury direct the SSBCI Program Manager to consider key attributes of successful performance measures when developing and finalizing SSBCI-specific performance measures. Treasury concurred with the report's recommendation. |
JPL is NASA’s only Federally Funded Research and Development Center (FFRDC) and is operated under contract by Caltech. JPL is NASA’s field installation for solar system exploration and is a major operating division of Caltech. Together, these overlapping roles contribute to unique JPL management and oversight challenges. FFRDCs are operated under agreements funded by sponsoring federal agencies to provide for research or development needs that cannot readily be met by the agencies or contractors. JPL work is primarily funded by NASA; however, other sponsors can fund JPL efforts under reimbursable arrangements with NASA. JPL’s total 1994 business base was just over $1 billion. JPL receives work projects directly from NASA program offices. It can also submit proposals to, or respond to non-competitive requests from, other work sponsors using up to 25 percent of the JPL direct workforce. Both the NASA-directed work and the non-NASA work must be determined to be appropriate for JPL to perform based on the scope of the sponsoring contract. Caltech has operated JPL for NASA since NASA became an agency in 1958 and conducted work at the same site for other federal entities as early as the 1930s. The current contract is in effect from September 20, 1993, to September 30, 1998. It provides a framework of procedures, regulations, and other guidance for funding specific tasks. Rather than signing separate contracts for individual work projects, funding for JPL is provided under “task orders” for specific work. Cost allowability is governed by the contract and by the Office of Management and Budget’s (OMB) Circular A-21, “Cost Principles for Educational Institutions.” In our first report to the Committee, we discussed JPL’s fixed fee, selected cost controls, scope of work, food and beverage charges, and tuition payments for dependents. 
Our second report discussed the management of NASA equipment by JPL, particularly loaning it to employees and controlling it at Caltech’s campus. Changes have been made to address the concerns raised in our July 1993 report. First, the fixed fee under NASA’s previous contract with Caltech was replaced with a fee structure that bases two-thirds of the fee award on NASA’s assessment of JPL’s performance. Also, new reporting and review procedures could provide control over selected costs comparable to that at commercial contractors. Similarly, although the scope of work was not substantively modified for NASA tasks, it was narrowed and oversight was increased for non-NASA work performed by JPL. Finally, the total number of deviations from the Federal Acquisition Regulation (FAR) in the contract was reduced. In addition, our concern regarding the tuition assistance benefit will be addressed as part of NASA’s recent request to DCAA for a review of JPL’s compensation package. The previous contract provided Caltech with a fee range of between $11.4 million and $15.4 million, based solely on the volume of work conducted at JPL. This arrangement was contrary to NASA’s goal of considering performance in awarding fee to contractors and counter to the agency’s policy of not paying fee or profit on contracts with universities. We recommended that NASA authorize a deviation from its policy against paying fee to educational institutions only if its purpose and amount were adequately justified and, if a fee was authorized, to base the amount on performance. For the new contract, NASA approved a policy deviation allowing fee payment to a university and created a new fee structure. Under the contract’s “Management Performance Incentive Plan,” Caltech is paid $6 million plus an additional performance-based amount of up to $12 million.
The incentive criteria for the performance-based fee are specified in the contract, with assigned weights of 65 for technical performance, 25 for institutional management, and 10 for outreach programs. Two evaluation boards and an award official will determine the fee amount, based on ratings by individuals familiar with JPL’s work for NASA and non-NASA sponsors. NASA’s award decision is not subject to the contract’s dispute clause, and no incentive fee is paid if performance is less than satisfactory. NASA awarded a total fee of $16.5 million for 1994. NASA may also indicate emphasis areas prior to each rating period. For 1994, no areas were emphasized due to extended contract negotiations. Eight areas have been identified for 1995—including cost containment, improved compliance with JPL policies, increased cultural and gender diversity in senior management, and effective social and educational outreach programs consistent with overall NASA and federal government initiatives in these areas. According to NASA officials, efforts pursued under any emphasis area must still fall within the contract’s scope of work. We noted in our July 1993 report that Caltech received a higher fee than any of the other large FFRDCs administered by educational institutions that receive fees. Based on past ratings, Caltech is unlikely to receive less fee under the new fee structure. For example, Caltech could be scored one point above a poor/unsatisfactory rating—61 out of 100—and still receive an incentive payment of $7.3 million on top of the $6 million fixed fee. This is more than the $13.1 million fee paid for the last year of the previous contract. Justifying and paying fee is an issue for all FFRDCs, not just JPL. NASA officials believe that JPL is the only FFRDC receiving a fee linked to performance and intend the $12 million performance-based fee as a strong incentive.
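As a rough illustration of this fee arithmetic, and assuming the $12 million incentive is prorated linearly with the 100-point score (an assumption consistent with the report's figure of about $7.3 million at a rating of 61, but not spelled out in the contract language quoted here), the total fee could be sketched as:

```python
# Illustrative sketch only: assumes linear proration of the incentive by
# the 100-point score and a satisfactory threshold of 61, both assumptions
# inferred from the report's example rather than stated contract terms.
FIXED_FEE = 6_000_000
MAX_INCENTIVE = 12_000_000

def total_fee(score, satisfactory_floor=61):
    """Fixed fee plus score-prorated incentive; no incentive below floor."""
    if score < satisfactory_floor:
        return FIXED_FEE
    return FIXED_FEE + MAX_INCENTIVE * score // 100

print(total_fee(61))  # 13320000, i.e., about $13.3 million
```

Under these assumptions, any satisfactory score yields at least $13.3 million, exceeding the $13.1 million fee paid in the last year of the previous contract, which is consistent with the report's observation that Caltech is unlikely to receive less fee under the new structure.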
If the incentive award fee concept is successful at JPL, performance-based fees could be considered for other FFRDCs that receive fees. Its success will depend largely on NASA applying a rigorous scoring system to help ensure a fair evaluation clearly reflecting performance. In our July 1993 report we noted that selected costs, called “burden” costs at JPL, were not being thoroughly reviewed by NASA. The current contract identifies DCAA as the responsible organization for reviewing JPL’s annual submission of such costs and includes new reporting requirements for them as proposed by Caltech. According to NASA, these new reporting requirements improve the visibility of such costs. DCAA also believes the current contract language and the new reporting requirements could improve NASA’s control of these costs. The key is the “auditability” of JPL’s cost submission and the supporting documentation. DCAA has asked for specific cost data similar to that it requests from commercial contractors. JPL officials intend to provide the requested data. The contract broadly defines JPL’s work as “Conducting (i) a program of supporting research and (ii) a program of advanced technical development, designed to make contributions to space science, space transportation, practical applications, technology and exploration.” The basic broad content and lack of specificity in the prior contract remain in the current contract for NASA work. However, there was a change in the scope of non-NASA work. The contract previously specified that tasks undertaken for non-NASA agencies at JPL would “focus on” efforts applying JPL-developed technologies. The new contract replaces the words “focus on” with “be confined to.” However, the contract guidelines for non-NASA work remain broad. Therefore, the NASA Management Office at JPL—which reviews and approves non-NASA task orders—becomes the key control for ensuring the unique contribution of JPL to the work.
Beginning last year, that office increased its oversight of the appropriateness of non-NASA task orders, particularly for those involving computer purchases. The Management Office has delayed approving tasks until further justifications have been provided and has asked JPL to notify potential non-NASA task sponsors early in the process of the need to document why JPL should do the work. NASA has reduced the number of contract deviations from standard clauses established in the FAR and NASA’s FAR supplements. The number of FAR and NASA FAR deviations was decreased from 22 in the old research and development contract to 15, and the total number of standard clauses incorporated in the contract has increased from 74 to 98. For example, the standard “Payment of Overtime Premiums” clause was restored. As a result, a request for overtime premiums must document factors associated with the request, the effects of denial, and why other options would not be appropriate. NASA’s current contract with Caltech contains two new deviations from prescribed cost allowability provisions. The FAR defers to OMB’s Circular A-21, “Cost Principles for Educational Institutions,” to prescribe which costs incurred by educational institutions may be recovered under government contracts and which may not. Under the Circular, costs incurred in an employee lawsuit brought under Section 2 of the Major Fraud Act of 1988, including amounts paid to the employee, are unallowable. The Circular also provides that, in general, fines and penalties resulting from violations of the law are unallowable costs. NASA’s contract with Caltech, however, provides that if Caltech litigates a third-party suit and is found to have violated federal law, Caltech’s legal and judgment costs will be allowed if Caltech can demonstrate that it had a reasonable expectation of prevailing on the merits. A-21 also limits payment for advertising and public relations costs.
The contract provides that this A-21 restriction does not apply to JPL disseminating public information on NASA programs or activities. For example, costs associated with JPL public events marking NASA accomplishments or the printing of program-related materials are expressly allowable. Similarly, the contract specifically allows costs for promoting technology transfer to the private sector—a NASA mandate—stating these costs are not a “cost of selling and marketing,” which A-21 does not allow. Dependents of JPL employees accepted at Caltech attend the university tuition-free, with the annual per-student tuition—$15,900 in fiscal year 1994—charged to NASA. In addition, approximately 150 senior JPL employees are eligible for tuition assistance of up to half Caltech’s tuition when their dependents attend other universities. JPL considers this Caltech employee benefit a key element for recruiting exceptional employees. In 1994, 39 dependents of 30 senior employees received assistance for attending other schools. As our July 1993 report stated, dependent tuition is an allowable cost under Circular A-21 if the benefit is granted according to university policy. NASA’s 1980 approval of tuition reimbursement for JPL dependents attending other than Caltech was conditioned on Caltech limiting cost increases. However, as figure 1 shows, JPL’s employee dependent tuition costs have continued to increase significantly in recent years. We recommended that NASA decide whether and to what extent it should continue paying dependent tuition support. The NASA Administrator responded that the tuition benefit is part of Caltech’s general compensation and benefit plan and that it would be reviewed as part of a comprehensive JPL compensation review that NASA would conduct during fiscal year 1994. No review was conducted that year, but in September 1994 NASA requested that DCAA perform a comprehensive review of JPL’s compensation system.
The request noted that various parts of the system had been reviewed by DCAA over the last 2 years and asked that those results be incorporated into the comprehensive review, together with additional areas of compensation that had not been audited. JPL’s dependent tuition assistance program was specifically targeted for review. NASA and JPL responded quickly to our concerns and recommendations to rectify internal control weaknesses in the management of NASA equipment. Policies and procedures for employee equipment loans and the tracking of equipment at Caltech have been improved. Also, changes in JPL’s policies on charging NASA for food and beverages have substantially reduced those costs. In our April 1994 report we recognized that employee home use of equipment can be valuable, but noted that the frequency, duration, and growth of equipment on loan called for review. We recommended that NASA look at its employee loan policy to limit the type of equipment and conditions for borrowing and that JPL’s policy be made consistent with NASA’s policy. Both NASA and JPL have revised their policies. JPL issued new guidance on June 9, 1994, that severely restricts off-site use of property. JPL also initiated a recall of equipment not meeting the new conditions. Under the new criteria, equipment loans, including overnight use of a portable computer, are not allowed without meeting a critical need test and obtaining the approval of a division manager. The number of equipment items on loan dropped 88 percent, from 4,035 items (valued at $7.6 million) in September 1993 to 451 items (valued at $760,000) by October 1994. NASA’s property manager at JPL believes there will be a reduction in the future procurement of new computer equipment, in part as a result of the returned equipment being available for use at JPL.
NASA’s loan policy, issued July 18, 1994, allows for mission-essential home loans of 30 days, or up to 180 days after signing a loan agreement in which the employee assumes responsibility for the equipment. Both loans can be renewed once and require approval by the property custodian, immediate supervisor, and the division director/chief. Loan renewal requests beyond 360 days need approval by the Center Director or Director of Operations for NASA headquarters. We also recommended that NASA require JPL to review and improve its property control system, and evaluate and revise its procedures for keeping track of inventory, including equipment located at Caltech. In response, JPL formed advisory groups to study and address property control issues, and established a deadline of December 31, 1994, for the groups’ recommendations to be implemented. NASA requested that JPL conduct a wall-to-wall property inventory, which is now underway. All equipment has been scanned, both at JPL and Caltech, and NASA identification tags have been placed on all JPL equipment at Caltech. NASA’s property manager noted that losses are much lower at this point in the inventory than they were when property was last inventoried in 1992. Then, 12,000 items were not located after initial scanning, compared to 3,593 this time. The reasons for the differences will not be known until the 1994 inventory is complete. Our final recommendation—that JPL identify and dispose of obsolete or excess equipment—will be addressed by one of the JPL advisory groups in coordination with the NASA Management Office. As part of NASA’s review of the JPL property system, NASA asked JPL to change its procedures to speed disposal of equipment purchased for reimbursable sponsors. We reported in July 1993 that food and beverages charged to the NASA contract had been growing rapidly and internal controls were weak.
We specifically questioned the allowability of “working meals” and recommended that they be identified in the new contract as unallowable costs. NASA agreed that the costs were unallowable but decided against specific contract language due to new JPL policies severely limiting food and beverage costs. According to the new JPL policies, working meals are not allowable contract charges. Restrictions were also placed on charges for cafeteria services, which totaled almost $145,000 for fiscal years 1991 and 1992. The new policy limits cafeteria charges to beverages, and only for meetings over 3 hours that include non-JPL employees. The new policies strictly limit chargeable meals and refreshments at JPL functions, and prohibit these charges for government employees. Food and beverage costs for the first 6 months under the new policies were $35,000. Almost three times that amount was charged to NASA for the 6 months prior to the policy change. NASA requested a DCAA audit of all JPL food and beverage costs for fiscal years 1991 and 1992. The resulting report questions almost $329,000 of the $406,650 in estimated costs for that period. In response, Caltech withdrew the questioned amount from NASA contract charges, stating that it did this so that the government would not be at a disadvantage while JPL evaluates the questioned costs. As of September 1994, none of the costs had been resubmitted to NASA. Subsequently, NASA requested an audit of food and beverage costs for fiscal years 1989, 1990, and 1993. This report is expected to be completed by January 1995. The flexibility in the JPL contract places increased importance on oversight by the NASA Management Office. Improved coordination of audit resources could complement that oversight. JPL’s multiple roles—NASA center, a division of a university, a contractor, and an FFRDC—subject it to oversight by a variety of audit organizations. Two of these maintain offices at JPL—the NASA Office of the Inspector General and DCAA.
Under authority of the Inspector General Act, NASA’s Inspector General is responsible for providing an effective audit program to review NASA activities. DCAA conducts contract audits of JPL and other NASA contractors, as requested by NASA. As shown in table 1, operational audit oversight is provided by these two audit entities, as well as Caltech’s internal audit organization, on an ongoing basis. Other audit groups are also engaged at JPL periodically or are peripherally involved with JPL through their Caltech affiliation. We perform periodic audits, usually in response to requests from congressional committees. Other organizations, such as the Small Business Administration and the Army Corps of Engineers, conduct special reviews. Further, a public accounting firm annually audits Caltech’s financial statement, and DCAA is the cognizant audit agency for Caltech’s campus activities. Recently, NASA has taken a more active role in coordinating audit efforts. Coordination between the Inspector General staff and DCAA had previously been limited. Over the last year, the NASA Management Office has increased its role in coordinating audit efforts by sponsoring meetings between the Inspector General and DCAA staffs to reduce duplication and by requesting that its needs be incorporated into their audit plans. That office also arranged for Caltech’s internal audit staff to participate in audit coordination meetings with the Inspector General and DCAA in October 1994. The scope of this review was limited to following up on those issues addressed in our July 1993 and April 1994 reports. To analyze how NASA handled contractual and oversight concerns, we compared its current contract with Caltech to the previous one and to NASA’s Request for Proposal for the current contract. We also reviewed the contract’s negotiation files, applicable FAR and NASA FAR supplement provisions, award fee training materials, and JPL’s policies on meals and equipment.
We collected and summarized cost information on dependent tuition from JPL’s financial accounting division and the JPL Director’s office. We also reviewed selected compensation reports from 1993 and 1994. We interviewed NASA Management Office officials and staff, NASA General Counsel personnel, and JPL officials responsible for meal accounting and equipment policies. We also held discussions with DCAA representatives at JPL and Caltech, Inspector General officials at JPL and NASA headquarters, and the Caltech Internal Audit Director. We conducted our work from April 1994 to October 1994, in accordance with generally accepted government auditing standards. As requested, we did not obtain agency comments on a draft of this report. However, we discussed the information in the report with both NASA and JPL officials and considered their comments in preparing it. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to other appropriate congressional committees, the NASA Administrator, and the Director of OMB. We will also provide copies to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report were Allan Roberts, Assistant Director; Frank Degnan, Assistant Director; and Monica Kelly, Evaluator-in-Charge. Donna M. Heivilin, Director, Defense Management and NASA Issues

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S.
General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Federal, state, and local government agencies have differing roles with regard to public health emergency preparedness and response. The federal government conducts a variety of activities, including developing interagency response plans, increasing state and local response capabilities, developing and deploying federal response teams, increasing the availability of medical treatments, participating in and sponsoring exercises, planning for victim aid, and providing support in times of disaster and during special events such as the Olympic Games. One of its main functions is to provide support for the primary responders at the state and local level, including emergency medical service personnel, public health officials, doctors, and nurses. This support is critical because the burden of response falls initially on state and local emergency response agencies. The President’s proposal transfers control over many of the programs that provide preparedness and response support for state and local governments to a new Department of Homeland Security. Among other changes, the proposed bill transfers HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness to the new department. Included in this transfer is the Office of Emergency Preparedness (OEP), which currently leads the National Disaster Medical System (NDMS), in conjunction with several other agencies, and the Metropolitan Medical Response System (MMRS). The Strategic National Stockpile, currently administered by the Centers for Disease Control and Prevention (CDC), would also be transferred, although the Secretary of Health and Human Services would still manage the stockpile and continue to determine its contents.
Under the President’s proposal, the new department would also be responsible for all current HHS public health emergency preparedness activities carried out to assist state and local governments or private organizations to plan, prepare for, prevent, identify, and respond to biological, chemical, radiological, and nuclear events and public health emergencies. Although not specifically named in the proposal, this would include CDC’s Bioterrorism Preparedness and Response program and the Health Resources and Services Administration’s (HRSA) Bioterrorism Hospital Preparedness Program. These programs provide grants to states and cities to develop plans and build capacity for communication, disease surveillance, epidemiology, hospital planning, laboratory analysis, and other basic public health functions. Except as directed by the President, the Secretary of Homeland Security would carry out these activities through HHS under agreements to be negotiated with the Secretary of HHS. Further, the Secretary of Homeland Security would be authorized to set the priorities for these preparedness and response activities. The consolidation of federal assets and resources in the President’s proposed legislation has the potential to improve coordination of public health preparedness and response activities at the federal, state, and local levels. Our past work has detailed a lack of coordination in the programs that house these activities, which are currently dispersed across numerous federal agencies. In addition, we have discussed the need for an institutionalized responsibility for homeland security in federal statute. The proposal provides the potential to consolidate programs, thereby reducing the number of points of contact with which state and local officials have to contend, but coordination would still be required with multiple agencies across departments.
Many of the agencies involved in these programs have differing perspectives and priorities, and the proposal does not sufficiently clarify the lines of authority of different parties in the event of an emergency, such as between the Federal Bureau of Investigation (FBI) and public health officials investigating a suspected bioterrorist incident. Let me provide you with more details. We have reported that many state and local officials have expressed concerns about the coordination of federal public health preparedness and response efforts. Officials from state public health agencies and state emergency management agencies have told us that federal programs for improving state and local preparedness are not carefully coordinated or well organized. For example, federal programs managed by the Federal Emergency Management Agency (FEMA), the Department of Justice (DOJ), OEP, and CDC all currently provide funds to assist state and local governments. Each program conditions the receipt of funds on the completion of a plan, but officials have told us that the preparation of multiple, generally overlapping plans can be an inefficient process. In addition, state and local officials told us that having so many federal entities involved in preparedness and response has led to confusion, making it difficult for them to identify available federal preparedness resources and effectively partner with the federal government. The proposed transfer of numerous federal response teams and assets to the new department would enhance efficiency and accountability for these activities. This would involve a number of separate federal programs for emergency preparedness and response, including FEMA; certain units of DOJ; and HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness, including OEP and its NDMS and MMRS programs, along with the Strategic National Stockpile.
In our previous work, we found that in spite of numerous efforts to improve coordination of the separate federal programs, problems remained, and we recommended consolidating the FEMA and DOJ programs to improve the coordination. The proposal places these programs under the control of one person, the Under Secretary for Emergency Preparedness and Response, who could potentially reduce overlap and improve coordination. This change would make one individual accountable for these programs and would provide a central source for federal assistance. The proposed transfer of MMRS, a collection of local response systems funded by HHS in metropolitan areas, has the potential to enhance its communication and coordination. Officials from one state told us that their state has MMRSs in multiple cities but there is no mechanism in place to allow communication and coordination among them. Although the proposed department has the potential to facilitate the coordination of this program, this example highlights the need for greater regional coordination, an issue on which the proposal is silent. Because the new department would not include all agencies having public health responsibilities related to homeland security, coordination across departments would still be required for some programs. For example, NDMS functions as a partnership among HHS, the Department of Defense (DOD), the Department of Veterans Affairs (VA), FEMA, state and local governments, and the private sector. However, as the DOD and VA programs are not included in the proposal, only some of these federal organizations would be brought under the umbrella of the Department of Homeland Security. Similarly, the Strategic National Stockpile currently involves multiple agencies. It is administered by CDC, which contracts with VA to purchase and store pharmaceutical and medical supplies that could be used in the event of a terrorist incident. 
Recently expanded and reorganized, the program will now include management of the nation’s inventory of smallpox vaccine. Under the President’s proposal, CDC’s responsibilities for the stockpile would be transferred to the new department, but VA and HHS involvement would be retained, including continuing review by experts of the contents of the stockpile to ensure that emerging threats, advanced technologies, and new countermeasures are adequately considered. Although the proposed department has the potential to improve emergency response functions, its success is contingent on several factors. In addition to facilitating coordination and maintaining key relationships with other departments, these include merging the perspectives of the various programs that would be integrated under the proposal, and clarifying the lines of authority of different parties in the event of an emergency. As an example, in the recent anthrax events, local officials complained about differing priorities between the FBI and the public health officials in handling suspicious specimens. According to the public health officials, FBI officials insisted on first informing FBI managers of any test results, which delayed getting test results to treating physicians. The public health officials viewed contacting physicians as the first priority in order to ensure that effective treatment could begin as quickly as possible. The President’s proposal to shift the responsibility for all programs assisting state and local agencies in public health emergency preparedness and response from HHS to the new department raises concern because of the dual-purpose nature of these activities. These programs include essential public health functions that, while important for homeland security, are critical to basic public health core capacities. Therefore, we are concerned about the transfer of control over the programs, including priority setting, that the proposal would give to the new department. 
We recognize the need for coordination of these activities with other homeland security functions, but the President’s proposal is not clear on how the public health and homeland security objectives would be balanced. Under the President’s proposal, responsibility for programs with dual homeland security and public health purposes would be transferred to the new department. These include such current HHS assistance programs as CDC’s Bioterrorism Preparedness and Response program and HRSA’s Bioterrorism Hospital Preparedness Program. Functions funded through these programs are central to investigations of naturally occurring infectious disease outbreaks and to regular public health communications, as well as to identifying and responding to a bioterrorist event. For example, CDC has used funds from these programs to help state and local health agencies build an electronic infrastructure for public health communications to improve the collection and transmission of information related to both bioterrorist incidents and other public health events. Just as with the West Nile virus outbreak in New York City, which initially was feared to be the result of bioterrorism, when an unusual case of disease occurs public health officials must investigate to determine whether it is naturally occurring or intentionally caused. Although the origin of the disease may not be clear at the outset, the same public health resources are needed to investigate, regardless of the source. States are planning to use funds from these assistance programs to build the dual-purpose public health infrastructure and core capacities that the recently enacted Public Health Security and Bioterrorism Preparedness and Response Act of 2002 stated are needed. States plan to expand laboratory capacity, enhance their ability to conduct infectious disease surveillance and epidemiological investigations, improve communication among public health agencies, and develop plans for communicating with the public. 
States also plan to use these funds to hire and train additional staff in many of these areas, including epidemiology. Our concern regarding these dual-purpose programs relates to the structure provided for in the President’s proposal. The Secretary of Homeland Security would be given control over programs to be carried out by another department. The proposal also authorizes the President to direct that these programs no longer be carried out in this manner, without addressing the circumstances under which such authority would be exercised. We are concerned that this approach may disrupt the synergy that exists in these dual-purpose programs. We are also concerned that the separation of control over the programs from their operations could lead to difficulty in balancing priorities. Although the HHS programs are important for homeland security, they are just as important to the day-to-day needs of public health agencies and hospitals, such as reporting on disease outbreaks and providing alerts to the medical community. The current proposal does not clearly provide a structure that ensures that both the goals of homeland security and public health will be met. Many aspects of the proposed consolidation of response activities are in line with our previous recommendations to consolidate programs, coordinate functions, and provide a statutory basis for leadership of homeland security. The transfer of the HHS medical response programs has the potential to reduce overlap among programs and facilitate response in times of disaster. However, we are concerned that the proposal does not provide the clear delineation of roles and responsibilities that we have stated is needed. We are also concerned about the broad control the proposal grants to the new department for public health preparedness programs.
Although there is a need to coordinate these activities with the other homeland security preparedness and response programs that would be brought into the new department, there is also a need to maintain the priorities for basic public health capacities that are currently funded through these dual-purpose programs. We do not believe that the President’s proposal adequately addresses how to accomplish both objectives. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information about this testimony, please contact me at (202) 512-7118. Marcia Crosse, Greg Ferrante, Deborah Miller, and Roseanne Price also made key contributions to this statement.

Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002.
Homeland Security: Responsibility and Accountability for Achieving National Goals. GAO-02-627T. Washington, D.C.: April 11, 2002.
Homeland Security: Progress Made; More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002.
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001.
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
National Preparedness: Technologies to Secure Federal Buildings. GAO-02-687T. Washington, D.C.: April 25, 2002.
National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002.
Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002.
Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002.
Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002.
Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Chemical and Biological Defense: DOD Should Clarify Expectations for Medical Readiness. GAO-02-219T. Washington, D.C.: November 7, 2001. Anthrax Vaccine: Changes to the Manufacturing Process. GAO-02-181T. Washington, D.C.: October 23, 2001. Chemical and Biological Defense: DOD Needs to Clarify Expectations for Medical Readiness. GAO-02-38. Washington, D.C.: October 19, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-02-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-666T. Washington, D.C.: May 1, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, DC: April 24, 2001. Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-463. Washington, D.C.: March 30, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. 
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01- 14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/T-HEHS/AIMD-00-59. Washington, D.C.: March 8, 2000. Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/HEHS/AIMD-00-36. Washington, D.C.: October 29, 1999. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999 Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/T-NSIAD-99-184. Washington, D.C.: June 23, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Observations on Crosscutting Issues. GAO/T- NSIAD-98-164. Washington, D.C.: April 23, 1998. 
Since the terrorist attacks of September 2001, and the subsequent anthrax incidents, there has been concern about the ability of the federal government to prepare for and coordinate an effective public health response to such events. More than 20 federal departments and agencies carry some responsibility for bioterrorism preparedness and response. Emergency response is further complicated by the need to coordinate actions with agencies at the state and local level, where much of the response activity would occur. The President's proposed Homeland Security Act of 2002 would bring many of the federal entities with public health preparedness and response responsibilities into one department to mobilize and focus assets and resources at all levels of government.
The proposed reorganization has the potential to repair the fragmentation in the coordination of public health preparedness and response at the federal, state, and local levels. In addition to improving overall coordination, the transfer of programs from multiple agencies to the new department could reduce overlap among programs and facilitate response in times of disaster. However, there are concerns about the proposed transfer of control from the Department of Health and Human Services to the new department for public health assistance programs that have both basic public health and homeland security functions. Transferring control over these programs, including priority setting, to the new department has the potential to disrupt some programs that are critical to basic public health responsibilities. The President's proposal is not clear on how both the homeland security and the public health objectives would be accomplished.
Before implementing the nationwide 800-number service, SSA delivered most of its services to the public face-to-face in an SSA field office. In 1989, SSA implemented a national, toll-free 800 number to better enable individuals to request information on SSA programs or report events that affect their own or someone else’s SSA records or payments. SSA set up the 800-number service with the expectation that callers would ask basic questions and conduct simple business transactions, such as reporting address changes and scheduling field office appointments. When a call came into the 800 number, it would be routed to a local SSA call center, a strategy that resulted in high busy-signal rates. To address this problem, SSA in 1996 added a nationwide automated menu to the 800 number that allowed callers to conduct a limited number of transactions without speaking to an agent. In 1997, we identified a number of conditions that limited the effectiveness of SSA’s 800-number service. For one, callers often reached a busy signal instead of the automated menu or an agent. In addition, the automated menu offered only a limited number of services. To reach an agent, callers were required to select a specific topic about which they wished to speak to an agent so that the system could direct their call to an agent in a call center with the requisite subject matter expertise. This routing strategy led to some call centers being overwhelmed with calls. Also, because agents could not transfer calls, callers sometimes were inconvenienced by having to redial the 800 number to complete their business. Since the introduction of its nationwide 800-number service, SSA has worked to keep pace with the public’s growing demand for telephone services and interest in conducting more complex transactions over the telephone. Today, calls made to the 800 number are answered at 44 geographically dispersed locations.
A call placed to the 800 number may be answered by agents located in any one of SSA’s 36 teleservice centers, 6 program service centers, or 2 components within SSA’s Office of Central Operations. Figure 1 shows the locations of these call centers within the 10 SSA regions. SSA staffs its 36 teleservice centers with approximately 4,060 teleservice representatives who answer incoming calls to the 800 number. In addition, each of SSA’s six program service centers, which are co-located on teleservice center sites, has designated specialists, called “SPIKES,” who have been cross-trained to provide back-up support in answering 800-number calls during peak call volume periods. The SPIKE staff comprises various technical staff in the program service centers whose routine responsibilities include processing claims, mailing out notices, managing SSA’s debt collection activities, and handling reports of non-receipt of checks and representative payee issues. SSA employs a cadre of approximately 2,030 trained SPIKES in its six program service centers. When the volume of calls is expected to exceed the levels that teleservice representatives can handle, SSA activates SPIKES, diverting them from their routine responsibilities to answer incoming 800-number calls. These peak calling periods typically occur on the first day of the week, the first week of the month, and the first 3 months of the year. In this report, we refer to teleservice representatives and SPIKES as “agents” and to teleservice centers and program service centers as “call centers.” SSA’s Office of Telephone Services (OTS) plans, implements, operates, and evaluates SSA telephone service to the public delivered by way of the national 800 number and field offices. OTS plans and conducts studies, pilots, and analyses of 800-number and field office telephone operations to assess and improve the service.
It also provides direct support to call centers and field offices, including developing and communicating uniform operating policies and procedures. OTS staff works closely with SSA’s vendor, which supplies and manages the network hardware, software, and telephone equipment used to support the 800-number service. OTS also manages the 800-number network operations, designs and administers call routing plans, monitors call handling, and adjusts call routing to handle emergency situations. Full-time SSA agents spend much of their time answering calls. These calls may cover a broad range of inquiries about SSA programs and procedures. Figure 2 shows the 10 most frequent reasons for calls to the 800 number in fiscal year 2003. Agents’ time off the phone, such as for staff meetings, training sessions, or annual leave, must be scheduled months in advance so that network operations can continue without interruption. SSA sets goals for telephone access and agent services and measures performance in these areas. In recent years, to measure access, SSA calculated the number of calls handled, the number of calls that reached the 800 number on their first attempt, and the number of calls that reached an agent within 5 minutes of selecting the option to speak with an agent. In fiscal year 2005, SSA replaced these measures with two new access performance measures—the average speed of answer and the agent busy rate—consistent with standards in the telecommunications industry. SSA also expects agents to adhere to agency guidance and procedures and sets standards for and measures agent accuracy (i.e., compliance with SSA’s requirements when serving callers) and agent courtesy. The Office of Quality Assurance and Performance Assessment (OQA) measures the accuracy of information agents provide callers by listening in daily to a statistical random sample of calls handled by agents nationwide.
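The two access measures mentioned above (average speed of answer and agent busy rate) can be computed directly from per-call records. The following is a minimal sketch, with invented field names, of how such measures are typically derived; it is an illustration, not SSA's actual reporting code:

```python
# Hedged sketch of the two industry-standard access measures the report says
# SSA adopted in fiscal year 2005. Field names ("outcome", "wait_seconds")
# are assumed for illustration.

def average_speed_of_answer(calls: list) -> float:
    # Mean wait, in seconds, across calls that were answered by an agent.
    answered = [c["wait_seconds"] for c in calls if c["outcome"] == "answered"]
    return sum(answered) / len(answered) if answered else 0.0

def agent_busy_rate(calls: list) -> float:
    # Share of agent-seeking calls that received a busy message.
    if not calls:
        return 0.0
    busy = sum(1 for c in calls if c["outcome"] == "busy")
    return busy / len(calls)
```

A day's call log would be fed to both functions to produce the figures SSA monitors alongside its other network statistics.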
OQA assesses accuracy based on whether agents adhered to SSA requirements when responding to callers’ inquiries. As shown in table 1, agents are expected to provide callers a broad range of services. OQA also periodically surveys 800-number callers to assess, among other things, callers’ perception of agent courtesy. Despite making improvements to its 800-number systems, SSA still has difficulty keeping pace with caller demand for agent assistance. Since 2001, SSA has made improvements to its telephone systems, management, and services to improve caller access to the 800-number network. Specifically, the new enterprise-wide network improved incoming call routing and network capacity; enhanced SSA’s ability to manage network operations, forecast call volumes, and set staffing levels; and expanded automated and agent services. However, callers continue to demonstrate a preference for speaking with an agent over using the automated service menus. In fiscal year 2004, about 51 million callers requested to speak to an agent. Of these calls, 8.7 million, or 17 percent, did not get through to an agent—an increase of about 2 percentage points over the previous year. SSA upgraded the network to help overcome past access problems. One major upgrade was the replacement of the geographically based routing system with a nationwide routing system capable of distributing calls to any agent within the network. This change gave SSA the ability to monitor call traffic and agent availability in real time at each call center and receive “cradle to grave” management information on a call’s movement from the time the caller dials the 800 number until the call is terminated. The network also effectively eliminated the busy signal that callers encountered when using the older system. The new system accepts all calls made to the 800-number network and provides callers with a broad range of automated services. Calls seeking agent assistance are distributed to 1 of SSA’s 44 answering sites.
When callers dial the 800 number, the network provides a series of prompts to direct them to the desired services. The network uses recorded announcements and pre-set menu prompts to separate callers according to language preference (i.e., English or Spanish) or type of telephone service (i.e., touchtone or rotary dial). The network uses a digitized voice to read menu selections to the caller and responds to caller-entered touch-tone digits. The caller’s selection can invoke a number of options, such as playing a recorded announcement (e.g., on cost-of-living adjustments) or transferring a call to an agent. SSA provides callers with an extensive menu of available automated services before offering them the option of requesting agent assistance. SSA told us that the menus were set up this way to offer callers an opportunity to conduct their business using automated services before forwarding their calls to agents. When a caller indicates a preference for agent assistance, the network determines the optimum destination for the call. It reviews, among other factors, agent availability, the number of calls in queue, and the minimum expected delay. If all agents are busy and call queues are filled to capacity, the network delivers an agent busy message to callers, advising them that heavy call volume prohibits the transfer of their call to an agent and encouraging them to call back during periods of typically lower call volumes. A call placed in agent queue remains queued until an agent becomes available. The network applies treatments to calls waiting in agent queue, such as announcements promoting the use of SSA’s Web site. According to SSA, if the wait time in an agent’s queue exceeds 15 minutes, the call is re-routed to another agent and given priority over other incoming calls. The network continually tracks the status of each call until the caller disconnects.
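The routing rules described above (connect to a free agent where one exists, queue the call when all agents are busy, and play a busy message only when every queue is full) can be sketched in a few lines. The center names, queue capacity, and the exact selection rule here are illustrative assumptions, not SSA's or the vendor's actual implementation:

```python
# Illustrative sketch of the agent-routing decision the report describes.
# A real system would weigh expected delay across sites; this sketch simply
# prefers the site with the most free agents, then the shortest queue.
from dataclasses import dataclass, field

@dataclass
class CallCenter:
    name: str
    free_agents: int
    queue: list = field(default_factory=list)
    queue_capacity: int = 10  # assumed value, not from the report

def route_call(call_id: str, centers: list) -> str:
    # 1. Prefer a center with a free agent.
    staffed = [c for c in centers if c.free_agents > 0]
    if staffed:
        best = max(staffed, key=lambda c: c.free_agents)
        best.free_agents -= 1
        return f"connected:{best.name}"
    # 2. No free agents: queue the call at the site with the shortest queue.
    open_queues = [c for c in centers if len(c.queue) < c.queue_capacity]
    if open_queues:
        shortest = min(open_queues, key=lambda c: len(c.queue))
        shortest.queue.append(call_id)
        return f"queued:{shortest.name}"
    # 3. Every queue at capacity: deliver the agent-busy message.
    return "busy-message"
```

The 15-minute re-routing rule would be a separate timer over the queued calls, promoting a long-waiting call to priority placement at another site.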
Although the network was designed to hold one call per agent in queue, the vendor told us that it typically holds up to 1.65 calls in queue per agent. SSA and the vendor have taken measures to ensure the integrity of network-generated data and the continuous operation of the network. Both SSA and the vendor conduct ongoing tests of the accuracy and completeness of the network-generated data on which so much of SSA’s 800-number related performance measurement, management decisions, and staffing levels depend. The vendor told us that redundancy was built into the network to ensure that the failure of any one component affected only existing calls. For example, if one component fails, the network automatically employs a backup execution path to bypass the problem location and reroutes calls to one of the remaining call centers. According to the vendor, the redundancy built into the 800-number network and the geographical dispersion of its redundant functions make a complete system outage highly unlikely. Vendor staff told us that the local outages that occur on occasion are mainly caused by loss of network facilities, extended local power failures, or hardware issues. SSA and the vendor maintain backup databases critical to network operations. SSA takes several additional steps to help ensure that callers can access 800-number services. SSA network operations staff frequently calls the 800-number network to test the integrity of the main menu scripts and the routing of calls to both automated and agent services. They evaluate calls for proper routing through the option choices; proper functionality of the automated scripts; proper routing to agents and agent queues; and the quality and clarity of the connection. Call centers also have systems administrators who monitor the performance of the equipment used on the premises and notify headquarters when any anomalies appear.
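The failover behavior the vendor describes, in which traffic bypasses a failed component along a backup execution path, reduces to health-checked route selection. The component names in this sketch are hypothetical:

```python
# Illustrative sketch (assumed names, not the vendor's design) of routing
# around a failed component: use the primary path when healthy, otherwise
# fall back to the first healthy backup path.
def pick_route(primary: str, backups: list, healthy: set) -> str:
    for path in [primary] + backups:
        if path in healthy:
            return path
    raise RuntimeError("no healthy route available")
```

The geographic dispersion the vendor cites amounts to ensuring the backup list for any site points at components in different locations, so one local power failure cannot empty the healthy set.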
System administrators are responsible for keeping the phones and headsets working, troubleshooting problems with desktop applications, and monitoring computers, printers, and management information data. If the administrators notice any problems, they are responsible for notifying headquarters so that the vendor can dispatch a technician to initiate repairs. SSA takes advantage of the wealth of management information at its disposal to monitor ongoing network operations and plan for the future. SSA forecasts call volumes and schedules the appropriate number of agents in accordance with anticipated demand based on historical data. These forecasts allow SSA to group days into specific levels depending on the anticipated volume of calls. For example, the busiest days—”Level 1” days—require the greatest number of SPIKES to be activated to answer phones. SSA sets and tracks SPIKE commitments to help ensure that enough SPIKES will be available networkwide to answer the volume of incoming calls. Depending on network conditions, managers may adjust the number of available agents and the routing of calls to align available 800-number resources with caller demand. SSA adjusted its call volume forecast downward 5 times each in fiscal years 2003 and 2004, allowing SPIKES scheduled to answer 800-number calls to return to their other assigned duties. SSA uses real-time data to monitor call traffic, caller activity, and system performance. SSA can use these data to track overall incoming calls and information on automation or to determine whether calls were routed to an SSA call center or to a busy message. SSA monitors such 800-number network statistics as calls made to the network, calls offered to agents, agent staff levels, average speed of answer, and agent busy rate.
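The day-level grouping and SPIKE activation described above can be illustrated with a short sketch. The volume thresholds and calls-per-agent figure are invented for illustration; only the approximately 4,060 teleservice representatives comes from the report:

```python
# Hedged sketch of grouping forecast days into levels and deciding how many
# back-up SPIKE agents to activate. Thresholds and per-agent capacity are
# assumed values, not SSA's actual planning parameters.

def day_level(forecast_calls: int) -> int:
    # Level 1 = busiest (e.g., first week of the month); Level 4 = lightest.
    if forecast_calls >= 400_000:
        return 1
    if forecast_calls >= 300_000:
        return 2
    if forecast_calls >= 200_000:
        return 3
    return 4

def spikes_to_activate(forecast_calls: int, calls_per_agent_day: int = 80,
                       teleservice_reps: int = 4060) -> int:
    # Activate SPIKES only for demand beyond what full-time reps can absorb.
    agents_needed = -(-forecast_calls // calls_per_agent_day)  # ceiling division
    return max(0, agents_needed - teleservice_reps)
```

A downward forecast revision simply lowers `forecast_calls`, which is how a scheduled SPIKE commitment can be released back to routine duties.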
Staff also monitor cable and national news for events, such as inclement weather, news stories on Social Security, or homeland security incidents, to determine what impact they may have on projected 800-number call volumes. Furthermore, SSA monitors caller usage of the automated menus and reshuffles automated options to keep the most popular options first. SSA performs limited checks of the network-generated data. Upon receiving the data electronically, SSA runs the data through a multistep automated procedure that backs up the data and converts it to a readable format. As part of this process, SSA checks each record to ensure that all area codes are valid, all phone numbers are properly formatted, and all listed phone numbers originate in the 800-number network. The vendor also generates separate reports on automated services and agents. SSA reviews the reports and compares the results with historical trends. Although SSA has no additional means of verifying the reliability of the vendor-provided data or the results that appear in report field outputs, both SSA and the vendor maintain that these data are accurate, and the vendor states that SSA has the source data it needs to assess network performance. Since the inception of the nationwide 800 number and the later introduction of limited 24-hour automated services, SSA has continually improved the quality and quantity of services available to callers. In 1996, SSA introduced voice-recognition applications and added an option allowing callers to replace their Medicare card by phone. In 1998, SSA implemented five new automated service options to handle inquiries surrounding the increased number of Social Security statement mailings. By 2002, SSA had made the full range of automated services available in the Spanish language. Callers may access the automated services at any time in English or Spanish to obtain services, information, or forms. Table 2 lists the services available through the 800-number automated menus.
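The per-record checks described above (valid area codes, properly formatted phone numbers, and in-network origination) amount to simple field validation. The following sketch uses an invented area-code list and field names; it illustrates the kind of check, not SSA's actual procedure:

```python
# Illustrative sketch of validating a vendor-supplied call record. The
# valid-area-code set and record fields are stand-ins for illustration.
import re

VALID_AREA_CODES = {"212", "410", "415", "512"}  # assumed sample

def validate_record(record: dict) -> list:
    # Return a list of error codes; an empty list means the record passed.
    errors = []
    phone = record.get("phone", "")
    if not re.fullmatch(r"\d{10}", phone):
        errors.append("bad-format")
    elif phone[:3] not in VALID_AREA_CODES:
        errors.append("bad-area-code")
    if not record.get("in_network", False):
        errors.append("outside-network")
    return errors
```

Running every record through such a check during the backup-and-convert step is what lets downstream reports be compared against historical trends with some confidence in the inputs.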
SSA has adopted the telephone industry “best practice” of taking care of all of the caller’s business during the initial contact. Agents have been trained to answer a wide range of inquiries and have the capacity to transfer calls they cannot handle to others who can. For example, in 1998, SSA began allowing callers to file claims for retirement and survivors’ benefits immediately through the 800 number, eliminating the need for the caller to leave a message and wait for another SSA agent to return the call. In 1999, SSA gave agents access to a computer-based application to assist them in handling telephone calls more efficiently. In 2002, SSA provided callers the option of having their call routed to a designated group of bilingual agents. SSA also extended the hours of agent availability nationwide. Agents are now available weekdays from 7 a.m. to 7 p.m. in each time zone. In addition, SSA provides unadvertised agent service for extended hours on weekday nights and weekends. SSA also provides agent service for the hearing impaired through a separate toll-free number. In following SSA’s instructions to handle all of the caller’s business needs, agents may be performing tasks that limit their availability to answer calls. During site visits, we observed agents who filled out forms by hand, retrieved printouts, placed the mailings in an envelope, addressed the envelopes by hand, and put the envelope in the mail slot, while the caller remained on hold. While these steps may help give callers the assurance that their business is being completed, such manual tasks are time-consuming and potentially limit the number of calls that agents can handle. Although the number of calls placed to the 800 number has increased slightly since fiscal year 2002 and SSA has expanded services available through automation, the agency continues to have difficulty keeping pace with caller demand for live agent assistance.
Figure 3 shows the calls made to the 800 number since fiscal year 2002, when SSA’s most recent telephone network upgrade was fully implemented. The proportion of calls to the 800 number indicating a preference for agent assistance has been relatively consistent, whereas SSA had hoped that the introduction of automated services would divert calls away from agents to the less costly, self-service automated system. Such a reduction would be consistent with the call center industry trend toward self-service with minimal agent intervention. However, agents continue to answer the majority of calls, including some calls that, according to agents, could easily be handled through automation. The percentage of calls seeking agent assistance but not getting through declined from 22.7 percent in fiscal year 2002 to 15.2 percent in 2003, but rose about 2 percentage points in fiscal year 2004. Specifically, as figure 4 shows, 8.7 million (or 17.2 percent) of the 51 million calls seeking agent assistance in fiscal year 2004 did not get through. About half of these calls encountered a busy message, and the other half were abandoned while waiting in queue. Managers of private call centers do not place much importance on call abandonment rates for several reasons, including their belief that callers terminate calls to visit the Web site. Some callers who request SSA agent assistance may be able to satisfy their needs through the automated menu or Web site. However, callers whose business requires agent assistance cannot complete that business if they are unable to reach an agent. SSA offers a variety of possible reasons why callers abandon their calls after being placed in queue for an agent, one being that customers simply do not want to continue waiting any longer before having an opportunity to speak to an agent.
SSA has several initiatives underway to reduce the number of abandoned calls in queue, including a call-back service, which will provide callers kept in queue beyond a certain threshold with an opportunity to enter their telephone number and select a contact time so that an agent can call them back. While providing convenience to callers and potentially using any agent “down” time more efficiently, a call-back option also has the potential to increase agent workload. Since 2002, SSA’s 800-number automated menus have received progressively higher call volumes but handled fewer calls to completion. In addition, as shown in figure 5, the number of calls being abandoned without completing a transaction in the automated menus has steadily increased, culminating in fiscal year 2004, when nearly half of calls to automation were abandoned. Although SSA offers a number of possible reasons, it is unable to say with any degree of certainty why calls continue to be abandoned. In the past, SSA has conducted follow-up caller surveys to ask callers what had prompted them to abandon the automated services. The primary reason that callers gave for hanging up after an initial selection of an automated service was their desire to speak to an agent. According to SSA, many callers simply desire the security of human contact when providing the personal information that is required to transact business. SSA has eliminated the need for callers to redial; callers may now have their calls transferred from automated services to agent queue. However, this option will likely increase agents’ call burden. SSA intends to make its automated menu selections more accessible by introducing a speech-enabled main menu that would allow callers to simply speak their needs in response to directed questions. For example, rather than listening to a list of options, callers will be able to use their voice to narrow down available options and find the ones relevant to the services they seek.
SSA plans to implement this feature nationwide later in this fiscal year. SSA also redesigned its Web site in 2003 to improve its accessibility and usability in the hope of relieving the burden on the 800 number. The Web site now attracts over 30 million visitors a year, which SSA says has reduced the demand for direct service from 800-number and field office agents. SSA’s customer satisfaction surveys from 2002 and 2003 show that the percentage of survey respondents who said they would likely use the 800 number the next time they contacted SSA decreased from 75 to 61 percent. In contrast, the percentage of respondents who reported they were likely to use the Internet or e-mail to contact SSA increased by 2 percentage points, and the percentage of those who said that they would likely call or visit a field office increased by 10 percentage points. SSA has taken steps to help agents provide callers accurate information and comply with agency requirements, but still has problems with agents meeting its standards for accurate service. SSA provides agents with comprehensive training and equips them with on-the-job resources to help them provide accurate and consistent service. In addition, SSA monitors agents’ calls, compiles agencywide assessments of agent accuracy in handling calls, and identifies agent training needs. SSA’s own monitoring assessments for 1998 through 2003 found that the agency generally met its standard for agent accuracy in handling issues that had the potential to affect individuals’ benefit payments, but not its standard for “service accuracy,” handling issues that did not have the potential to affect benefits. SSA’s overall performance for service accuracy for fiscal year 2003 was 85.1 percent, below SSA’s 90 percent target. According to SSA’s assessment, agents’ failure to obtain the required identifying pieces of information from callers to verify their identity before accessing and disclosing information was the most frequently committed service error.
In fiscal year 2003, this error alone accounted for 28 percent of all service errors that SSA identified. SSA has taken several actions to help agents improve their performance, but these actions have not resulted in sustained improvements in service accuracy. SSA provides agents with comprehensive training to enable them to offer callers a broad range of services and to complete callers’ business on initial contact. The basic training curriculum consists of formal course work to teach agents about the agency’s programs, policies, and procedures, including rules for disclosing information to and accepting reports from callers; how to access, interpret, and enter data into SSA computer systems and databases; and how to query and interpret SSA records. As part of their basic training, agents take frequent tests, conduct mock interviews, observe experienced agents handling calls, and answer calls. The basic training curriculum for full-time agents at the call centers we visited ranged from 11 to 16 weeks. In addition, call center officials told us that they taught a modified 11- to 12-week course to back-up agents to augment their existing technical skills. Officials also told us that they supplemented the basic training with regional and call center training offerings, such as new employee orientation, diversity training, and public service training. After agents complete basic training, regions and call centers follow their own established practices to help agents transition to handling calls on their own. At the sites we visited, agents were mentored or closely supervised during a transitional period. For example, some call centers assigned a personal mentor to sit and observe agents handling calls and to provide prompt assistance, as needed. After spending a number of weeks with a mentor, agents are evaluated to determine their readiness to handle calls on their own.
As another transitional step, one call center placed agents in a training unit that had a higher supervisor-to-agent ratio to allow closer supervision and monitoring of agents’ work. Floor-support staff in one training unit said that, in addition to providing technical assistance, they review the accuracy of agents’ data entries for events, such as direct deposit requests and death reports. Depending on an agent’s proficiency, floor-support staff may review the agent’s work and provide daily feedback, or review the work less frequently as the agent demonstrates proficiency. Agents may receive subsequent training in a variety of ways. For example, training can occur during the 3-hour allotments reserved for monthly staff meetings. Call center staff and officials told us that these meetings were used as a forum to provide agents information on emerging issues, such as national and regional initiatives and changes in operating procedures, as well as feedback on the call center’s performance. During the workday, supervisors may provide agents with important information that agents need to know, such as generic responses to calls triggered by current media reports on Social Security solvency. We were told that agents also receive voluminous intra-agency communications, for which they may be allotted 15 minutes at the end of each workday to read. We were also told that supervisors and floor-support staff use various strategies to ensure that agents are aware of the most important changes. Call center managers and supervisors told us that, if needed, more time may be requested for agents to be off the telephones to receive additional training, such as hands-on computer training. To assist agents in providing callers with accurate and consistent services, SSA provides agents with the Customer Help and Information Program (CHIP)—a customized online computer application for providing services to 800-number callers.
CHIP helps agents navigate the comprehensive set of requirements and guidance for SSA programs and directs agents in the actions they should take to accurately complete callers’ business on initial contact. For example, if an agent enters an address change for individuals receiving Supplemental Security Income (SSI) benefits, CHIP displays screens prompting the agent to ask callers a series of questions about changes in living arrangements—events that may lead to an increase or decrease in SSI benefits. As another resource, the call centers we visited made more experienced staff available to help agents handle more complex or technical calls. Officials told us that such floor support was customary at call centers agencywide. SSA monitors agents’ handling of 800-number calls for payment accuracy and service accuracy. SSA assesses agent performance for payment accuracy in cases where agents’ responses on such matters as eligibility, filing of claims, or reportable events could potentially affect an individual’s eligibility or benefits. SSA also assesses agents’ performance for service accuracy to determine whether or not the services they provide correspond with SSA policies and procedures. When assessing service accuracy, SSA considers whether agents provided accurate information as well as performed all other related actions that the agency requires. Some of these actions are required as a matter of convenience to callers or to avoid the potential need for follow-up contact. SSA conducts random, remote monitoring of agents handling calls for various purposes. OQA is responsible for two types of monitoring. First, OQA monitors a statistical national sample of calls handled by agents throughout the year to develop both agencywide and regional assessments of agent performance. This type of monitoring serves as SSA’s means of assessing agent payment accuracy and service accuracy. 
OQA officials told us that such monitoring had the capacity to reveal issues that needed to be addressed at the agency level, such as pinpointing areas needing policy clarification. However, the responsibility for agent performance, including improving performance to meet agency targets, rests with the various regions and individual call centers. Second, if requested by regional officials, OQA occasionally monitors a small number of calls handled by individual call centers and visits the call centers to brief managers and agents on its findings. Call center staff also randomly monitor calls handled by their call center for payment accuracy and service accuracy and to identify training needs for their agents. SSA does not specify the number of calls that should be monitored for this purpose. Call center officials told us that the number of calls they monitored does not provide a statistically valid assessment of their center’s performance. Designated call center personnel also monitor individual agents to give them individualized feedback on their telephone performance. Monitors may point out positive aspects of agents’ performance as well as suggest additional training. Agents are given advance notice of when monitoring will occur and are allowed to choose whether to have monitors sit with them or to have monitors listen in from a remote location. For full-time agents, SSA guidance recommends monitoring as many as five calls per month for agents with more than 1 year of experience and unlimited calls for agents with less than 1 year. Officials told us that agents are given timely feedback on assessments of their overall performance. Some officials also said that when monitors observe agents making an error, they may interrupt the call to instruct the agent on the correct procedure.
Although SSA takes a number of actions to help agents provide callers accurate information in accordance with agency policies and procedures, agents still have problems meeting SSA’s standard for service accuracy. As shown in figure 6, from fiscal year 1998 through fiscal year 2003, SSA generally met its 95 percent target for payment accuracy—having agents correctly handle inquiries involving eligibility and benefit payment issues—but not its 90 percent target for service accuracy—having agents handle nonpayment-related issues according to agency requirements. SSA reported that its overall performance for payment accuracy in fiscal year 2003 was 95.9 percent, and the performance for each of its 10 regions was similar. However, SSA reported its overall performance for service accuracy in fiscal year 2003 was 85.1 percent. Based on OQA’s assessment, as few as four regions may have met the 90 percent service accuracy target in 2003. As shown in figure 7, for fiscal years 2001 through 2003, almost all regions had problems consistently meeting SSA’s established target for service accuracy. OQA identified 63 types of required actions that agents failed to take in fiscal year 2003 that led SSA to miss its service accuracy target. Agents’ failure to take these required actions resulted in service errors. As shown in table 3, the most frequent error stemmed from agents’ inadequate protection of individuals’ personal information. SSA protects individuals’ privacy by limiting disclosure of the personal information in its records to the individuals for whom the agency maintains the records and to others authorized to receive it. Agents committed an error each time they failed to collect the requisite six identifying pieces of information to verify a caller’s identity before accessing or disclosing information from SSA records (i.e., improper handling of access and disclosure).
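The two figures stated above (85.1 percent service accuracy in fiscal year 2003, with the access-and-disclosure error accounting for 28 percent of all service errors) imply roughly how much this single error depresses the overall rate; OQA itself performs a similar recomputation discussed later in this report. A quick check of that arithmetic, as a short Python sketch:

```python
# Back-of-the-envelope check using only figures stated in the report:
# FY2003 service accuracy was 85.1 percent, and the access-and-disclosure
# error accounted for 28 percent of all service errors identified.
overall_accuracy = 85.1            # percent, FY2003
error_share = 0.28                 # access/disclosure share of all errors

error_rate = 100.0 - overall_accuracy              # 14.9 points of error
adjusted = 100.0 - error_rate * (1 - error_share)  # roughly 89.3 percent
```

This lands within about a tenth of a point of the 89.2 percent OQA reported when it recomputed the rate without this error; the small gap presumably reflects rounding in the published inputs.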
Managers at the sites we visited have taken actions to reduce the number of service errors, particularly access and disclosure errors. For example, some call centers provided CHIP refresher training, designed desk aids reminding agents of the steps for proper disclosure, placed “hot pink” sheets detailing the service errors on the desks of agents who commit them, or established a “CHIP doctor” to provide agents with technical assistance to help navigate the CHIP computer application. However, the effectiveness of these actions to improve service accuracy for agents within the respective call center is unknown because the monitoring that occurs at the call center level does not provide a statistically valid measurement to make such an assessment. OQA has reported that the lower service accuracy rate primarily stemmed from agents’ failure to follow SSA’s requirements when asking callers to verify their identities. Expecting that such “access and disclosure” failures could be addressed through use of the CHIP application, SSA mandated its use in November 2001. The service accuracy rate subsequently improved for fiscal year 2002, but dropped the next fiscal year because, according to OQA, agents did not make optimal use of CHIP. OQA recomputed SSA’s service accuracy rate without the access and disclosure error for comparison purposes and reported that it would have increased from 85.1 to 89.2 percent in fiscal year 2003. SSA has not determined why agents fail to follow agency procedures when handling some calls, resulting in service errors. SSA uses training, call monitoring, and surveys to ensure that agents deliver courteous service, but does not routinely document or analyze all incidents of discourtesy or caller complaints. As part of its comprehensive, multiweek training curriculum, SSA teaches agents the interviewing and interpersonal skills they need to provide courteous service. It also determines through monitoring whether agents are being courteous.
Based on its monitoring results from 2001 through 2003, SSA reported that it found agents to be courteous to callers over 99 percent of the time. SSA also measures caller satisfaction with agent courtesy as part of its annual 800-number customer satisfaction survey. In 2004, 91 percent of respondents rated agent courtesy as good, very good, or excellent; 5 percent rated agent courtesy as fair; and 4 percent rated it poor or very poor. While SSA uses training, monitoring, and customer surveys to ensure courtesy, it does not have a uniform system for analyzing incidents and complaints of discourtesy. Call center staff told us that they typically apologize to callers and offer to provide the desired assistance whenever callers lodge complaints by phone. However, they may not record the complaint or attempt to capture and assess information on the nature of complaints. Customer service studies highlight the importance of paying attention to complaints and the benefits of having a good complaint management system. As part of its comprehensive, multiweek training curriculum, SSA teaches interviewing and interpersonal skills to help agents serve callers in a professional and courteous manner. The training includes instruction on how to establish rapport with callers, how to obtain information necessary to accurately serve callers’ needs, and how to end calls on a positive note. As a courtesy measure, agents are instructed to allow callers to end the call. Agents also receive training on how to respond to angry, loud, or abusive callers, including how to calm such callers, and how to continue serving them or to transfer those calls to supervisors. SSA also uses its call monitoring process to oversee courtesy levels and has procedures for immediate intervention to remedy any observed problem.
OQA procedures call for monitors to immediately inform management of a discourteous incident, prepare a written report for the agent’s call center manager, and retain a copy of the report in the event that a disciplinary action is taken against the agent. Call center managers who become aware of discourtesy allegations or observe agent discourtesy are required to follow similar procedures. They are required to discuss any incident with the agent and consider a progressive range of disciplinary actions from issuing a reprimand to terminating an agent’s employment. OQA officials told us that formal monitoring is time-consuming work. As a result, OQA said that over the years, it reduced the sample size of the monitored calls due to resource constraints. Regional and call center management expressed varied opinions as to whether the reduction in the number of monitored calls was an obstacle to identifying agent discourtesy. One call center manager told us that discourtesy was more likely to be observed by managers and supervisors patrolling work areas than through formal monitoring. On the other hand, one regional official noted that additional unannounced monitoring would be a more effective way of catching agent rudeness. Some of the managers and officials with whom we met, however, told us that they believed courtesy levels were very high and not a problem. According to the agency’s call monitoring records, SSA agents have performed at consistently high rates with regard to courtesy. For fiscal year 2003, OQA determined that based on 4,384 calls, agents had been courteous to callers 99.9 percent of the time. It reached similar conclusions from its 2001 and 2002 monitoring. SSA also relies on its annual survey of callers to assess and ensure agent courtesy. Callers who have used agent services have been asked, among other questions, to rate agent courtesy on a 6-point scale. 
The 2004 survey showed that 91 percent of the callers rated agent courtesy as being good, very good, or excellent; 5 percent rated it as being fair; and 4 percent rated it as being poor or very poor. These rates were about the same as those reported for the 2001 through 2003 surveys. Call centers at other organizations may use telephone or online surveys to obtain feedback from customers, although the actual administration of the surveys may vary. For example, one organization conducts telephone surveys using voice capture software to record customer responses. At the beginning of a call, the survey system randomly selects participants and asks them to participate in a 2- to 3-minute survey after they complete their call. Another organization conducts online surveys, sending a survey to selected customers via e-mail. In each case, these organizations seek to obtain customers’ views on their organization’s performance. SSA monitors calls and receives feedback from customers, but it does not systematically gather and assess this information to identify courtesy problems, such as particular problem locations or persistent patterns or trends. SSA agents handled an average of 40.9 million calls each year from 2001 through 2003. Even if agents were courteous 99.9 percent of the time, as OQA reported, for fiscal year 2003 that would still leave nearly 60,000 calls in which the agents may have been discourteous. However, because SSA does not routinely analyze the details of agent discourtesy observed through monitoring, it has no way of determining the circumstances or lessons learned from monitored calls. Studies conducted on customer service have shown that building relationships with customers and having a first-rate complaint management system are critical to maintaining good customer relations.
One study in particular noted that paying attention to customer complaints, regardless of how minor they may be, and addressing them quickly and completely helps satisfy customers and build trusting relationships. Similarly, people who contact their government agencies want to be heard and expect courteous and respectful treatment. It is therefore important for government employees to understand what their customers want and to take actions to ensure that their customers are satisfied. The study also noted that no matter how good the service or product is, occasions will invariably arise that result in customer complaints. However, it is important that when criticisms are voiced, they are systematically and promptly addressed. A good complaint management system can provide data and information on complaints that can be compiled and analyzed to give insight into where problems are recurring and what needs to be done to fix them or prevent them from happening in the future. A good complaint system also facilitates the filing of complaints using simple, yet comprehensive complaint forms. SSA’s 800-number customer satisfaction surveys are one means of gathering feedback from callers on agent courtesy. However, the survey does not ask why some respondents rate agent courtesy as poor. In addition, the agency does not routinely collect or analyze all caller complaints placed through the 800 number. Our visits to call centers found variation in how they handled such calls. When customers call the 800 number to report agent-related complaints, SSA guidance requires agents to refer calls to supervisors or floor-support staff. However, SSA does not provide guidance for how those receiving referrals should handle them. We were frequently told that call center staff receiving these calls typically apologize for the other agents’ rudeness and offer to provide service to the caller.
SSA provides call center staff a form to document 800-number service complaints, including agent lack of courtesy. However, SSA has not provided them agencywide guidance on documenting complaints or the type of information they should record to allow SSA to identify service issues or trends. We were given a variety of reasons why call center staff may not document agent-related complaints. One call center official told us that his site allowed agents to exercise judgment in deciding which complaints they documented. Some agents, supervisors, and technical staff told us they were unaware of procedures for handling such complaints, while others believed callers needed to provide sufficient information, such as the offending agent’s name or call center location, to lodge a formal complaint. It was our observation that 800-number agents may not provide their full name or mention their call center location when answering a call. SSA responds differently to customer-reported complaints sent to agency offices than to complaints registered on its Web site. Specifically, regional and call center officials said that, when warranted, they would attempt to identify the agent, investigate the merits of the complaints reported to their offices, and initiate disciplinary actions. Headquarters staff who receive complaints through the agency’s Web site told us that they routinely send customers a letter of apology but have no one designated within SSA to whom they can forward complaints for resolution. Although the Web site has an Internet-based form ostensibly designed to capture complaint information, it does not ask for specific information, such as the nature of the alleged act of discourtesy and the date and time it occurred. By not systematically collecting and analyzing information on alleged agent discourtesy, SSA is unable to identify service issues that may warrant corrective actions.
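The gaps described above suggest the minimal fields a uniform complaint record would need to capture. The following sketch is our assumption of what such a record might look like; the field set reflects the gaps this report identifies, not any actual SSA form or system design.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class DiscourtesyComplaint:
    """Hypothetical uniform record for an agent-discourtesy complaint.
    Field choices reflect the gaps this report identifies (nature of
    the incident, date and time), not any actual SSA form."""
    nature: str                        # what the caller says happened
    occurred_at: datetime              # date and time of the call
    channel: str                       # e.g., "800-number" or "web"
    agent_name: Optional[str] = None   # callers often cannot supply this
    call_center: Optional[str] = None  # likewise often unknown
    recorded_at: datetime = field(default_factory=datetime.now)
```

Making the agent's name and call center optional reflects the observation above that agents may not give callers that information; a complaint should still be recordable without it, leaving the date, time, and channel to help narrow down the agent involved.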
SSA’s toll-free phone service is an important resource for the thousands of people who call the number daily, and the steps the agency has taken in recent years demonstrate a commitment to quality service. The agency’s upgraded telephone system along with its expanded menu options and agent-assisted services has improved access in many respects by giving customers more services at their convenience. In addition, the agency has taken steps to help ensure that callers receive more accurate and courteous service. Even with good service, however, there is room for improvement. Improvements to the 800-number network have not necessarily ensured that callers receive the help they seek, given the number of calls not reaching an agent. This may not be a mounting problem if callers follow the general trend in the call center industry toward automation and self-service as they grow more comfortable with these options. However, the impending increase in the size of the retiree and disability populations, and anticipated changes to the Social Security system suggest that SSA may continue to experience a substantial proportion of callers who request agent assistance. Measures to improve customer access to agents may therefore be needed. In addition, SSA’s many benefit programs will continue to generate some complex questions that require agent assistance. Currently, the prevalence of service accuracy errors diminishes the quality of service that callers receive when they do reach an agent through the 800 number. Finally, although SSA’s estimates show instances of agent discourtesy to be rare among all calls, such instances could nonetheless affect tens of thousands of callers. Because SSA does not routinely capture information on all customer complaints about discourtesy, however, it loses the ability to assess the severity of the problem and misses opportunities to better understand caller needs, solve unanticipated problems, and retain the good will of the public.
To improve the quality of the 800-number telephone service, we recommend that the Commissioner of the Social Security Administration take the following three steps: Identify cost-effective ways that will help ensure that more calls seeking agent assistance get through to agents, such as streamlining the call-handling process, automating some mailings that agents now do by hand, or increasing the number of agents available to take calls. Conduct a comprehensive analysis of the source of service errors. For example, the agency might consider holding agent focus groups to gain insight into why agents tend to fail to comply with certain requirements. The agency could get agents’ views on the effectiveness of CHIP in helping them meet agency requirements. Establish procedures for documenting and assessing customer-reported complaints. In doing so, the agency should determine the types of information it needs to assess customers’ concerns and to provide the agency a means to identify and address service issues. We obtained written comments on a draft of this report from the Commissioner of SSA. In its comments, SSA said it was pleased that our report reflected the agency’s commitment to providing high-quality 800-number telephone service that meets the needs and expectations of its customers. SSA agreed with our recommendation to identify cost-effective ways to increase agent availability to handle 800-number calls and described several planned initiatives to improve agent productivity and to expand automated services. SSA also agreed with our recommendation to conduct a comprehensive analysis of the source of agent service errors. Accordingly, SSA said it would convene a workgroup to obtain feedback on the source of agent service errors and make recommendations, as appropriate, to improve the agency’s service accuracy level. SSA disagreed with our recommendation to establish procedures for documenting and assessing customer complaints.
SSA said that its findings that agent courtesy levels are consistently high demonstrate that its present approach to ensuring agent courtesy—which combines training, monitoring, and customer surveys—is working. Moreover, SSA said that, based on its experience with prior initiatives, a nationwide reporting system would require heavy resource expenditures and be cost prohibitive given current budget constraints. Furthermore, SSA stated that any use of agent resources to document complaints would be counterproductive to improving caller access to agent services. While we agree that agent courtesy levels are high and state this in the report, given the sheer volume of 800-number calls SSA receives, even relatively small percentages of callers encountering agent discourtesy could result in tens of thousands of callers not getting the service they deserve. Thus, we believe that SSA can benefit from having uniform procedures for documenting and assessing customer complaints, and we have added information to the report for further clarification. Experts believe that paying attention to customer complaints, however minor, and working to quickly resolve them is important to building relationships with customers. In addition, having information on complaints helps identify recurring problems and potential fixes and helps prevent future occurrences. Under SSA’s current practices, because the decision to document a complaint lies with the individual agent handling the call, customers contacting the 800 number have no assurance that SSA will review the merits of their complaints. Routinely documenting and assessing customer-initiated feedback could help the agency identify areas of concern to callers and reinforce the agency’s commitment to providing quality “citizen centered” service.
While we understand SSA’s concerns about resource constraints, we maintain that SSA can implement a system to document complaints using existing mechanisms, such as its 800-number feedback form and Internet form for complaints reported to its 800 number and Web site, respectively. As we state in the report, SSA already devotes time and staff to the documentation and handling of customer-reported complaints; however, such efforts are not done routinely. SSA states that its agents provide more efficient service when they keep the caller on the phone until the caller’s business and all agent actions are completed. We believe routinely documenting callers’ concerns would take no more time than completing callers’ other business. Further, the information could be collected uniformly in an electronic format that would facilitate analysis to improve service. As others have pointed out, a good system for managing complaints should be comprehensive, yet simple. Finally, we believe that understanding and responding to customer complaints are integral to the delivery of quality customer service. SSA’s comments are reproduced in appendix II. SSA also provided technical comments, which we have incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the interested congressional committees and the Commissioner of SSA and will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
The objectives of this study were to evaluate SSA’s actions for ensuring that callers have ready access to 800-number services and receive accurate and courteous service from agents. To do this, we reviewed published works, including the National Performance Review benchmarking reports, to identify industry benchmarks in areas key to our work and issues surrounding call center services. We also reviewed GAO and Office of the Inspector General (OIG) reports and SSA annual performance plans to identify what is currently known about SSA telephone service operations. To evaluate the quality of the 800-number service, we compared telephone system performance data compiled by a contractor for SSA and SSA’s Office of Quality Assurance and Performance Assessment (OQA) assessments of agent accuracy and courtesy to SSA’s established standards and, where applicable, to industry benchmarks. We used performance data from OQA’s monitoring of agents for fiscal years 1998 through 2003 and from OQA’s 800-number customer satisfaction surveys conducted in fiscal years 2001 through 2003. We reviewed OQA’s management reports for these activities. To develop information on the actions SSA takes at the headquarters level to ensure quality 800-number telephone service, we reviewed documents related to (1) SSA’s forecasts of call volumes and projected staffing levels for auxiliary agents; (2) services offered using the automated menu; (3) vendor-contracted services for the 800-number telephone systems hardware, software, and performance data; and (4) requirements for training agents, monitoring agent performance, and agent courtesy to callers. We interviewed officials in the Office of Telephone Services to obtain an understanding of the general operation of the 800-number telephone system, including the routing of calls; the compilation of performance data; and SSA’s actions to monitor the performance of the 800-number system and of the vendor.
We also interviewed OQA officials to obtain more detailed information on procedures for monitoring agents and surveying 800-number callers. In addition, we reviewed some complaints reported by the public over the agency’s Web site and interviewed officials in SSA’s Center for Program Support to discuss practices for handling complaints. We visited six call centers to observe the 800-number service operations at the regional and call center levels. At the locations visited, we observed officials monitoring their centers’ call traffic and agent availability in real time, officials monitoring agents handling live calls, and agents handling live calls from customers. We reviewed documentation call center officials provided on agent training, monitoring of agents, agent-related complaints received, and disciplinary actions taken against agents. We interviewed regional and call center officials having line-management, supervisory, floor-support, monitoring, and call-handling responsibilities to obtain information on call center operations and their experiences in providing telephone services and serving the public. The call centers we selected varied in the frequency and volume of calls they handled—three handled calls routinely and three on a back-up basis—and are not representative of call centers SSA-wide. To assess caller access and the reliability of the 800 number, we interviewed SSA officials and contacted selected vendor staff to obtain documents and data on the 800-number management and operations. SSA uses the management data and information supplied by the vendor to track all calls and transactions on the network, including data on overall incoming calls and information on automation, and to determine whether calls were routed to an SSA call center or to a busy message.
The vendor’s reporting system has internal alarms running on each server and an application that periodically checks each server’s vital functions, capacity, and environmental operating conditions against a predetermined set of normal operational conditions. Upon receipt, SSA runs the vendor-supplied data through a multistep automated procedure that backs up the data, creates data storage files, extracts data to be stored in other datasets, and recreates the data in a readable format. As part of this process, SSA checks each record to ensure that all area codes are valid, all phone numbers are properly formatted, and all answering telephones originate in the 800-number network. The vendor also generates separate reports on automated services and agents. SSA reviews the reports and compares the results with historical trends. Although SSA has no additional means of verifying the reliability of the vendor-provided data or the results that appear in report field outputs, both SSA and the vendor maintain that these data are accurate, and the vendor states that SSA has the source data it needs to assess network performance. We reviewed SSA performance data related to access and determined the data to be sufficiently reliable for the purposes of this report. To assess the reliability of OQA’s monitoring assessments of agents’ performance, we examined data reliability issues identified in an OIG report and interviewed OQA officials knowledgeable about the monitoring process and resulting data. In addition, we reviewed documentation and training materials, including monitoring instructions, evaluation data entry forms and desk aids, corrective action and evaluation feedback forms, and information regarding the statistical sampling of calls. In evaluating OQA’s sampling and weighting methodology, we determined that OQA’s methodology for monitoring agents’ payment and service accuracy appears to adequately represent the population of telephone calls.
Approximate confidence intervals were produced by OQA using standard formulas for proportions based on a simple random sample. As OIG previously reported, we also found that decisions regarding payment accuracy and service accuracy continue to be unverifiable because SSA does not maintain documentation of all monitored calls. We determined that the data were sufficiently reliable for our purposes, given these limitations. To assess the reliability of the survey of 800-number callers, we interviewed OQA officials about the survey and resulting data and reviewed documentation on the survey methodology, sampling, response rates, and sampling variability. We also reviewed a report contracted by the OIG regarding this measurement of customer satisfaction. This report concluded that the 800-number caller survey produced a reliable measurement of callers’ views of agent courtesy for the period measured, but that because the survey was administered only twice a year, it was unlikely that the survey results matched the true customer satisfaction across the entire year. Because the survey was recently limited to being conducted during a single 4-week period in March, we found that the survey results continue to be unrepresentative of callers’ responses throughout the year. We believe that seasonal events could affect customer satisfaction in different ways throughout the year. The survey response rate during the period 2001 through 2004 ranged from 53 percent to 71 percent. Although response rates within these ranges are not unexpected for this kind of telephone survey, it should be noted that as the response rate decreases, the certainty that the survey results represent the universe decreases. We determined that the survey data are sufficiently reliable for providing a general indication of customer satisfaction for the specified periods of administration.
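The "standard formulas for proportions based on a simple random sample" mentioned above are presumably the familiar normal-approximation (Wald) interval or a close variant. As an illustration, the sketch below applies that formula to the monitoring figures quoted earlier in this report (agents courteous on 99.9 percent of 4,384 monitored calls in fiscal year 2003); note that this approximation is known to be crude when the proportion is near 0 or 1.

```python
import math

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95 percent confidence interval for a proportion from
    a simple random sample; crude when p is near 0 or 1."""
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Applied to the courtesy figures quoted in this report:
# 99.9 percent courteous over 4,384 monitored calls.
low, high = wald_ci(0.999, 4384)   # roughly (0.998, 1.000)
```

Even the lower bound stays above 99.8 percent, which is consistent with the report's characterization of courtesy levels as consistently high despite the modest sample size.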
We conducted our work at SSA headquarters, Baltimore, Maryland; at regional offices in Birmingham, Alabama; Kansas City, Missouri; and Richmond, California; and at two call centers in each region. We conducted our work from September 2004 through July 2005 in accordance with generally accepted government auditing standards. The following individuals made important contributions to this report: Shelia Drake, Assistant Director, Jacquelyn Stewart, Analyst-in-Charge, Susan Bernstein, Michelle Fejfar, Jonathan McMurray, and Roger Thomas. | The Social Security Administration (SSA) at some point touches the life of nearly every American. Each day thousands of people contact SSA to file claims, update records, and request information from its 1,300 field offices, website, and national toll-free 800 number. Implemented nationwide in 1989, SSA's 800-number has become a principal contact point for millions of individuals seeking agency services. Congressional requesters asked GAO to review the quality of SSA's 800-number service in terms of caller access, the accuracy of agents' responses, and agent courtesy. Despite making improvements to its 800-number service, SSA still has difficulty keeping pace with caller demand for agent assistance. In 2001, SSA upgraded its 800-number network so that all callers could either access its automated services or be routed to the next available agent at any site in the network--a feat not possible under the previous system. The new network also enhanced SSA's ability to monitor and manage call traffic, agent availability, and network operations in real time to ensure the network's integrity and the consistent delivery of services. SSA also expanded its automated and agent-assisted services accessible through the 800-number network. However, SSA's expansion of its automated services to reduce agent call burden has not had its intended effect, as callers continue to show a strong preference for agent assistance.
In fiscal year 2004, about 51 million of the more than 71 million callers requested to speak to an agent. However, 8.7 million, or 17 percent, of these calls did not get through to an agent--a 2 percentage point increase over the previous year. SSA has taken steps to help agents provide callers with accurate information and consistent services, but still has problems with agents assisting callers in line with agency policies and procedures. SSA's training curriculum provides agents with a comprehensive overview of SSA programs. Agents are also encouraged to use available on-the-job resources, including a customized computer application that helps agents provide consistent service and accurate responses. Nevertheless, from 2001 through 2003, SSA did not meet its 90 percent target for service accuracy--that is, agents' performance in handling non-payment related issues in accordance with agency requirements. Although SSA has taken several actions to help agents improve their performance, including mandating agent use of the computer application, it has not yet determined why agent compliance with agency policies continues to fall short. SSA trains and monitors agents for courtesy and conducts periodic customer satisfaction surveys, but does not routinely capture all customer complaints about alleged agent discourtesy. Agents receive training on developing their interviewing and interpersonal skills, and SSA monitors agents to determine whether or not they are providing courteous service to callers. SSA monitoring indicates that agent courtesy levels are high. SSA solicits limited customer feedback on agent courtesy in its annual surveys and compiles general ratings, but these surveys do not ask callers for the reasons behind the ratings. Callers to the 800 number do complain of agent discourtesy, but SSA does not routinely document and assess all complaints. 
Some call center staff told us that when they receive allegations of agent discourtesy, they typically apologize for the discourteous service and may proceed to assist the caller without recording the complaint. SSA has feedback mechanisms in place to capture caller complaints, but these mechanisms do not do so in a manner that allows SSA to assess complaints and identify corrective actions needed. |
Since 2012, the government has made efforts to improve real property management. As we reported in 2016, the Office of Management and Budget (OMB) issued government-wide guidance—the National Strategy for the Efficient Use of Real Property—in 2015, which aligns with many of the desirable characteristics of effective national strategies that GAO has identified, including describing the purpose, defining the problem, and outlining goals and objectives. We concluded that the strategy is a major step forward that could help agencies strategically manage real property by establishing a government-wide framework for addressing real property challenges. Prior to issuing the National Strategy, OMB issued a 2012 Freeze the Footprint policy and subsequently issued its 2015 Reduce the Footprint policy, which direct agencies, respectively, to restrict growth and to take action to reduce square footage in their real estate inventories. As part of the implementation of these policies, agencies were required to submit a plan to OMB detailing how the agency intended to maintain or reduce the square footage of its real property inventory. We found that the agencies we reviewed in 2016 had outlined approaches to manage any growth in their portfolios, better utilize existing space, and identify and dispose of space no longer needed to support the agency’s mission. Despite this progress, significant challenges to managing real property in general, and excess property in particular, remain. Lack of Reliable Data: A lack of reliable data makes it difficult to accurately measure the amount of excess property. As we reported in 2015, this undermines efforts to effectively reform real property management and to judge progress in addressing the associated challenges. The data used to manage the government’s real property, the Federal Real Property Profile (FRPP), are unreliable due to challenges with the accuracy and consistency of data reported by federal agencies.
For instance, in 2014, we reported that GSA’s interpretation of utilization definitions (interpreting the terms unutilized and underutilized to apply only to properties in the disposal process) leads GSA to identify nearly all of its warehouses as utilized, even though some warehouses we identified had been vacant for as long as 10 years. Additionally, in 2015, we found that the federal government’s reported results from the Freeze the Footprint policy for fiscal year 2012 were overstated. Many reported reductions from the four agencies we reviewed were the result of actions other than actual space reduction, such as the re-categorization of space to another use or data errors. While we found in March 2016 that OMB and GSA have taken positive steps such as issuing guidance and implementing data validation procedures to improve the quality of FRPP data, we also found that GSA had not analyzed agencies’ collection or reporting practices or the limitations of the data. Certain key FRPP data elements, such as utilization status, continue to be inconsistently reported by agencies. As a result, we concluded that FRPP data may not fully reflect the extent of real property challenges faced by agencies or the progress they have made in addressing challenges in these areas. Furthermore, we found that the current lack of transparency regarding how agencies collect and report FRPP data increases the risk of using the data to guide decision-making, thereby limiting the data’s usefulness. We made several recommendations, which I will discuss later in my testimony, for improving the reliability of these data. Complex Disposal Process: Legal requirements can make the property disposal process lengthy and complicated. As the federal government’s property disposal agent, GSA follows a prescribed process for the disposal of federal properties reported as excess by federal agencies.
This process includes requirements that the property be screened first for potential use by other federal agencies, then by homeless assistance providers and state and local governments for other public uses. However, we found in 2011 that this process can be challenging for federal agencies. For example, the McKinney-Vento Homeless Assistance Act requires the federal government to screen excess, surplus, underutilized, and unutilized properties for suitability for homelessness services. We found that as of March 2014, at least 40,000 properties had been screened under the Act, but only 81 of them were being used by homelessness assistance providers. Requirements associated with the National Historic Preservation Act can also present a challenge. For example, VA officials we spoke to for a 2012 report told us that they were unable to demolish a 15,200-square-foot building at Menlo Park, California, that has been used as both a residence and a research building during its 83-year history. The building had been scheduled for demolition since 2001, but VA could not demolish it because of a historic designation. Costly Environmental Requirements: Agency disposal costs can outweigh the financial benefits of property disposal. Environmental requirements provide that necessary environmental remediation be completed prior to disposing of a property. However, as we found in 2012, the required environmental assessments and remediation can be expensive and time-consuming. For example, the Department of Energy (DOE) is responsible for remediation of contaminated nuclear weapons manufacturing and testing sites that include thousands of excess buildings contaminated with radiological or chemical waste. In June 2012, we reported that DOE officials told us that because their decontamination and disposal funds are limited, they might not be able to dispose of these buildings for many years.
In addition, in 2014 we reported that officials from the Departments of Energy and Interior told us that in many cases the cost of cleanup of old warehouses outweighs the potential sale or salvage price. Competing Stakeholder Interests: Stakeholder interests can conflict with property disposal or reuse plans. We found in 2012 that—in addition to Congress, OMB, and real property holding agencies—several other stakeholders have an interest in how the federal government carries out its real property acquisition, management, and disposal practices. These stakeholders may include state, local, and tribal governments; business interests in the local communities; historic preservation groups; and the general public. For example, in the case of VA, veterans’ organizations have had an interest in being consulted on plans to reuse or demolish VA’s historic buildings and on how those plans affect the services provided to veterans. In cases like these, final decisions about a property may reflect competing interests and broader stakeholder considerations that may not align with what an agency views as the most cost-effective or efficient alternative for a property. Limited Accessibility of Federal Properties: As we found in 2012, the locations of some federal properties can make property disposal difficult. For example, because DOE must locate buildings in remote areas that include acreage that can serve as security and environmental buffer zones for nuclear-related activities, officials reported that they demolish most excess buildings rather than resell or reuse them. Similarly, Interior officials reported that most of their buildings are located on public domain lands, lands held in trust, or in remote or inaccessible areas, and VA officials reported that most of their buildings are located on medical center campuses. Because these buildings may not be easily accessible, sales or conveyances of these buildings can be challenging.
For example, in 2014 we found that almost 80 percent of excess properties identified by the Department of Housing and Urban Development as suitable and available for public conveyance for homeless assistance were available for off-site use only—meaning that a homeless assistance provider would need to physically move the building in order to use it, which may not be feasible or worth the cost to homeless assistance providers. As discussed above, issues with the reliability of FRPP data—particularly the utilization variable—make it difficult to quantify the overall number of vacant and underutilized federal buildings. However, we have reported on some vacant properties in the Washington, D.C., area that illustrate the challenges associated with disposing of or repurposing vacant property. The Cotton Annex: This building, controlled by GSA as the federal government’s property disposal agent and located just a couple of blocks from the National Mall in Washington, D.C., is approximately 118,000 gross square feet and has been vacant since 2007 (see fig. 1). We found in 2016 that GSA’s recent attempt to exchange the property for construction services failed when GSA was unable to obtain sufficient value from the exchange, making the fate of this unneeded building unclear. St. Elizabeths: The west campus of St. Elizabeths, a National Historic Landmark in Washington, D.C., comprises 61 buildings on about 182 acres (see fig. 2). Many buildings have been vacant for extended periods of time and are in badly deteriorated condition. As we reported in 2014, GSA developed a plan in 2009 to establish a consolidated headquarters for the Department of Homeland Security on the site. Since then, GSA has completed construction of a new headquarters building for the Coast Guard, but most of the project has been delayed.
The estimated timeline for completing the project has been extended multiple times, from an initial estimated completion date of 2016 to an estimated completion date of 2021 based on a scaled-back plan as of 2015. As discussed below, we made recommendations for addressing these issues. GSA Warehouses: In 2014, we found that some GSA warehouses listed in FRPP as utilized had been vacant for as long as 10 years. GSA only lists warehouses as unutilized if they are already in the disposal process. This interpretation of utilization in FRPP caused GSA to list as utilized some warehouses that had been vacant for years. For example, see figure 3. We made a recommendation, discussed below, for improving GSA’s management of its warehouses. In recent years, we have made recommendations to GSA and other federal agencies that, if implemented, would increase the federal government’s capacity to manage its portfolio and document any progress of reform efforts. The Comptroller General highlighted our highest-priority recommendations to GSA in an August 1, 2016, letter to the GSA Administrator. Of the six open recommendations, the letter included the following three related to excess and underutilized property: In April 2016, we recommended that, to improve the quality and transparency of FRPP data, GSA, along with OMB and federal agencies, (1) assess the reliability of the data by determining how individual agencies collect and report data for each field, (2) analyze the differences in collecting and reporting practices used by these agencies, and (3) identify and make available to users the limitations of using FRPP data. GSA and OMB partially agreed with our recommendation, but GSA noted that it is the responsibility of individual agencies to ensure reliability of the data and compliance with FRPP definitions.
OMB also noted that FRPP data are currently only being used by the individual agencies entering the data, and that the data are reliable for (and the limitations known by) the individual agencies. GSA has taken some action to implement the recommendation, including collecting information on individual agencies’ internal guidance and the processes used to collect data. In June 2016, GSA staff briefed us on additional steps they are taking to improve FRPP’s usefulness as an analytical management tool. We are currently assessing the reliability of the federal government’s fiscal year 2014 property disposal statistics. In November 2014, we recommended that GSA articulate a strategy for its role in promoting effective and efficient warehouse management practices across the federal government, a process that could include developing and disseminating warehouse management guidance and supporting agencies as they assess their warehouse portfolios. GSA agreed with our recommendation and is taking steps to implement it. Specifically, GSA has created an online resource page on Warehouse Asset Management Best Practices and, according to GSA officials, is in the process of developing a Guide for Strategic Warehouse Planning, which GSA plans to complete in 2016. In September 2014, we recommended that GSA and DHS work jointly with regard to the DHS headquarters project on the St. Elizabeths campus to (1) conduct a comprehensive needs assessment and gap analysis of current and needed capabilities and an alternatives analysis that identifies the costs and benefits of leasing and construction alternatives and (2) update cost and schedule estimates for the remaining portions of the project. According to agency documents and our interviews with DHS and GSA officials, DHS and GSA have made progress in developing an enhanced plan for the project.
In March 2015, DHS issued its National Capital Region Real Property Strategic Plan: Business Case Analysis, which outlines a revised construction plan for the St. Elizabeths campus as well as updated workplace standards for the department. Additionally, according to GSA officials and agency documents, GSA is leading efforts to revise the project’s cost and schedule estimates in a way that takes into account GAO’s leading cost-estimation practices. However, this recommendation remains open until these efforts are completed and the results assessed. We continue to monitor the implementation of these and our other real property recommendations. Finally, several real property reform bills have been introduced in Congress that could address the long-standing problem of federal excess and underutilized property. For example, the Federal Assets Sale and Transfer Act of 2016 could help address stakeholder influence by establishing a Public Buildings Reform Board to identify opportunities for the federal government to significantly reduce its inventory of civilian real property and reduce its costs. Additionally, the Public Buildings Reform and Savings Act of 2016 would promote consolidations and disposals by requiring, among other things, that GSA (1) justify to Congress any new or replacement building space in the prospectus, including reasons that it cannot be consolidated or collocated into other owned or leased space and (2) dispose of specific properties in Washington, D.C., including the Cotton Annex. Although both bills have passed the House of Representatives, neither one has been enacted yet. Chairman Mica, Ranking Member Duckworth, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information regarding this testimony, please contact David Wise at (202) 512-2834 or wised@gao.gov.
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Keith Cunningham (Assistant Director), Katie Hamer (Analyst in Charge), Luqman Abdullah, David Lutter, Sara Ann Moessbauer, Josh Ormond, Michelle Weathers, and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In 2003, GAO added “Federal Real Property” to its biennial High-Risk list, in part, due to long-standing challenges federal agencies face in managing federally owned real property, including disposal of excess and underutilized property. Continuing to maintain these unneeded facilities puts the government at risk of wasting resources due to ongoing maintenance costs as well as lost revenue from failing to sell excess property. Despite implementing policies and systems that may help federal agencies manage real property, the federal government continues to maintain excess and underutilized property. In fiscal year 2015, federal agencies reported over 7,000 excess or underutilized real property assets. This testimony addresses (1) efforts by the federal government to address excess and underutilized properties since 2012, (2) long-standing challenges to managing and disposing of federal real property, and (3) potential solutions to address these long-standing challenges. This statement summarizes the results of a number of previous GAO reports on real property utilization and management that were issued from 2011 through 2016.
GAO also included some updates based on follow-up conducted in 2015 and 2016 on the status of GAO's recommendations. Since 2012, the administration has taken steps to reform real property management and address the long-standing challenge of reducing excess and underutilized property. For example, in 2015, the Office of Management and Budget (OMB) issued government-wide guidance—the National Strategy for the Efficient Use of Real Property—which GAO found in 2016 could help agencies strategically manage real property. However, GAO's work has found that significant challenges persist in managing real property in general and excess and underutilized property in particular. They include a lack of reliable data with which to measure the extent of the problem, a complex disposal process, costly environmental requirements, competing stakeholder interests, and limited accessibility of some federal properties. Properties in the Washington, D.C., area such as the Cotton Annex building, vacant General Services Administration (GSA) warehouses, and buildings on the St. Elizabeths campus (pictured below) illustrate the challenges for disposal and re-utilization of vacant federal buildings. For example, GAO found in 2014 that real property data indicated some GSA warehouses were utilized when they had been vacant for as long as 10 years. In addition to the steps already taken by the administration, further action by federal agencies to implement GAO's previous recommendations could help to address some of these challenges. For example, GAO has made recommendations to GSA and other federal agencies that, if implemented, would increase the federal government's capacity to manage its portfolio and document the progress of reform efforts. GAO highlighted its highest-priority open recommendations to GSA in an August 2016 letter to GSA.
Among those are three recommendations related to excess and underutilized property, including a recommendation to assess the reliability of data collected and entered into GSA's Federal Real Property Profile database by individual federal agencies. Additionally, real property reform bills that could address the long-standing problem of federal excess and underutilized property have been introduced in Congress. Specifically, two bills have been passed by the House of Representatives in 2016, but neither has been enacted yet. |
Three years ago when I appeared before this Committee, I spoke about a large and growing long-term fiscal gap driven largely by known demographic trends and rising health care costs. Unfortunately, despite a brief period with budget surpluses, that gap has grown much wider. Last year’s Medicare prescription drug bill was a major factor, adding $8.1 trillion to the outstanding commitments and obligations of the U.S. government in long-term present value terms. The near-term deficits also reflected higher defense, homeland security, and overall discretionary spending, which exceeded growth in the economy, as well as revenues that have fallen below historical averages due to policy decisions and other economic and technical factors. While the size of the nation’s long-term fiscal imbalance has grown significantly, the retirement of the “baby boom” generation has come closer to becoming a reality. Given these and other factors, it is clear that the nation’s current fiscal path is unsustainable and that tough choices will be necessary in order to address the growing imbalance. The cost implications of the baby boom generation’s retirement have already become a factor in CBO’s baseline projections and will only intensify as the baby boomers age. According to CBO, total federal spending for Social Security, Medicare, and Medicaid is projected to grow by about 25 percent over the next 10 years—from 8.4 percent of GDP in 2004 to 10.4 percent in 2015. Although the Trustees’ 2004 intermediate estimates project that the combined Social Security Trust Funds will be solvent until 2042, program spending will constitute a rapidly growing share of the budget and the economy well before that date. Under the Trustees’ 2004 intermediate estimates, Social Security’s cash surplus—the difference between program tax income and the costs of paying scheduled benefits—will begin a permanent decline in 2008.
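A present value figure like the $8.1 trillion cited above is the single sum today that, at an assumed discount rate, would cover a projected stream of future payments. The sketch below is purely illustrative; the payment stream and the 3 percent discount rate are hypothetical, not the Trustees' or GAO's actual assumptions:

```python
def present_value(cash_flows, rate):
    """Discount a stream of future payments to a single present-value sum:
    PV = sum of cf_t / (1 + rate)**t for t = 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical example: $100 billion per year for 30 years, 3% real discount rate.
pv = present_value([100e9] * 30, 0.03)
print(f"${pv / 1e12:.2f} trillion in present value")  # prints: $1.96 trillion in present value
```

Because discounting compresses decades of obligations into one number, present value terms make long-range commitments comparable with today's budget totals, which is why they are used to express the size of the fiscal gap.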
To finance the same level of overall federal spending as in the previous year, additional revenues and/or increased borrowing will be needed in every subsequent year. (See fig. 1.) By 2018, Social Security’s cash income (tax revenue) is projected to fall below benefit payments. At that time, Social Security will join Medicare’s Hospital Insurance Trust Fund, whose outlays exceeded cash income in 2004, as a net claimant on the rest of the federal budget. The combined OASDI Trust Funds will begin drawing on the Treasury to cover the cash shortfall, first relying on interest income and eventually drawing down accumulated trust fund assets. At this point, Treasury will need to obtain cash for those redeemed securities through some combination of increased taxes, spending cuts, and more borrowing from the public than would have been the case had Social Security’s cash flow remained positive. Ultimately, the critical question is not how much a misleadingly labeled “trust fund” has in assets, but whether the government as a whole can afford the benefits in the future and at what cost to other claims on scarce resources. As I have said before, the future sustainability of programs is the key issue policy makers should address—that is, the capacity of the economy and budget to afford the commitment in light of overall current and projected fiscal conditions. GAO’s long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society and the significance of the related challenges the government will be called upon to address. Figures 2 and 3 present these simulations under two different sets of assumptions. In the first, we begin with CBO’s January baseline—constructed according to the statutory requirements for that baseline. Consistent with these requirements, discretionary spending is assumed to grow with inflation for the first 10 years and tax cuts scheduled to expire are assumed to expire.
After 2015, discretionary spending is assumed to grow with the economy, and revenue is held constant as a share of GDP at the 2015 level. In the second figure, two assumptions are changed: (1) discretionary spending is assumed to grow with the economy after 2005 rather than merely with inflation, and (2) the tax cuts are extended. For both simulations, Social Security and Medicare spending is based on the 2004 Trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. Medicaid spending is based on CBO’s December 2003 long-term projections under mid-range assumptions. As both these simulations illustrate, absent policy changes on the spending or revenue side of the budget, the growth in spending on federal retirement and health entitlements will encumber an escalating share of the government’s resources. Indeed, when we assume that recent tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay little more than interest on the federal debt. Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. Although revenues will be part of the debate about our fiscal future, making no changes to Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require at least a doubling of taxes—and that seems implausible. Accordingly, substantive reform of Social Security and our major health programs remains critical to recapturing our future fiscal flexibility. Although considerable uncertainty surrounds long-term budget projections, we know two things for certain: the population is aging and the baby boom generation is approaching retirement age. 
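The mechanics behind such scenario comparisons can be sketched in simplified form: compound spending and GDP at different annual rates and track spending's share of GDP. This toy model is not GAO's actual simulation; the starting share and growth rates below are hypothetical, chosen only to show how the two discretionary-spending assumptions diverge:

```python
def project_share(start_share, spend_growth, gdp_growth, years):
    """Project spending as a share of GDP when spending and GDP
    compound at different annual rates."""
    return start_share * ((1 + spend_growth) / (1 + gdp_growth)) ** years

# Hypothetical: spending starts at 20% of GDP, nominal GDP grows 4.5% a year.
# Scenario 1: discretionary spending grows only with inflation (2%).
# Scenario 2: discretionary spending grows with the economy (4.5%).
s1 = project_share(0.20, 0.02, 0.045, 10)
s2 = project_share(0.20, 0.045, 0.045, 10)
print(f"Scenario 1: {s1:.1%} of GDP; Scenario 2: {s2:.1%} of GDP")
# prints: Scenario 1: 15.7% of GDP; Scenario 2: 20.0% of GDP
```

The contrast illustrates why the inflation-only assumption produces a smaller projected fiscal gap: when spending grows more slowly than the economy, its share of GDP shrinks, whereas growing with the economy holds the share constant.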
The aging population and rising health care spending will have significant implications not only for the budget, but also the economy as a whole. Figure 4 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2004 Trustees’ intermediate estimates and CBO’s long- term Medicaid estimates, spending for these entitlement programs combined will grow to 15.6 percent of GDP in 2030 from today’s 8.5 percent. It is clear that, taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. The government can help ease future fiscal burdens through spending reductions or revenue actions that reduce debt held by the public, saving for the future, and enhancing the pool of economic resources available for private investment and long-term growth. Economic growth is essential, but we will not be able to simply grow our way out of the problem. The numbers speak loudly: Our projected fiscal gap is simply too great. Closing the current long-term fiscal gap would require sustained economic growth far beyond that experienced in U.S. economic history since World War II. Tough choices are inevitable, and the sooner we act the better. The retirement of the baby boom generation is not the only demographic challenge facing our nation. People are living longer and spending more time in retirement. As shown in figure 5, the U.S. elderly dependency ratio is expected to continue to increase. The proportion of the elderly population relative to the working-age population in the U.S. rose from 13 percent in 1950 to 19 percent in 2000. By 2050, there is projected to be almost 1 elderly dependent for every 3 people of working age—a ratio of 32 percent. Additionally, the average life expectancy of males at birth has increased from 66.6 in 1960 to 74.3 in 2000, with females at birth experiencing a rise from 73.1 to 79.7 over the same period. 
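The elderly dependency ratio cited above is simply the population 65 and over divided by the working-age population. A minimal illustration follows; the population counts are hypothetical, scaled only to reproduce the cited ratios:

```python
def dependency_ratio(elderly_millions, working_age_millions):
    """Elderly dependency ratio: the 65-and-over population
    as a share of the working-age population."""
    return elderly_millions / working_age_millions

# Hypothetical counts (in millions) scaled to match the cited ratios.
print(f"1950: {dependency_ratio(13.0, 100.0):.0%}")               # prints: 1950: 13%
print(f"2000: {dependency_ratio(19.0, 100.0):.0%}")               # prints: 2000: 19%
print(f"2050 (projected): {dependency_ratio(32.0, 100.0):.0%}")   # ~1 elderly per 3 workers
```

A rising ratio means each worker supports a larger share of retiree benefits, which is the arithmetic behind the budget pressure described here.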
As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. A falling fertility rate is the other principal factor underlying the growth in the elderly’s share of the population. In the 1960s, the fertility rate was an average of 3 children per woman. Today it is a little over 2, and by 2030 it is expected to fall to 1.95. The combination of these factors means that annual labor force growth will begin to slow after 2010 and by 2025 is expected to be less than a fifth of what it is today. (See fig. 6.) Thus, relatively fewer workers will be available to produce the goods and services that all will consume. Lower labor force growth will lead to slower growth in the economy and to slower growth of federal revenues. This in turn will only accentuate the overall pressure on the federal budget. Increased investment could increase the productivity of workers and spur economic growth. However, increasing investment depends on national saving, which remains at historically low levels. Historically, the most direct way for the federal government to increase saving has been to reduce the deficit (or run a surplus). Although the government may try to increase personal saving, results of these efforts have been mixed. For example, even with the preferential tax treatment granted since the 1970s to encourage retirement saving, the personal saving rate has steadily declined. (See fig. 7.) Even if economic growth increases, the structure of retirement programs and historical experience with health care cost growth suggest that higher economic growth results in generally commensurate growth in spending for these programs over the long run. In recent years, personal saving by households has reached record lows, while at the same time the federal budget deficit has climbed.
Accordingly, national saving has diminished, but the economy has continued to grow, in part because more and better investments were made. That is, each dollar saved bought more investment goods, and a greater share of saving was invested in highly productive information technology. The economy has also continued to grow because the United States was able to invest more than it saved by borrowing abroad, that is, by running a current account deficit. However, a portion of the income generated by foreign-owned assets in the United States must be paid to foreign lenders. National saving is the only way a country can have its capital and own it too. The persistent U.S. current account deficits of recent years have translated into a rising level of indebtedness to other countries. However, many other nations currently financing investment in the United States also will face aging populations and declining national saving, so relying on foreign savings to finance a large share of U.S. domestic investment or federal borrowing is not a viable strategy for the long run. In general, saving involves trading off consumption today for greater consumption tomorrow. Our budget decisions today will have important consequences for the living standards of future generations. The financial burdens facing the smaller cohort of future workers in an aging society would most certainly be lessened if the economic pie were enlarged. This is no easy challenge, but in a very real sense, our fiscal decisions affect the longer-term economy through their effects on national saving. Early action to change these programs would yield the highest fiscal dividends for the federal budget and would provide a longer period for prospective beneficiaries to make adjustments in their own planning. Waiting to build economic resources and reform future claims entails risks. 
First, we lose an important window during which today’s relatively large workforce can increase saving and enhance productivity, two elements critical to growing the future economy. We also lose the opportunity to reduce the burden of interest in the federal budget, thereby creating a legacy of higher debt as well as elderly entitlement spending for the relatively smaller workforce of the future. Most critically, we risk losing the opportunity to phase in changes gradually so that all can make the adjustments needed in private and public plans to accommodate this historic shift. Unfortunately, the long-range challenge has become more difficult, and the window of opportunity to address the entitlement challenge is narrowing. Although Social Security, Medicare, and Medicaid drive the long-term outlook, they are not the only federal programs or activities in which the federal government has made long-term commitments. At GAO, we are in the truth, transparency, and accountability business. A crucial first step is to insist on truth and transparency in government operations, including federal financial reporting, budgeting, and legislative deliberations. The federal government must provide a fuller and fairer picture of existing budget deficits, the misnamed “trust funds,” and the growing financial burdens facing every American, especially younger Americans. On the budget side, the current 10-year cash-flow projections are an improvement over past practices. But given known demographic trends, even these projections fail to capture the long-term consequences of today’s spending and tax policy choices. In my view, elected representatives should have more explicit information on the present value dollar costs of major spending and tax bills—before they vote on them. We believe that members of Congress, the President, and the public should have information about any long-term commitments embodied in a current policy decision. 
Some years ago, we developed the term “fiscal exposures” to provide a conceptual framework for considering the wide range of responsibilities, programs, and activities that may explicitly or implicitly expose the federal government to future spending. Fiscal exposures vary widely as to source, extent of the government’s legal obligation, likelihood of occurrence, and magnitude. They include not only liabilities, contingencies, and financial commitments that are identified on the balance sheet or accompanying notes, but also responsibilities and expectations for government spending that do not meet the recognition or disclosure requirements for that statement. By extending beyond conventional accounting, the concept of fiscal exposure is meant to provide a broad perspective on long-term costs and uncertainties. Fiscal exposures include items such as retirement benefits, environmental cleanup costs, the funding gap in Social Security and Medicare, and the life cycle cost for fixed assets. Given this variety, it is useful to think of fiscal exposures as lying on a spectrum extending from explicit liabilities to the implicit promises embedded in current policy or public expectations. Table 1 shows some selected fiscal exposures. As currently structured, these fiscal exposures constitute significant and in many cases growing encumbrances on the budgetary resources of the future. The current budget projections primarily focus attention on the 5- to 10-year budget window. While this is an important and appropriate frame for assessing the impacts of federal fiscal policy on the economy, longer-term estimates and projections can also help provide important perspective. At the macro level, the long-term fiscal models we and CBO have developed should help frame the near-term choices we face by bringing in information on their long-term impact.
At the micro level, better information on the longer-term costs of selected exposures—particularly those scheduled to grow rapidly—can help focus attention on those program commitments presenting significant fiscal burdens over the longer term. For example, in considering the prescription drug legislation, much controversy was focused on the specific 10-year cost estimate that should be used in the congressional consideration of this new entitlement. However, comparatively little attention was paid to the long-term costs that this new commitment would pose for future generations over a 75-year period—$8.1 trillion in present value terms, net of premiums. Since the full costs of this new entitlement increase significantly over the longer term, decision makers need to be better informed about the growth path and the impact on the nation’s finances beyond the 10-year window. The President and the Congress face the challenge of sorting out the many claims on the federal budget without the budget enforcement mechanisms—discretionary spending caps and pay-as-you-go (PAYGO) discipline—or fiscal benchmarks that guided the federal government through the years of deficit reduction into a brief period of federal surpluses. While a number of steps will be necessary to address this challenge, truth and transparency in financial reporting and budgeting are essential elements of any attempt to address the nation’s long-term fiscal challenges. The fiscal risks can be managed only if they are properly accounted for and publicly disclosed, including the many existing commitments facing the government. In addition, new budget control mechanisms will be required. So what can we do to frame information and decisions so that decision makers can appropriately focus on fiscal exposures? The variety of certainties—and uncertainties—associated with fiscal exposures means that no single approach to increasing attention to them will work in all cases. 
Instead, targeted approaches for different types of fiscal exposures would, I think, be most useful for incorporating a longer-term perspective into the budget. Changes in the information provided, the budget process, or budgetary incentives could be tailored selectively for different categories of fiscal exposures to improve transparency, prompt more deliberation about them, or improve budgetary incentives to address them. Several approaches could be used, depending on the type of program and information available: improve supplemental reporting on fiscal exposures, include fiscal exposures in the budget process, and include fiscal exposures in budget data. Figure 8 shows these alternative approaches and relates them to the primary objective that each could help achieve. For example, approach III, in which fiscal exposure cost estimates are incorporated directly into budget data, would help achieve the objective of improving budgetary incentives to address the fiscal exposures. Each approach could be implemented in a number of ways, which I will briefly discuss. The choice among approaches depends upon which primary objective is sought, and a number of options could be used to implement each. Improved supplemental reporting on fiscal exposures would make information more accessible to decision makers without introducing additional uncertainty and complexity directly into the budget. Estimates of the government’s exposures would be reported in various budget documents, but the current basis of reporting primary budget data—budget authority, obligations, outlays, and deficit/surplus—would not be changed. In some cases, improving supplemental reporting may simply be a matter of highlighting or expanding existing analytical work, such as continuing and improving long-range projections and simulations of the budget as a whole.
Other ways of providing additional supplemental information could be special analyses for certain significant fiscal exposures in the Analytical Perspectives of the budget or an annual report on fiscal exposures prepared by OMB. In the congressional budget process, greater focus could center on the long-term net present value of proposed new commitments for items where the 10-year estimate does not fully capture the dimensions of cost growth expected, similar to the Medicare prescription drug bill I mentioned earlier. But another idea that we have discussed in the past is to routinely report the future estimated costs of certain exposures as a separate notational line in the budgetary schedules in the President’s budget. For example, an estimate of the future operating and maintenance costs associated with capital acquisitions could be reported as the “exposure level” for capital accounts that include the initial capital acquisition costs. Similarly, the future funding needs associated with incrementally funded projects could be included with the budget account that includes the capital acquisition. And future environmental cleanup costs associated with an asset acquisition could be handled the same way. The exposure levels might be reported in present value terms. Including them as part of the budget presentations at the account level would make such information available along with the initial costs rather than in an additional document and would clearly show the potential future costs associated with current decisions. Budget process changes would go beyond simply providing more information on fiscal exposures to establishing opportunities for explicit consideration of these exposures. The Congress could modify budget rules to provide for a point of order against any proposed legislation that creates new exposures or increases the estimated costs of existing exposures over some specified level. 
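The present-value “exposure level” reporting described above can be made concrete with a small calculation. In this hypothetical Python fragment, the operating and maintenance (O&M) cost stream and the 4 percent discount rate are our illustrative assumptions, not budget guidance:

```python
# Hypothetical sketch of an "exposure level": discount a stream of future
# annual O&M costs back to the present. All figures are illustrative.

def present_value(cash_flows, rate):
    """Present value of annual costs paid at the end of years 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

om_costs = [10.0] * 20                     # $10 million per year for 20 years (assumed)
exposure = present_value(om_costs, 0.04)   # 4 percent discount rate (assumed)
print(f"Exposure level: ${exposure:.1f} million")  # about $135.9 million,
# versus $200 million if the stream were simply summed undiscounted
```

Reporting such a figure alongside the initial acquisition cost in the account-level budget schedules is the idea behind the notational “exposure level” line described above.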
Or, revised rules could provide for a point of order against any proposed legislation that does not include estimates of the potential costs of fiscal exposures created by the legislation. A different budget process approach would be to establish triggers that address the growth in existing exposures. In that case, triggers would be established to signal when future costs of exposures rise above a certain level. Reaching the trigger would require some action. For example, the Medicare drug law enacted in December 2003 requires the Medicare trustees to estimate the point at which general revenues will finance at least 45 percent of Medicare costs. If two consecutive trustee reports estimate that this level will be reached within the next 6 years, the President is required to include a proposal in his next budget and submit legislation to change Medicare so that the 45 percent threshold will not be exceeded. Congressional committees must then report Medicare legislation by June 30. Like points of order, a trigger would require explicit consideration of exposures facing the government without adding uncertainty to primary budget data. Incorporating estimated future costs of fiscal exposures directly into budget data by using accrual-based costs would represent the greatest change of the three approaches I have outlined today. Accrual-based costs could be used to measure budget authority and outlays for select programs when doing so would enhance obligation-based control. This approach is most suitable for explicit exposures for which reasonable cost estimates are available. For some time we have advocated the selective use of accrual measures in the budget to better reflect costs at the time decisions are made. For some major exposures, such as employee retirement benefits, insurance, and environmental clean-up costs, the use of accrual-based measurement would result in earlier cost recognition. 
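The earlier cost recognition that accrual-based measurement provides can be illustrated with a deliberately simplified example; the benefit amount, timing, and discount rate below are all hypothetical:

```python
# Contrast of cash vs. accrual recognition for a retirement benefit.
# Hypothetical numbers: a $50,000 benefit earned this year but paid in
# 25 years, discounted at an assumed 4 percent.

def accrual_cost_today(benefit_payment, years_until_paid, rate):
    """Present value of a future benefit, recognized in the year earned."""
    return benefit_payment / (1 + rate) ** years_until_paid

accrued = accrual_cost_today(50_000, 25, 0.04)
print(f"Accrual budgeting recognizes about ${accrued:,.0f} now.")
# Cash budgeting would instead show $0 now and the full $50,000 in year 25,
# long after the decision that created the commitment was made.
```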
This earlier recognition of costs improves information available to decision makers about the costs associated with current decisions and may improve the incentives to manage these costs. Because the future costs of some exposures are dependent upon many economic and technical variables that cannot be known in advance, there will always be uncertainty in cost estimates. Such uncertainty makes using accrual-based measurement directly in the budget more difficult. It may make sense for some exposures but not for others, because the certainty of the government’s commitment and the availability of reasonable, unbiased estimates vary across the different fiscal exposures. As I noted earlier, nothing less than a fundamental review, reexamination, and reprioritization of all major spending and tax policies and programs is needed. We at GAO believe we have an obligation to assist and support you in this endeavor. So I would like to take some time this morning to tell you more about the report we will soon be issuing on reexamining the base of government—both to tell you why we are issuing this report and to illustrate some of the specific questions we plan to raise. Having identified the large and growing fiscal challenges facing the nation and the other major trends and challenges facing the United States as outlined in our strategic plan for serving the Congress, we thought we should look to our work and provide examples of the kinds of hard choices stemming from those challenges—in the form of questions for policy makers to consider. These 21st century questions will cover discretionary spending; mandatory spending, including entitlements; as well as tax policies and programs—all in one accessible volume. Mr. Chairman, we are talking about a major transformational challenge that may take a generation to resolve. Traditional incremental approaches to budgeting will need to give way to more fundamental and periodic reexaminations of the base of government. 
Many, if not most, current federal programs and policies were designed decades ago to respond to trends and challenges that existed at the time of their creation. If government is to respond effectively to 21st century trends, it cannot accept what it does, how it does it, who does it, and how it gets financed as “given.” Not only do outmoded commitments, operations, choices of tools, management structures, and tax programs and policies constitute a burden on future generations, but they also erode the government’s capacity to align itself with the needs and demands of the 21st century. Confronting the fiscal imbalance would be difficult enough if all we had to do was fund existing commitments. But a wide range of emerging needs and demands can be expected to compete for a share of the budget pie. Whether it be national or homeland security, transportation or education, environmental cleanup or public health, a society with a growing population—and ours is projected to grow by about 50 percent by the middle of the 21st century—will generate new demand for federal action on both the spending and tax sides of the budget. Reexamining older programs and operations may enable us to free up resources to address some of these emerging needs. The specific 21st century questions were developed based on GAO’s strategic plan, which identified major trends that will shape the federal role in the economy and our society going forward. (See table 2.) These trends, along with GAO’s institutional knowledge and issued work, helped us identify the major challenges and specific questions. The specific questions were informed by a set of generic evaluation criteria useful for reviewing any government program or activity, which are displayed in table 3.
In the report, we will describe the forces at work, the challenges they present, and the 21st century questions they prompt, in each of 12 broad areas based in large measure on functional areas in the federal budget, but also including governmentwide issues and the revenue side of the budget. Table 4 lists those 12 areas, which involve discretionary spending; mandatory spending, including entitlements; and tax policies and programs—all of them are a part of the base. Our forthcoming report contains over 200 individual illustrative questions in these 12 areas. But today I would like to highlight for you—to give you a flavor of what the report will contain—several of the challenges we have inventoried in 4 of these areas, as well as some of the questions those challenges prompt.

In the past 15 years, the world has experienced dramatic changes in the overall security environment, with the focus shifting away from conventional threats posed during the Cold War era to more unconventional and asymmetric threats evidenced by the events of September 11, 2001. Concerns about the affordability and sustainability of the rate of growth in defense spending will likely prompt decision makers to reexamine fundamental aspects of the nation’s security programs, such as how DOD plans and budgets; organizes, manages, and positions its forces; acquires new capabilities; and considers alternatives to past approaches. To successfully carry out this reexamination, DOD must overcome cultural resistance to change and the inertia of various organizations, policies, and practices that became well rooted in the Cold War era. While DOD has taken steps to meet short-term operational needs, it still faces the fundamental challenge of determining how it will meet the longer-term concerns of reorganizing its forces and identifying the capabilities it will need to protect the country from current, emerging, and future conventional and unconventional security threats.
As DOD seeks to meet the demands of the new security environment, it continues to bear the costs of the past by maintaining or continuing to pursue many of the programs and practices from the Cold War era. Moreover, DOD faces serious and long-standing challenges in managing its ongoing business operations. Complicating its efforts are numerous systems problems and a range of other long-standing weaknesses in the key business areas of strategic planning and budgeting, human capital management, infrastructure, supply chain management, financial management, information technology, weapon systems acquisition, and contracting. In fact, DOD alone has 8 of the 25 items and shares in the 6 cross-cutting ones on our recently issued high-risk list. One particular operational challenge involves managing large and growing military personnel costs, which comprise the second largest component of DOD’s total fiscal year 2005 budget. The growth in military personnel costs has been fueled, in part, by increases in basic pay, housing allowances, recruitment and retention bonuses, and other special incentive pays and allowances. Health care costs have grown to comprise a larger share of the budget, reflecting expanded health care provided to reservists and retirees. As the total and per capita cost to DOD for military pay and benefits grows, we need to reexamine whether DOD has the right pay and compensation strategies to sustain the total force in the future in a cost-effective manner. The foregoing challenges suggest certain key questions be considered by policy makers. How should the historical allocation of resources across services and programs be changed to reflect the results of a forward-looking comprehensive threat/risk assessment as part of DOD’s capabilities-based approach to determining defense needs?
What economies of scale and improvements in delivery of support services would result from combining, realigning, or otherwise changing selected support functions (e.g., combat support, training, logistics, procurement, infrastructure, health care delivery)? How might DOD’s recruitment, retention, and compensation strategies, including benefit programs, be reexamined and revised to ensure that DOD maintains a total military and civilian workforce with the mix of skills needed to execute the national security strategy while using resources in a more targeted, evidence-based, and cost-effective manner?

The challenges facing retirement and disability programs are long-term, severe, and structural in nature. For example, Social Security faces a large and growing structural financing challenge. Social Security faces this long-term financing shortfall largely because of several concurrent demographic trends—namely, that people are living longer, spending more time in retirement, and having fewer children. Social Security could be brought into balance over the next 75 years through changes in the program and related benefits and/or taxes; however, ensuring the sustainability of the system beyond 75 years will require even larger changes. Beyond Social Security, our nation’s retirement and disability programs are further challenged by serious weaknesses that have become manifest in our nation’s private pension system. Despite sustained large federal tax subsidies, total pension coverage continues to hover at about half of the total private sector labor force. The number of traditional defined-benefit plans has been contracting for decades, and recently, plan terminations by bankrupt sponsors of large defined-benefit plans have threatened the solvency of the Pension Benefit Guaranty Corporation (PBGC), the federal agency that insures certain benefits under such plans.
While growth in the number and coverage of defined contribution plans—where each worker has an individual account that receives contributions—has helped mitigate the decline of more traditional defined-benefit plans, these plans have also experienced problems. Policy makers will need to consider how best to encourage wider pension coverage and adequate and secure pension benefits, and how such pensions might best interact with any changes to the Social Security program. Meanwhile, federal disability programs, such as those at the Social Security Administration (SSA) and the Department of Veterans Affairs (VA), are challenged by significant growth over the past decade that is expected to surge even more as increasing numbers of baby boomers reach their disability-prone years. Federal disability programs remain mired in concepts from the past and are poorly positioned to provide meaningful and timely support for workers with disabilities. Advances in medicine and science have redefined what constitutes an impairment to work, and the nature of work itself has shifted toward service and knowledge-based employment—these developments need to be reflected in agencies’ eligibility and review processes. The mounting challenges faced by our national retirement and disability programs raise important questions. For example: How should Social Security be reformed to provide for long-term program solvency and sustainability while also ensuring adequate benefits and protection from disability (e.g., increase the retirement age, restructure benefits, increase taxes, and/or create individual accounts)? What changes should be made to enhance the retirement income security of workers while protecting the fiscal integrity of the PBGC insurance program? How can federal disability programs, and their eligibility criteria, be brought into line with the current state of science, medicine, technology, and labor market conditions? 
Overall health care spending doubled between 1992 and 2002 and is projected to nearly double again in the following decade to about $3.1 trillion. Despite consuming a significant share of the economy—over 15 percent of GDP—U.S. health outcomes lag behind other major industrialized nations. For example, the U.S. performs below par on infant mortality, life expectancy, and premature and preventable deaths. At the same time, access to basic health care coverage remains an elusive goal for nearly 45 million Americans without insurance. Americans with good health insurance have access to advanced technology procedures and world-class health facilities, but clinical studies suggest that not all of this care is desirable or needed. Rising health costs are compelling both public and private payers to examine whether these procedures can continue to be financed without better accounting for their clinical effectiveness. Additional health care spending over time will draw resources away from other economic sectors and could have adverse economic implications for all levels of government, individuals, and other private purchasers of health care. Defining differences between needs, wants, affordability, and sustainability is fundamental to rethinking the design of our current health care system.

In the past several decades, the responsibility for financing health care has shifted away from the individual patient. In 1962, nearly half—46 percent—of health care spending was financed by individuals. The rest was financed by a combination of private health insurance and public programs. By 2002, the amount of health care spending financed by individuals’ out-of-pocket spending—at the point of service—was estimated to have dropped to 14 percent. Tax preferences for insured individuals and their employers have also shifted some of the financial burden for private health care to all taxpayers.
Tax preferences can work at cross-purposes to the goal of moderating health care spending. For example, the value of employees’ health insurance premiums is permitted to be excluded from the calculation of their taxable earnings and is also excluded from the employers’ calculation of payroll taxes for both themselves and their employees. These tax exclusions represent a significant source of foregone federal revenue. Public and private payers are experimenting with payment reforms designed to foster delivery of care that is clinically proven to be effective. Ideally, identifying and rewarding efficient providers and encouraging inefficient providers to emulate best practices will result in better value for the dollars spent on care. However, the challenge of implementing performance-based payment reforms, among other strategies, on a systemwide basis will depend on system components that are not currently in place nationwide—such as compatible information systems to facilitate the production and dissemination of medical outcome data, safeguards to ensure the privacy of electronic medical records, improved transparency through increased measurement and reporting efforts, and incentives to encourage adoption of evidence-based practices. These same system components would be required to develop medical practice standards, which could serve as the underpinning for effective medical malpractice reform. In meeting these pressing health care system challenges, the following questions might be considered. How can technology be leveraged to reduce costs and enhance quality while protecting patient privacy? How can health care tax incentives be designed to encourage employers and employees to better control health care costs? For example, should tax preferences for health care be designed to cap the health insurance premium amount that can be excluded from an individual’s taxable income?
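The revenue effect of the premium exclusion described above can be roughed out with a back-of-the-envelope calculation. Everything in this sketch is hypothetical: the premium amount, the marginal income tax rate, and the payroll tax rate are illustrative stand-ins, not a statement of current law:

```python
# Hypothetical illustration of forgone revenue from excluding an
# employer-paid health insurance premium from taxable earnings and from
# both shares of the payroll tax base. All rates and amounts are assumed.

def forgone_revenue(premium, income_tax_rate, payroll_rate_each_share):
    income_tax = premium * income_tax_rate
    payroll_tax = premium * payroll_rate_each_share * 2  # employer + employee
    return income_tax + payroll_tax

# Assumed: $8,000 premium, 25% marginal income tax, 7.65% payroll tax each share.
print(round(forgone_revenue(8_000, 0.25, 0.0765)))  # 3224 (dollars per worker)
```

Designing the incentive as a capped exclusion, as the question above suggests, would simply limit the premium amount entering this calculation.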
How can “industry standards” for acceptable care be established, and what payment reforms can be designed to bring about reductions in unwarranted medical practice variation? What can or should the federal government do to promote uniform standards of practice for selected procedures and illnesses?

As I discussed earlier, the imbalance between federal revenues and expenditures, if allowed to persist long term, will affect economic growth and require greater scrutiny of both tax revenues and expenditures. The level and types of taxes have major impacts on the financing of government, as well as on the economy as a whole and on individual taxpayers, for both today and tomorrow. The success of our tax system hinges greatly on the public’s perception of its fairness and understandability. Compliance is influenced not only by the effectiveness of IRS’s enforcement efforts, but also by Americans’ attitudes about the tax system and the government. Disturbingly, recent polls indicate that about 1 in 5 respondents say it is acceptable to cheat on their taxes. Furthermore, the complexity of and frequent revisions to the tax system make it more difficult and costly for taxpayers who want to comply to do so, and for IRS to explain and enforce tax laws. Many argue that complexity creates opportunities for tax evasion—through vehicles such as tax shelters—which, in turn, motivate further changes and complexity in tax laws and regulations. The lack of transparency also fuels disrespect for the tax system and the government. Thus, a crucial challenge for reexamination will be to determine how we can best strengthen enforcement of existing laws to give taxpayers confidence that their friends, neighbors, and business competitors are paying their fair share. The growing complexity of the tax system stems in part from the extensive use of tax incentives to promote social and economic objectives.
The tax system includes hundreds of billions of dollars in such incentives—the same magnitude as total discretionary spending—yet relatively little is known about the effectiveness of tax incentives in achieving the objectives intended by the Congress. Furthermore, as you know, tax incentives are off the radar screen for the most part and do not compete in the budget process. They are effectively “fully funded” before any discretionary spending is considered. Incentives for savings are a particular concern: Private sector savings are near historical lows, and government savings, due to federal budget deficits, are negative. In addition, these incentives are complex, and although the issue is not completely settled, research has suggested that the incentives often do not stimulate much, if any, net new saving by individuals. As far back as 1994, we reported that tax incentives deserved more scrutiny. The debate about the future tax system is partly about whether the goals for the nation’s tax system can be best achieved using the current structure, which is heavily dependent on income taxes, or a fundamentally reformed structure, which might include more dependence on consumption taxes, a flatter rate schedule, and/or fewer tax preferences. Increasing globalization, which makes it easier to move assets, income, and jobs across international borders, is another motivator for the debate. As policy makers grapple with such issues, they will have to balance multiple objectives, such as economic growth, equity, simplicity, transparency, and administrability, while raising sufficient revenue to finance government spending priorities.
The appropriate balance among these objectives may also be affected by (1) how, if at all, to take into account that, including both the employer and employee share, an estimated two-thirds of taxpayers would pay more in payroll taxes—which are levied to fund Social Security and Medicare benefits—than they would pay in income taxes in 2004 and (2) whether and how to tax wealth. Today’s pressing tax challenges raise important questions. For example:

Given our current tax system, what tax rate structure is more likely to raise sufficient revenue to fund government and satisfy the public’s perception of fairness?

Can we increase compliance with tax laws and reduce the need for IRS enforcement through greater use of withholding and information reporting? Could increased disclosure and penalties reduce the use of abusive tax shelters?

Which tax incentives need to be reconsidered because they fail to achieve the objectives intended by the Congress, their costs outweigh their benefits, they duplicate other programs, or other more cost-effective means exist for achieving their objectives?

Should the basis of the existing system be changed from an income to a consumption base? Would such a change help respond to challenges posed by demographic, economic, and technological changes? How would such a change affect savings and work incentives? How would reforms address such issues as the impact on state and local tax systems and the distribution of burden across the nation’s taxpayers?

Congress faces a challenge many would find daunting: the need to bring government and its programs in line with 21st century realities. 
This challenge has many related pieces: narrowing the long-term fiscal gap; adapting Social Security to meet the new demographic reality; tackling the challenge of health care access, cost, and quality; deciding on the appropriate role and size of the federal government—and how to finance that government—and bringing the panoply of federal activities into line with today’s world. We believe that we at GAO have an obligation to assist and support the Congress in this effort. The reexamination questions discussed today and the forthcoming report of which they are a part are offered in that spirit: they are drawn primarily from the work GAO has done for the Congress over the years. We have attempted to structure questions that we hope you will find useful as you examine and act on problems that may not constitute an urgent crisis but pose important longer term threats to the nation’s fiscal, economic, security, and societal future. Although it is not easy, the periodic reexamination of existing portfolios of federal programs can weed out ineffective or outdated programs while also strengthening and updating those programs that are retained. Such a process not only could address fiscal imbalances, but also improve the responsiveness, effectiveness, and credibility of government in addressing 21st century needs and challenges. Given the unsustainability of our current fiscal outlook, the real question is not whether we will deal with the fiscal imbalance, but how and when. Given the size of the long-term fiscal imbalances, all major spending and revenue programs in the budget should be subject to periodic reviews and reexamination. 
While it is important to consider the role and size of government, how we finance government, and the major programs driving the long-term spending path—Medicare, Medicaid, and Social Security—our recent fiscal history suggests that exempting major areas from reexamination and review can undermine the credibility and political support for the entire process. We recognize that this will not be a simple or easy process—there are no “quick fixes.” Such a process reverses the focus of traditional incremental reviews, where disproportionate scrutiny is given to proposals for new programs or activities, but not to those that are already in the base. Taking a hard look at existing programs and carefully reconsidering their goals and their financing is a challenging task. Reforming programs and activities leads to winners and losers, notwithstanding demonstrated shortfalls in performance and design. Given prior experience and political tendencies, there is little real “low-hanging fruit” in the federal budget. Across-the-board approaches to fiscal challenges may be easier in the short run, but they do not address the longer term fiscal cost drivers and cut both effective and ineffective programs alike. Given the severity of the nation’s fiscal challenges and the wide range of federal programs, the hard choices necessary to get us back on track in a sustainable manner may take a generation to address. Beginning the reexamination and review process now would enable decision makers to be more strategic and selective in choosing areas for review over a period of years. Reexamining selected parts of the budget base, over time rather than all at once, will lengthen the process, but it may also make the process more feasible and less burdensome for decision makers. And by phasing in changes to programs or policies that might otherwise have prohibitively high costs of transition, the impact can be spread out over longer time periods. 
Although reexamination is never easy, the effort is not without precedent. The federal government, in fact, has reexamined some of its programs and priorities episodically in the past. Programmatic reexaminations have included, for example, the 1983 Social Security reform, the 1986 tax reform, and the 1996 welfare reform. They have also included reforms such as the creation of the Department of Homeland Security and, most recently, the ongoing reorganization of the U.S. intelligence community. From a broader fiscal standpoint, the 1990s featured significant deficit-reduction measures adopted by the Congress and supported by the President that made important changes to discretionary spending, entitlement program growth, and revenues that helped eliminate deficits and bring about budgetary surpluses. States and other nations also have engaged in reexamination exercises. In our system, a successful reexamination process will in all likelihood rely on multiple approaches over a period of years. The reauthorization, appropriations, oversight, and budget processes have all been used to review existing programs and policies. Adding other specific approaches and processes—such as temporary commissions to develop policy alternatives—has been proposed. Fortunately the Government Performance and Results Act (GPRA) of 1993 and other result-oriented management laws enacted over the last 12 years have built a base of performance information that can assist the Congress and the President in this effort. In the last few years, OMB has been working to rate the effectiveness of programs under the program assessment rating tool (PART). There are also many nongovernmental sources of program evaluation and analysis. And, finally, Congress has its own analytic support—your staff and that of the Congressional support agencies. As always, GAO stands ready to assist the Congress as it develops its agenda and to help answer any of the questions the Congress wishes to pursue. Mr. 
Chairman, Senator Conrad, and Members of the Committee, this concludes my testimony. I would be happy to answer any questions you may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
While there is substantial variation among grant types, competitively awarded federal grants generally follow a life cycle comprising various stages—pre-award (announcement and application), award, implementation, and closeout—as seen in figure 1. Once a grant program is established through legislation, which may specify particular objectives, eligibility, and other requirements, a grant-making agency may impose additional requirements on recipients. For competitive grant programs, the public is notified of the grant opportunity through an announcement, and potential recipients must submit applications for agency review. In the award stage, the agency identifies successful applicants or legislatively defined grant recipients and awards funding. The implementation stage includes payment processing, agency monitoring, and recipient reporting, which may include financial and performance information. The closeout phase includes preparation of final reports, financial reconciliation, and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout. Federal agencies do not have inherent authority to enter into grant agreements without affirmative legislative authorization. In authorizing grant programs, federal laws identify the types of activities that can be funded and the purposes to be accomplished through the funding. Legislation establishing a grant program frequently will define the program objectives and leave the administering agency to fill in the details by regulation. Adding to the complexity of grants management, grant programs are typically subject to a wide range of accountability requirements (under their authorizing legislation or appropriation) and implementing regulations, which are intended to ensure that funding is spent for its intended purpose. Congress may also impose increased reporting and oversight requirements on grant-making agencies and recipients. 
In addition, grant programs are subject to crosscutting requirements applicable to most assistance programs. OMB is responsible for developing government-wide policies to ensure that grants are managed properly and that grant funds are spent in accordance with applicable laws and regulations. For decades, OMB has published guidance in various circulars to aid grant-making agencies with such subjects as audit and record keeping and the allowability of costs. In the past 14 years, since the passage of P.L. 106-107, there has been a series of legislative- and executive-sponsored initiatives aimed at simplifying aspects of the grants management life cycle; minimizing the administrative burden for grantees, particularly those that obtain grants from multiple federal agencies; and ensuring accountability by improving the transparency of the federal grants life cycle. See figure 2 for more information. Figure 2 groups these efforts into three categories: governance initiatives, intended to make changes that affect policy and oversight; process initiatives, intended to simplify aspects of the grants life cycle; and transparency initiatives, intended to increase the transparency of information detailing federal awards and expenditures. (To print a text version of this graphic, go to appendix II.) Since the passage of P.L. 106-107, OMB and other entities involved with federal grants management have overseen several ongoing initiatives intended to address the challenges grantees encounter throughout the grants life cycle. These initiatives include consolidating and revising grants management circulars, simplifying the pre-award phase, promoting shared IT solutions for grants management, and improving the timeliness of grant closeout and reducing undisbursed balances. However, management and coordination challenges could hinder the progress of some of these initiatives. As part of the effort to implement P.L. 
106-107, OMB began an effort in 2003 to (1) consolidate its government-wide grants guidance, which was located in seven separate OMB circulars and policy documents, into a single title in the Code of Federal Regulations, and (2) establish a centralized location for grant-making agencies to publish their government-wide grant regulations. The purpose of this effort was to make it easier for grantees to find and use the information in the OMB circulars and agencies’ grant regulations by creating a central point for all grantees to locate all government-wide grants requirements. As of March 2013, OMB has completed revisions on guidance related to two areas: suspension and debarment, and drug-free workplace. All grant-making agencies have relocated their suspension and debarment regulations to one title of the Code of Federal Regulations and some have relocated the drug-free workplace regulations. OMB has also been consulting with stakeholders to evaluate potential reforms in federal grant policies contained in the multiple grant circulars. As a first step, in February 2012, OMB published an advance notice of proposed guidance detailing a series of reform ideas that would standardize information collection across agencies, adopt a risk-based model for single audits (annual audits required of nonfederal entities that expend more than $500,000 in federal awards annually), and provide new administrative approaches for determining and monitoring the allocation of federal funds. After receiving more than 350 public comments on the advance notice of proposed guidance, OMB published its circular reform proposal in February 2013 and plans to implement the reforms by December 2013. 
OMB officials believe that once implemented, these reforms have the potential to make grant programs more efficient and effective by eliminating unnecessary and duplicative requirements and strengthening the oversight of grant dollars by focusing on areas such as eligibility, monitoring of subrecipients, and adequate reporting. Launched in 2003, Grants.gov is a website the public can use to search and apply for federal grant opportunities. Officials we spoke to from associations representing state and local governments, universities, and nonprofits praised Grants.gov. Many noted that it simplified the pre-award stage by making it easier for applicants to search for and identify federal grant funding opportunities. Specifically, one organization said the site does an excellent job categorizing grants by topic, making it easier for resource-constrained applicants that may not have a professional grant writer to search for relevant grants. However, grantee association officials also raised concerns about aspects of the site. For example, although there is an OMB policy directive establishing a standard format for federal funding opportunity announcement requirements, grantee officials said that in practice the lack of a standardized grants announcement can increase their burden because extra time is required to determine eligibility and other requirements. We have also reported that persistent management challenges, such as a lack of performance measures and communication with stakeholders and unclear roles and responsibilities among the governance entities, have adversely affected Grants.gov operations. Since we first reported on these issues in July 2009, HHS has made some progress to address these challenges and increase the effectiveness and long-term viability of Grants.gov. Specifically, HHS is taking steps to implement several of our prior recommendations. 
For example, in 2012, the Program Management Office (PMO) adopted a performance monitoring tool that currently monitors 22 technical measures covering availability, usage, and performance. The PMO also hired a communications director whose responsibilities include outreach to stakeholders. The PMO reported that starting in fiscal year 2013, HHS plans to more actively solicit input from grants applicants on ways to enhance the site. While it is too soon to determine the effectiveness of these reforms, tracking site performance and developing an effective two-way communication strategy to engage with stakeholders are practices which, if thoughtfully and deliberately implemented, may address the challenges we identified. Promoting shared information technology (IT) solutions for managing grants—an original goal of P.L. 106-107 and the governance bodies charged with implementing the legislation—could provide an additional way to simplify post-award grants management activities by consolidating the administration and management of grants across agencies and potentially reducing the costs of multiple agencies developing and maintaining grants management systems. However, it is unclear whether promoting shared IT systems for grants management is still a priority, and if so, which agency is in charge of this effort. In 2004, OMB established the GMLOB to develop government-wide solutions intended to support end-to-end grants management activities, including shared grants management systems (which could include modules for intake of applications, peer review, award, payment, and performance monitoring and final closeout of the grant award). In 2005, OMB chose three agencies—the National Science Foundation (NSF), the Administration for Children and Families within the Department of Health and Human Services (HHS), and the Department of Education—to develop grants management systems that they could provide for other agencies. 
Currently, NSF operates Research.gov, which has one other external agency customer that uses individual modules of the Research.gov system; the Administration for Children and Families operates GrantSolutions.gov, which services 17 government customers, 8 of which are HHS components; and Education operates G5, which has 13 customers all of which are Education components (see appendix III for a list of NSF, HHS, and Education customers). Since 2012, there has been uncertainty regarding the status of and future plans for the operational elements of what was the GMLOB. OMB folded GMLOB into the Financial Management Line of Business (FMLOB)—an initiative focused on financial systems improvements— in 2012, and initially announced the Treasury Department would be the managing partner. Later, OMB informed us the General Services Administration (GSA) would be the managing partner, but GSA officials informed us they were only the managing partner of the FMLOB from June to September 30, 2012. GSA officials also told us that according to OMB officials, GSA would not be responsible for working with NSF, HHS, or Education, or promoting shared service agreements for grants management systems. As of March 2013, OMB had not publicly announced who the managing partner of FMLOB would be for fiscal year 2013. After receiving a draft copy of this report for its review and comment, OMB issued a “Controller Alert” on April 29, 2013, announcing that, for fiscal year 2013, the Department of the Treasury’s Office of Financial Innovation and Transformation (FIT) will serve as Managing Partner and the Program Management Office for the FMLOB. OMB also highlighted the Controller Alert in its comment letter to us, also dated April 29, 2013 (see appendix IV for OMB’s letter). In May 2012, OMB issued guidance directing agencies to find ways to spend federal dollars on IT more efficiently to compensate for a 10 percent reduction in overall IT spending. 
The guidance also directed agencies to propose how they would reinvest the savings from proposed cuts to produce a favorable return on investments. One of the strategies OMB had previously highlighted to reduce duplication, improve collaboration, and eliminate waste across agency boundaries was the Federal IT Shared Services Strategy, also referred to as “Shared First,” an effort to share common IT services across agencies. The guidance did not specifically mention grants management systems, and it is unclear whether OMB intends to encourage other agencies to partner with NSF, HHS, and Education to continue sharing services. In its April 29, 2013, Controller Alert, OMB stated that in accordance with OMB’s guidance on shared services, the Treasury’s FIT will “lead efforts to transform federal financial management, reduce costs, increase transparency, and improve delivery of agencies’ missions by operating at scale, relying on common standards, shared services, and using state-of-the-art technology.” However, OMB’s Controller Alert did not address whether the roles of NSF, HHS, and Education would change as a result of FIT’s leadership in this area. As part of its efforts to improve grants management government-wide, OMB has instructed agencies to improve the timeliness of their grant closeout procedures. Once the grant’s period of availability to the grantee has expired, the grant can be closed out and the funds deobligated by the awarding agency. Timely closeout helps to ensure that grantees have met all financial and reporting requirements. It also allows federal agencies to identify and redirect unused funds to other projects and priorities as authorized or to return unspent balances to the Department of the Treasury. In August 2008, we reported that during calendar year 2006 about $1 billion in undisbursed funding remained in expired grant accounts in the largest civilian payment system for grants, the Payment Management System. 
In a follow-up report issued in April 2012, we found that at the end of fiscal year 2011 there was more than $794 million in funding remaining in expired grant accounts. To improve the timeliness of grant closeout, we recommended that OMB instruct all executive departments and independent agencies to annually track the amount of undisbursed grant funding remaining in expired grant accounts and report on the status and resolution of the undisbursed funding in their annual performance plan and annual performance and accountability report. In response to our recommendations, on July 24, 2012, the Controller of OMB issued a “Controller Alert” to all federal chief financial officers instructing agencies to take appropriate action to close out grants in a timely manner. The alert provided strategies agencies should consider to achieve this goal, including establishing annual or semiannual performance targets for timely grant closeout, monitoring closeout activity, and tracking progress in reducing closeout backlog. In a September 2012 report, we identified certain key features for effective interagency collaborative efforts, including the importance of identifying goals for short- and long-term outcomes. Identifying goals can help decision makers reach a shared understanding of what problems genuinely need to be fixed, how to balance differing objectives, and what steps need to be taken to create not just short-term advantages but long-term gains. In February 2013, COFAR posted five priority goals for fiscal years 2013 to 2015 to the U.S. Chief Financial Officers Council website:

1. Implement revised guidance to target risk and reduce administrative burden.
2. Standardize federal agencies’ business processes to streamline data collections.
3. Provide public validated financial data that aligns spending information with core financial accounting data in coordination with the work of the GATB.
4. Ensure that federal agencies’ grants professionals are highly qualified.
5. Reduce the number of unclean audit opinions for grant recipients.

For each priority, COFAR identified proposed deliverables and milestone dates for those deliverables. As of May 2013, COFAR had not released to the public an implementation plan that includes other key elements such as performance targets, mechanisms to monitor, evaluate, and report on progress made towards stated goals, and goal leaders who can be held accountable for those goals. Establishing implementation goals and tracking progress toward those goals helps to pinpoint performance shortfalls and suggest midcourse corrections, including any needed adjustments to future goals and milestones. Reporting on these activities can help key decision makers within the agencies, as well as stakeholders, obtain feedback for improving both policy and operational effectiveness. In response to the draft report we provided for them to review, OMB officials stated in their comment letter dated April 29, 2013, that they used a more detailed internal project plan to monitor timelines and roles and responsibilities. They acknowledged that more needs to be done by pointing out that as the work of COFAR matured, the council would be better able to articulate metrics that allowed for a more thorough evaluation of whether the policy changes were having their intended impacts. They added that the publicly stated deliverables were intended to leave room for further evolution of the right approach for implementation. While we have not been able to assess or validate OMB’s newly provided information on COFAR’s approach, we believe a more detailed, publicly available implementation plan that will allow Congress and the public to better monitor the progress of the reforms is needed. 
We previously reported that when interagency councils clarify who will do what, identify how to organize their joint and individual efforts, and articulate steps for decision making, they enhance their ability to work together and achieve results. In interviews with federal grant management officials we were told that OMB and the council do not always clearly articulate the roles and responsibilities for various streamlining initiatives, plans for future efforts, and means for engaging small grant-making agency stakeholders and utilizing agency resources. Agency officials involved with current grants management reforms told us that the roles and responsibilities for various streamlining initiatives are not always clear. For example, OMB designated Treasury as the managing partner of the FMLOB initiative, then designated GSA as the managing partner, but only for four months. As of March 2013, OMB had not issued a subsequent announcement as to which agency would take over the grants management related functions of FMLOB after GSA. In the meantime, the former GMLOB consortia leads are unsure whether promoting shared grants management systems is still a priority. As previously mentioned, OMB’s Controller Alert of April 29, 2013, announced that Treasury’s FIT office will serve as Managing Partner and the Program Management Office for the FMLOB for fiscal year 2013. However, the Controller Alert did not address whether the roles of NSF, HHS, and Education would change as a result of FIT’s leadership in this area. In addition to OMB, eight agencies are permanent members of COFAR. COFAR also has a rotating member, currently NSF, which serves a two-year term. Agency officials involved with COFAR told us that the council is still determining the role of the rotating agency and how COFAR will reach out to smaller grant-making agencies not on the council. 
According to OMB officials, they are still working out how to provide other agencies with a communication channel and the opportunity to review and comment on proposed changes. In its April 29, 2013, comment letter, OMB acknowledged that the expectation was that the rotating member would be able to represent the views of smaller agencies and that there may be federal officials or agencies that wish to be more involved or are not fully aware of all of the COFAR’s work. OMB officials also stated that COFAR staff will help the rotating agency gather input and feedback from the broader collection of smaller agencies. OMB officials said incorporating the views of all federal grant-making agencies was essential to the work of the COFAR and that their strategy would continue to evolve over time, as it will for engaging with nonfederal stakeholders. Agency officials also told us that they are still trying to determine how to bring together financial, policy, and IT staff, and incorporate their areas of expertise into discussions on proposed policy and program changes. One agency official noted this had been a challenge with the previous grants management structure. She said that the GPC focused on policy and the GEB focused on systems and technology solutions and, even though there was some level of overlap among the people staffing the two boards, a stronger connection was needed to ensure that streamlining efforts included technology and policy expertise. In their comment letter, OMB officials stated they made repeated efforts to solicit the views of all federal agencies through town hall meetings, formal circulation of draft policies for comment prior to publication, and conference calls to share information on key issues. We have noted that communication is not just “pushing the message out,” but should facilitate a two-way, honest exchange and allow for feedback from relevant stakeholders. 
We previously reported that grantees felt that the lack of opportunities to provide timely feedback resulted in poor implementation and prioritization of streamlining initiatives and limited grantees’ use and understanding of new functionality of electronic systems. For example, grantees experienced problems stemming from policies and technologies that were inconsistent with their business practices and caused inefficiencies in their administration of grants. Members of the grantee community told us they continue to have concerns because they do not see a role for themselves as OMB and COFAR develop priorities for reforming federal grants management. For example, officials from the eight associations representing state and local governments, universities, and nonprofit recipients told us that outreach to grantees on proposed reforms continues to be inconsistent or could be improved. Ten organizations representing state and local officials, including some of the same organizations we interviewed, submitted a letter to OMB after the creation of COFAR was announced, expressing their disappointment that there would be no state or local representation on the council. In the letter, the state and local officials stated that formal engagement of all stakeholder parties is necessary for success and that their exclusion from the council undermined the important work of the council before it even commenced. OMB officials stated they are seeking different forums to engage with members of the grantee community. Several association officials said they appreciated that OMB reached out to them for comment before proposing changes to OMB circulars. OMB and COFAR also hosted a webinar in February 2013 to coincide with the circular reform proposal, and invited representatives from grantee associations to discuss their concerns and ask questions. 
In addition, following their review of the draft report, OMB officials provided us with a list of invitations for speaking engagements they have accepted since February 2013 as a snapshot of the types of engagements they participate in to communicate with interested stakeholder groups. While improved outreach to the broader grantee community is an ongoing challenge, certain groups of grantees have established communication channels with the federal government. These approaches could be a useful model for COFAR to build upon with different grantee communities. For example, we have previously reported that the research community established avenues of communication with relevant federal agencies through the Federal Demonstration Partnership (FDP), a cooperative initiative of 10 agencies and over 90 research institutions. Agency officials and members of the research community continue to describe this partnership as an effective model for promoting two-way communication. Officials from the HHS Grants.gov PMO told us they solicit information and feedback related to the functionality of Grants.gov through quarterly meetings and open forum-type sessions with FDP members. According to these officials, consistent communication with the FDP has enabled them to survey the community and determine appropriate improvements to the system to avoid undertaking inefficient or counterproductive revisions to the Grants.gov system. Likewise, an FDP official told us face-to-face meetings with grantor agency officials allow them to provide input on proposed changes to grants management policies and practices. In a second example, several state and local grantee association officials referred to the communication channels that were set up while implementing the Recovery Act as an example of effective two-way communication they would like to see replicated.
In the same letter submitted to OMB after the creation of COFAR was announced, 10 organizations representing state and local officials referenced the constant and consistent communication OMB and the Recovery Board engaged in with members of the grantee community as a requirement for success. We have also previously reported that OMB and Recovery Board officials held weekly conference calls with state and local representatives to hear comments, concerns, and suggestions from them and share decisions. As a result of these calls, federal officials changed their plans and related guidance. This type of interaction was essential in clarifying federal intent, addressing questions, and establishing working relationships for the implementation efforts. However, several officials said these outreach efforts have dwindled, and they again feel OMB is not involving them in COFAR priority-setting discussions. Although the circumstances surrounding the Recovery Act were unusual in that there was a high level of funding available that had to be spent quickly, there are opportunities for COFAR to learn what communication strategies worked for agency officials and grantees, and apply those strategies. Another possible mechanism for improving communication with states and localities might be to use the Partnership Fund for Program Integrity Innovation (Partnership Fund) as a venue for federal policymakers to communicate and engage with the grantee community on proposed grants management reforms. Established by the 2010 Consolidated Appropriations Act, and administered by OMB, the Partnership Fund allows federal, state, local, and tribal agencies to pilot innovative ideas for improving assistance programs in a controlled environment. 
We previously reported that as part of implementing the Partnership Fund, OMB established a Federal Steering Committee, consisting of senior policy officials from federal agencies that administer benefits programs and formed the "Collaborative Forum." The Collaborative Forum is made up of state representatives and stakeholder experts, including federal agencies, nongovernmental organizations, and others, who collaborate to generate, develop, and consult on potential pilot projects. The forum's website, http://collaborativeforumonline.com, is used to hold discussions about potential projects and to share lessons and best practices among members. In a time of fiscal constraint, continuing to support the current scope and breadth of federal grants to state and local governments will be a challenge. Given this fiscal reality, it becomes more important to design and implement grants management policies that strike an appropriate balance between ensuring accountability for the proper use of federal funds without increasing the complexity and cost of grants administration for agencies and grantees. Duplicative, unnecessarily burdensome, and conflicting grants management requirements result in resources being directed to nonprogrammatic activities, which could prevent the cost-effective delivery of services at the local level. Streamlining and simplifying grants management processes is critical to ensuring that federal funds are reaching the programs and services Congress intended. In October 2011, OMB created COFAR and tasked it with overseeing the development of federal grants management policy. Although COFAR recently identified some priorities, it has not yet released to the public an implementation plan that includes performance targets, mechanisms to monitor, evaluate, and report on progress made towards stated goals, and goal leaders who can be held accountable for those goals.
Although OMB officials provided us with some additional and updated information in their comment letter that we were unable to assess or validate, they agreed with our recommendations that OMB and COFAR need to develop an implementation schedule and mechanisms to monitor, evaluate, and report on results, clarify roles and responsibilities for the various streamlining initiatives and engagement with federal stakeholders, and develop an effective two-way communication strategy that includes the grant recipient community. OMB officials acknowledged that more needs to be done to clarify roles and responsibilities and plans for moving forward with various streamlining initiatives. Moreover, stakeholders continue to express frustration about limited opportunities to provide feedback on proposed reforms. If grantees remain isolated from COFAR's development of new grants management systems and policies, those systems and policies could be ineffective or require more resources to use. We recommend the Director of OMB, in collaboration with the members of COFAR, take the following three actions:
1. Develop and make publicly available an implementation schedule that includes performance targets, goal leaders who can be held accountable for each goal, and mechanisms to monitor, evaluate, and report on results.
2. Clarify the roles and responsibilities for various streamlining initiatives and steps for decision making, in particular how COFAR will engage with relevant grant-making agency stakeholders and utilize agency resources.
3. Improve efforts to develop an effective two-way communication strategy that includes the grant recipient community, smaller grant-making agencies that are not members of COFAR, and other entities involved with grants management policy.
We provided a draft of this report to OMB, Education, GSA, HHS, and NSF for comment. NSF and HHS provided technical comments, which we incorporated as appropriate.
In its written comments, OMB generally concurred with our findings and recommendations but also said there had been significant progress on the grants management streamlining process in recent months, including using a more detailed project plan internally to monitor progress made towards the priorities established for COFAR; making efforts to solicit the views of all federal agencies including town hall meetings, formal circulation of draft policies for comment prior to publication, and conference calls to share information on key issues; and using meetings, webinars, and teleconferences to inform a diverse cross section of stakeholder groups about the work that the COFAR is doing, and to get their feedback on upcoming policy changes. Because OMB only provided us with additional and updated information at the end of its comment period, we could neither verify nor validate it. However, we have incorporated OMB’s comments into the body of the report, as appropriate, in order to make our review as up-to-date as possible. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Education, and Health and Human Services; Administrator of GSA; Director of the National Science Foundation; the Director of the Office of Management and Budget and to appropriate congressional committees. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions or wish to discuss the material in this report further, please contact me at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix V. We were asked to examine federal grants management reform efforts. 
To accomplish this, we reviewed (1) what the Office of Management and Budget (OMB) and other federal grants governance bodies have done since the passage of P.L. 106-107 in 1999 to reform grants management processes and reduce unnecessary burdens on applicants, grantees, and federal agencies; and (2) what actions, if any, have been taken to address what we have found to be persistent management challenges, such as the lack of a comprehensive plan for implementing reforms, confusion over roles and responsibilities among grants governance bodies, and inconsistent two-way communication with stakeholders. To address both objectives, we reviewed P.L. 106-107; OMB memorandums and circulars such as M-12-01, "Creation of the Council on Financial Assistance Reform," Circular A-102, "Grants and Cooperative Agreements With State and Local Governments," and Circular A-110, "Uniform Administrative Requirements for Grants and Other Agreements with Institutions of Higher Education, Hospitals and Other Non-Profit Organizations," which describe administrative requirements for different types of grantees; and OMB's February 2012 advanced notice of proposed guidance, which proposes several ideas for circular reforms. We also reviewed action plans created by former and current interagency councils with responsibility for overseeing grants management reforms, as well as our previous work and other literature on grants management initiatives and the related challenges that have undermined the government's ability to simplify grants management processes, reduce unnecessary burden on applicants, grantees, and federal agencies, and improve delivery of services to the public. We also reviewed our previous work on collaborative mechanisms and management consolidation efforts.
We interviewed officials from OMB who are involved with developing and implementing government-wide grants management policy; officials at the three agencies that served as consortia leads for the 2004 to 2012 Grants Management Line of Business (GMLOB) e-government initiative: the National Science Foundation (NSF), Health and Human Services (HHS), and the Department of Education; and officials at the agency that managed the Financial Management Line of Business (FMLOB) e-government initiative in 2012: the General Services Administration (GSA). To capture the perspective of grantor agencies, we spoke to officials from HHS, NSF, and the Department of Education in their grant-making and administration capacities. To understand grantee perspectives, we interviewed officials from grantee associations that represent a variety of grantee types including state and local governments, nonprofit organizations, and universities. To select the grantee associations that we interviewed, we relied on three data sources:
1. Our previous work on grant streamlining, which included 31 grantee associations separated into four categories: state government, local and regional government, nonprofits, and tribal;
2. A list of grant associations included on the Grants.gov website; and
3. Additional grantee associations that have been active in grants-related topics in the past.
We selected 16 grantee associations to contact. These associations represented a variety of grantee types from state and local government, nonprofit organizations, as well as associations representing grantees on crosscutting grants-related issues. In addition, the associations could offer a historical perspective on federal efforts to streamline grants management. Of the 16 associations we contacted, 8 associations said they were knowledgeable about grants management reforms and could answer our questions.
We interviewed officials at these 8 associations:
National Association of State Auditors, Comptrollers, and Treasurers
National Association of State Budget Officers
National Association of Regional Councils
National Association of Counties
National Grants Management Association
National Grants Partnership
Federal Demonstration Partnership
National Association of Chief Information Officers
Two additional associations, Federal Funds Information for States and National Council of Nonprofits, sent us comments on grants management reforms in writing. We conducted this performance audit from July 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To address grants management issues, the act required the Office of Management and Budget (OMB) to direct, coordinate, and assist federal agencies in establishing common grants management systems, and simplifying their application, administrative, and reporting procedures with the goal of improved efficiency and delivery of services to the public. The law sunsetted in 2007. The Chief Financial Officers (CFO) Council established the GPC to implement P.L. 106-107. Composed of grants policy experts from across the federal government, the GPC oversaw the efforts of cross-agency work groups focusing on different aspects of grants management, recommended policies and practices to OMB, and coordinated related interagency activities. OMB replaced the GPC in 2011 with the Council on Financial Assistance Reform (COFAR).
This board consisted of senior officials from federal grant-making agencies and provided strategic direction and oversight of Grants.gov, a grant identification and application portal. OMB coordinated grants management policy through the board and the GPC until October 2011, when OMB announced that COFAR would replace both of these federal grant bodies. In response to P.L. 106-107, OMB created Grants.gov, a central grant identification and application website for federal grant programs. The Grants.gov oversight and management structure includes HHS, the managing partner agency; the Grants.gov Program Management Office, which is housed within HHS and responsible for day-to-day management; and formerly the GEB, which provided leadership and resources. The GPC was also involved because of its role in streamlining pre-award policies and implementing P.L. 106-107. The GMLOB was established to support the development of a government-wide solution to support end-to-end grants management activities that promote citizen access, customer service, and agency financial and technical stewardship. In 2005, OMB selected the Department of Health and Human Services (HHS) and the National Science Foundation (NSF) to jointly lead the effort. Later, NSF took over the leadership role. In fiscal year 2012, it became part of the Financial Management Line of Business. Transparency: This act required OMB to establish a free, publicly accessible website containing data on federal awards and subawards. OMB began providing data on federal awards on USAspending.gov in December 2007 and phased in reporting on subawards in 2010. Transparency: Congress and the administration built provisions (such as quarterly use and outcome reporting) into the Recovery Act to increase transparency and accountability over spending. The Recovery Act called for a website (Recovery.gov) for the public to access reported data. A second website (FederalReporting.gov) was established so grant recipients could report their data.
The Recovery Act also established the Recovery Accountability and Transparency Board to coordinate and conduct oversight of funds distributed under the act in order to prevent fraud, waste, and abuse. Transparency: This board, established by an executive order, provides strategic direction for enhancing the transparency of federal spending and advances efforts to detect and remediate fraud, waste, and abuse in federal programs. It is charged to work closely with the existing Recovery Board to extend its successes and lessons learned to all federal spending. This council replaced the GPC and GEB in October 2011. OMB charged COFAR with identifying emerging issues, challenges, and opportunities in grants management and policy and providing recommendations to OMB on policies and actions to improve grants administration. COFAR is also expected to serve as a clearinghouse of information on innovations and best practices in grants management. COFAR is made up of the OMB Controller and the Chief Financial Officers from the eight largest grant-making agencies and one of the smaller federal grant-making agencies. The latter serves a rotating 2-year term. In February 2012, OMB published an advanced notice of proposed guidance detailing a series of reform ideas that would standardize information collection across agencies, adopt a risk-based model for single audits, and provide new administrative approaches for determining and monitoring the allocation of federal funds. After receiving more than 350 public comments on its advanced notice of proposed guidance, OMB published its circular reform proposal in February 2013, and plans to implement the reforms by December 2013. To improve the timeliness of grant closeout and reduce undisbursed balances, the Controller of OMB issued a "Controller Alert" to all federal chief financial officers instructing agencies to take appropriate action to close out grants in a timely manner.
It provided a number of strategies such as establishing annual performance targets for timely grant closeout. In addition to the contact named above, Thomas M. James, Assistant Director, and Elizabeth Hosler and Jessica Nierenberg, Analysts-in-Charge, supervised the development of this report. Travis P. Hill, Melanie Papasian, and Carol Patey made significant contributions to all aspects of this report. Elizabeth Wood assisted with the design and methodology, Amy Bowser provided legal counsel, Donna Miller developed the report's graphics, and Susan E. Murphy and Sandra L. Beattie verified the information in this report. Other important contributors included Beryl Davis, Kim McGatlin, Joy Booth, and James R. Sweetman, Jr.

GAO has previously identified several management challenges that have hindered grants management reform efforts. GAO was asked to review recent federal grants management reform efforts. GAO reviewed (1) what OMB and other federal grants governance bodies have done since the passage of P.L. 106-107 to reform grants management processes, and (2) what actions, if any, have been taken to address what GAO has found to be persistent management challenges. GAO reviewed relevant legislation, OMB circulars and guidance, action plans of interagency councils responsible for overseeing grants management reforms, and previous GAO work and other literature on grants management reforms. GAO also reviewed its previous work on collaborative mechanisms and management consolidation efforts. GAO also interviewed officials from OMB, grant-making agencies, and associations representing a variety of grantee types. In the past 14 years, since the passage of the Federal Financial Assistance Management Improvement Act of 1999 (P.L. 106-107), there has been a series of legislative- and executive-sponsored initiatives aimed at reforming aspects of the grants management life cycle.
Recently, a new grants reform governance body, the Council on Financial Assistance Reform (COFAR), replaced two former federal boards--the Grants Policy Committee (GPC) and Grants Executive Board (GEB). The Office of Management and Budget (OMB) created COFAR and charged it with identifying emerging issues, challenges, and opportunities in grants management and policy and providing recommendations to OMB on policies and actions to improve grants administration. In addition to this new governance structure, OMB and other entities involved with federal grants management are overseeing several ongoing reform initiatives intended to address the challenges grantees encounter throughout the grants life cycle. These initiatives include consolidating and revising grants management circulars, simplifying the pre-award phase, promoting shared information technology (IT) solutions such as the development of shared end-to-end grants management systems, and improving the timeliness of grant closeout and reducing undisbursed balances. Management and coordination challenges could hinder the progress of some of these initiatives. For example, although promoting shared IT solutions for grants management--an original goal of P.L. 106-107--remains a priority, there has been uncertainty regarding the status of this initiative and future plans for it. The lead agency for this initiative changed several times since 2012, and it has been unclear at times whether promoting shared IT systems for grants management would continue to be a priority, and if so, which agency was in charge. After receiving GAO's draft report for review, OMB issued a "Controller Alert" on April 29, 2013, announcing that the Department of the Treasury would lead efforts to transform federal financial management by, among other things, relying on common standards, shared services, and using state-of-the-art technology.
Although COFAR has recently identified several high-level priority goals for 2013 through 2015, it faces some of the same management challenges identified in previous GAO reports on grants management, such as the lack of a comprehensive plan for implementing reforms, confusion over roles and responsibilities among grants governance bodies, and inconsistent communication and outreach to the grantee community. COFAR has not yet released to the public an implementation plan that includes key elements such as performance targets and goal leaders for each goal, and mechanisms to monitor, evaluate, and report on progress made toward stated goals. Furthermore, agencies involved with current grants management reforms are not always clear on their roles and responsibilities for various streamlining initiatives, which may cause such initiatives to languish. Finally, GAO found that members of the grant recipient community continue to voice concern because they do not see a role for themselves as OMB and COFAR develop priorities for reforming federal grants management. In the comments it provided on April 29, 2013, OMB described actions it is taking to address these challenges, such as using a more detailed project plan internally and scheduling outreach events with federal partners and members of the grantee community. GAO recommends that the Director of OMB: (1) develop and make publicly available an implementation schedule that includes performance targets, goal leaders who can be held accountable for each goal, and mechanisms to monitor, evaluate, and report on results; (2) clarify the roles and responsibilities for various streamlining initiatives; and (3) develop an effective two-way communication strategy with relevant stakeholders. OMB generally concurred with GAO's recommendations and provided additional and updated information, which was incorporated into the report as appropriate.
The federal regulatory structure of the U.S. securities markets was established by the Securities Exchange Act of 1934 (the Exchange Act). Congress also created SEC as an independent agency to oversee the securities markets and their participants. Under the Exchange Act, the U.S. securities markets are subject to a combination of industry self- regulation (with SEC oversight) and direct SEC regulation. This regulatory scheme was intended to give SROs responsibility for administering their ordinary affairs, including most of the daily oversight of the securities markets and broker-dealers. The Exchange Act provides for different types of SROs, including national securities exchanges and national securities associations. Entities operating as national securities exchanges or associations are required to register as such with SEC. As of March 31, 2002, nine securities exchanges were registered with SEC as national securities exchanges. As of the same date, NASD was the only registered national securities association; NASD Regulation (NASDR) is its regulatory arm. Although it is the SRO, NASD delegates to NASDR, its wholly owned subsidiary, SRO responsibilities for surveilling trading on Nasdaq and the over-the-counter market and for enforcing compliance by its members (and persons associated with its members) with applicable laws and rules. Nasdaq also surveils trading on its market and refers potential violations to NASDR and SEC for investigation. While NASD is currently the parent company of Nasdaq, NASD is in the process of selling Nasdaq. Recognizing the inherent conflicts of interest that exist when SROs are both market operators and regulators, the Exchange Act states that to be registered as a national securities exchange or association, SEC must determine that the exchange’s or association’s rules do not impose any burden on competition and do not permit any unfair discrimination. 
SROs are also responsible for enforcing members' compliance with their rules and with federal securities laws by conducting surveillance of trading in their markets and examining the operations of member broker-dealers. The Exchange Act also mandates that securities SROs operate under direct SEC oversight and authorizes SEC to ensure that SROs do not abuse their regulatory powers. SEC inspects SROs to ensure that they are fulfilling their SRO duties, focusing on, among other things, the quality of SRO financial operations examination programs; market surveillance, investigations, and disciplinary programs; and customer complaint review programs. SEC also reviews rule changes proposed by SROs for consistency with the Exchange Act and SEC rules. Finally, SEC provides direct regulation of the markets and their participants in a number of ways, including direct examinations of broker-dealers, investigations into markets and their participants, disciplinary actions for violations of the Exchange Act, and promulgation of rules and regulations. Nasdaq increasingly has been in competition with NASD members that operate as ECNs, while NYSE has competed for many years with members that trade its listed securities off of the exchange. This competition has heightened some SRO members' concerns that an SRO could abuse its regulatory authority through rule-making processes, disciplinary actions, or use of proprietary information. Market participants expect that demutualization will increase the ability of exchanges and other markets to compete both domestically and internationally; however, their views differ on how it might affect potential abuses of regulatory authority related to conflicts of interest. SEC generally concluded that it is too soon to predict the effects of demutualization. Concerns about conflicts of interest persist despite measures by SEC and the SROs that are intended to address them.
NASD's dual roles as the owner-operator of Nasdaq and as the primary SRO for the 11 ECNs that compete with Nasdaq have created conflicts of interest between NASD's economic interests and regulatory responsibilities, which NASD's pending spin-off of Nasdaq is intended to mitigate (discussed further below). SEC regulations require ECNs, as registered broker-dealers, to be members of at least one SRO. According to an ECN official, the ECNs chose NASD as their primary SRO because the unique trading rules as well as other features of the Nasdaq market were conducive to the growth of the ECNs' business. ECNs are an alternative to the Nasdaq market for trading in Nasdaq stocks. They differ from Nasdaq and registered exchanges principally in that they do not require an intermediary to execute orders. ECNs match orders electronically and anonymously, while Nasdaq broker-dealers, in their roles as market makers, act as intermediaries for all customer orders. In deciding whether to use an ECN or a Nasdaq market maker, customers consider such factors as execution quality, transaction costs, and anonymity. The number of ECNs and their share of total Nasdaq volume have grown significantly since 1993. According to SEC, in 1993 all alternative trading systems (including one ECN) accounted for about 13 percent of the total volume in Nasdaq securities. By October 2001, ECNs alone accounted for over 30 percent of the total volume in Nasdaq securities. SEC and certain ECNs have attributed a significant part of the growth in the volume of Nasdaq securities traded on ECNs to the order-handling rules that SEC promulgated to enhance competition and pricing efficiency in the securities markets. Before the rules became effective in 1997, only ECN subscribers had access to the orders and, thus, to the prices that ECNs displayed for Nasdaq securities.
Implementation of the rules resulted in ECNs' orders for Nasdaq securities being displayed and accessible to the public on Nasdaq, thereby providing the public an opportunity to obtain any better prices that might be available on ECNs. According to one ECN, both Nasdaq's access to ECNs and the efficiencies that ECNs brought to the Nasdaq market through the electronic matching of orders have contributed to the overall growth of trading in Nasdaq securities. NYSE, as an SRO that operates a market, has also confronted conflicts of interest between its economic interests and its regulatory responsibilities. Specifically, for many years the exchange has regulated competing member broker-dealers that trade its listed stocks off of the exchange. Customer orders for NYSE stocks that are not sent to the floor of the exchange to be executed are executed internally by a broker-dealer or in an alternative market. A broker-dealer internalizes an order when it executes a customer order for a security in house or directs the order to an affiliated dealer, instead of sending the order to an exchange or another market. Numerous large broker-dealers that are NYSE members have also established relationships with regional exchange specialists and sometimes route their orders to them instead of to NYSE. In addition, member broker-dealers direct orders to alternative markets, such as ECNs or third-market broker-dealers. Competition with member broker-dealers may increase with the May 2000 rescission of NYSE Rule 390, which had restricted off-exchange trading by NYSE members in NYSE-listed securities. Some SRO members expressed concern that increased competition between SROs and their members had given SROs a greater incentive to abuse their regulatory authority. These members were concerned that SROs could adopt rules that unfairly impede the ability of members to compete against the SROs—for example, by adopting rules that give preference to noncompetitors' orders.
An official from one broker-dealer also noted that an SRO might sanction a competing member more severely than other members by, for instance, inappropriately concluding that the member had failed to satisfy its best-execution obligation when it routed an order to a competing market for execution rather than to the SRO. ECNs have also expressed concern that an SRO, in its regulatory capacity, could obtain proprietary information from a member and, in its capacity as a market operator, inappropriately use the information. For example, an SRO might obtain proprietary information about its members’ customers and then use that information to market its services to the customers. Some institutional market users that were not SRO members were more broadly concerned about how conflicts of interest in the self-regulatory structure affected the fairness and efficiency of the securities markets. These market users asserted that the self-regulatory structure was inherently biased in favor of broker-dealers that were SRO members and owners and that SROs interpreted their rules to favor these broker-dealers. These market users, as well as some broker-dealers, told us that they did not believe that their concerns were addressed when these concerns diverged from the interests of the most powerful broker-dealers at the exchange. Market users also said that the current self-regulatory structure ultimately impeded market-driven innovations that could improve competition and benefit the investing public. One investment company official cited NYSE Rule 390, which had been in place for 20 years, as a classic example of the difficulty of repealing an anticompetitive SRO rule. Demutualization has heightened the concerns of some SRO members about the potential for abuses of regulatory authority. 
They expressed concern that a demutualized, for-profit market operator might be more likely to misuse its regulatory authority or be less diligent in fulfilling its regulatory responsibilities in a desire to increase profits. For example, demutualized SROs might have a greater incentive to propose rules that unfairly disadvantage members or other markets or inappropriately sanction or otherwise discipline members against which the SROs compete. Other SRO members expressed concern that demutualized market operators might have a greater incentive to either insufficiently fund or otherwise inadequately fulfill their self-regulatory responsibilities. However, other market participants believed demutualization could reduce at least some conflicts and lead to needed changes in market structure. Market users such as mutual funds asserted that by diversifying market ownership through the sale of stock, and thus reducing the influence of broker-dealers, demutualization could reduce the conflicts of interest inherent in a self-regulatory structure based on member-owned markets that regulate themselves. According to these market users, diversifying the exchange ownership base could shift management’s focus from the narrow interests of intermediaries to the broader interests of all market participants, potentially benefiting the investing public. According to NYSE officials, demutualization and for-profit status raise no new issues for the exchange. NYSE could demutualize or its members could become its shareholders without any change in the incentives that currently motivate exchange actions. That is, demutualization does not introduce any new conflicts of interest issues. NYSE’s chairman noted that the exchange would continue to have a strong economic incentive to preserve its reputation as a well-regulated entity, regardless of its organizational structure. 
Demutualization is expected to enhance the ability of markets to compete by enabling them to raise capital in the securities markets to fund business efforts and by better aligning the economic interests of markets and their owners. Under current member-owned structures, actions markets might otherwise take to enhance their competitiveness might be rejected or adopted very slowly by member-owners that do not perceive a direct benefit from them. For example, member-owners (that is, broker-dealers) that derive income from acting as intermediaries in the trade execution process might be reluctant to support the introduction of technology if it reduces their income from acting as intermediaries. In contrast, shareholders of a demutualized exchange would be expected to support cost-effective technology that improves customer service and thus the competitiveness of the market, because they would expect it to increase the value of their investments by attracting more business to the exchange. To improve their competitiveness, Nasdaq and the Pacific Exchange, as well as several U.S. futures and foreign exchanges, have demutualized or are in the process of doing so. In 1999, NYSE also announced plans to demutualize but subsequently postponed its plans indefinitely. An SEC economist said that the effects of demutualization could not be predicted, as they depended on a balance between the competing incentives of maximizing profits and providing effective regulation. The balance between these incentives would differ depending on who owned and controlled the market. Also, as under the current ownership structure, the incentive to reduce regulatory costs would be balanced against the risk that any resulting reduction in regulation might harm the public’s confidence in the integrity of the market. A loss of public confidence could ultimately reduce profitability if, for example, investors moved their transactions to other markets. 
SEC officials further explained that both for-profit and not-for-profit SROs face inherent conflicts of interest, but noted that demutualization has the potential to heighten or create variations of existing conflicts of interest. SEC officials stated, for example, that while all SROs face pressure to minimize the costs of fulfilling their regulatory obligations, for-profit entities could be more aggressive in promoting their commercial interests, such as by using regulatory fees to finance nonregulatory functions. SEC officials emphasized, however, that because conflicts of interest already exist within the not-for-profit structure, demutualization does not necessarily require a wholesale change in regulatory approach. They noted that the Exchange Act has significant safeguards to address conflicts of interest and abuses of regulatory power. Finally, in commenting on the growing trend among SROs to contract out certain regulatory services, SEC officials stressed that SROs are still legally responsible for fulfilling self-regulatory obligations that are contracted out. NASD has attempted to address concerns about conflicts of interest by reorganizing its regulatory operations and is in the process of selling its market operations. In addition, NASD and NYSE officials told us that their markets have relied on internal controls to address these concerns. SEC has used its authority under the Exchange Act to monitor the markets and address concerns about abuses of regulatory authority. In 1996, NASD created NASDR as a separate nonprofit subsidiary to address concerns related to the conflicts between NASD’s regulatory functions and market operations. Beginning in March 2000, NASD began implementing plans to sell Nasdaq to NASD members and other investors in order to limit the common ownership of Nasdaq and NASDR. In November 2000, Nasdaq filed an application with SEC to register as a national securities exchange. 
The planned restructuring will separate NASD and NASDR from Nasdaq and, in NASD’s view, minimize any issues related to conflicts of interest, including those related to demutualization. Under the restructuring, ECNs and other broker-dealers doing business with the public (holding customer accounts) will remain NASD members. They will continue to be regulated by NASD but will no longer be competing against an NASD-operated market. According to NASD, the restructuring will be substantially complete with the sale of NASD’s remaining Nasdaq common stock, which is expected to occur by June 2002. However, NASD will retain an interest in Nasdaq after this date. According to one ECN, the planned spin-off of Nasdaq will not fully solve the conflict of interest problem because, not only will NASD retain an interest in Nasdaq, but Nasdaq will still be NASDR’s biggest customer for its regulatory services. As such, NASDR could face a conflict between its ethical responsibility as a regulatory services provider and the economic incentive to, among other things, retain its largest revenue source. Accordingly, competitors might be concerned that NASDR will perform its regulatory services in a way that gives Nasdaq a competitive advantage. Also, because Nasdaq has applied to become an SRO as part of NASD’s plan to demutualize Nasdaq, the restructuring will not address conflicts of interest related to market-specific regulation by the new SRO. That is, as an SRO, Nasdaq will have regulatory authority over members that operate or use competing markets. In addition to adopting a structure designed to minimize conflicts between regulation and competition, NASD’s self-regulatory functions are subject to its internal controls and the oversight of SEC and the NASD and NASDR boards of directors. The boards of directors, which include public members, are intended to provide additional assurance against abuses of regulatory authority.
The NASD board, to which the Nasdaq board will continue reporting until the spin-off is complete, and the NASDR board both have a majority of public members, while the Nasdaq board has an equal number of public and industry members. The boards also receive advice from various standing advisory committees. In addition, all NASD employees are required to sign a statement attesting that they will not share confidential information with any unauthorized person, inside or outside of the organization. NASD officials described other internal procedures that should minimize abuses of regulatory authority. According to NASD officials, NASD generally solicits comments from its membership and the public on regulatory rule proposals, and its board takes those comments into account before NASD files these proposals with SEC. In its disciplinary process, case initiation is governed by internal procedures that require approval from a staff body independent of NASDR enforcement and market regulation staff. After a complaint is filed, the case is heard before a three-member body that is also independent of these staff. If the matter is appealed, the appellate decision is rendered by the National Adjudicatory Council, which is made up of an equal number of industry and non-industry members. An NYSE official told us that the exchange maintains strict internal controls to address concerns about conflicts of interest between its market operations and regulatory oversight. For example, NYSE cited controls to prevent market operations staff from gaining access to information on members that has been obtained for regulatory purposes. Additionally, NYSE policy requires that regulatory staff sign a statement attesting that they will not share confidential information with market operations staff. NYSE policy statements also include details on compliance with the securities laws, including the prohibition of any unfair treatment of customers or members. 
NYSE’s self-regulatory functions are also subject to the oversight of SEC and the NYSE board of directors, which is intended to provide additional controls against abuses of regulatory authority. The board has 27 members—12 directors from the securities industry, 12 public directors that are independent of the securities industry, and 3 exchange officials. The board receives advice from various standing advisory committees, among them a committee comprising institutional market users. According to NYSE officials, institutional market users can voice their concerns to the board through this committee. The NYSE disciplinary process is also governed by a three-member review panel. A disciplinary decision by this panel can be appealed to the NYSE Board of Directors, which renders its decision after consultation with a special review committee whose membership is balanced between industry and non-industry members. SEC has used its authority under the Exchange Act to address concerns about abuses of regulatory authority arising from conflicts of interest, including those related to demutualization and other issues. SEC has addressed such conflicts through its oversight activities, which include reviewing and approving SRO proposals for new rules and amendments to existing rules, reviewing SRO final disciplinary proceedings, and other measures. SEC reviews SRO proposals for new rules and for amendments to existing rules to ensure that they are not anticompetitive, unfairly discriminatory, or otherwise detrimental to the markets. Section 19(b)(1) of the Exchange Act requires SROs to file copies of proposals for new rules and amendments to existing rules with SEC. Once a proposal is filed, SEC is required to publish notice of the proposal and provide an opportunity for public comment. SEC is also required, among other things, to consider the competitive effects of the rule. 
According to SEC, its rule reviews address the concerns of some SRO members that an SRO could abuse its authority by adopting rules that unfairly impede the ability of members to compete against the SRO. SEC officials noted, for example, that while an SRO could propose an anticompetitive or discriminatory rule, SEC would not approve it. According to officials of one ECN, SEC’s review of SRO rules, including the public comment process, has been one of the most effective ways for ECNs to have their concerns addressed. In particular, they said that SEC has addressed comments ECNs have submitted in response to SRO rule proposals. For example, ECNs expressed concerns about the anticompetitiveness of NASD’s SuperMontage proposal, and NASD, at SEC’s direction, modified the rule numerous times in an attempt to address ECN concerns. More recently, another ECN expressed concern to SEC about the competitive effects of a proposed rule that would allow Nasdaq to charge higher transaction fees to members that report less than 95 percent of their trades through Nasdaq but use Nasdaq’s quotation system or make limited use of its execution systems. The ECN was concerned, among other things, that the rule was filed under section 19(b)(3)(A) of the Exchange Act, under which such rules are effective on filing. Following discussions with SEC, NASD refiled the rule proposal under section 19(b)(2) of the Exchange Act pursuant to which it would be subject to the public comment process and SEC approval before becoming effective. According to SEC officials, SROs have withdrawn rule proposals after SEC expressed concern that the proposals might be anticompetitive. 
Some market participants, although agreeing that SEC’s public comment process provides a mechanism for addressing concerns about potentially anticompetitive activity by an SRO, also said that SEC lacks the resources, tools, and expertise to identify and adequately respond to all instances of anticompetitive activity by an SRO toward member competitors. According to one ECN, an SRO committed to a course of anticompetitive activity through a variety of rulemaking and rule enforcement activities may be able to achieve success, particularly in the short term, using section 19(b)(3)(A) of the Exchange Act. This ECN was concerned about the ability of an SRO to potentially obtain a significant long-term competitive advantage over its member competitors through such activities, given the quickly evolving and highly competitive nature of the securities industry. To ensure that SROs’ actions are not discriminatory or otherwise anticompetitive, SEC also reviews SROs’ disciplinary actions during inspections. According to SEC, these reviews address the concerns of some SRO members that an SRO could abuse its regulatory authority by sanctioning a competing member inappropriately or more severely than a noncompeting member. The Exchange Act requires SROs, in administering their affairs, to provide fair representation for members. According to SEC, the fair application of SROs’ authority to adjudicate disciplinary actions, including meting out fines and suspensions, may be particularly important, because these actions can have significant ramifications for broker-dealers. The Exchange Act provides SEC with a check on SRO disciplinary actions that are discriminatory or otherwise anticompetitive, requiring SROs that impose final disciplinary sanctions on members to also file notice with SEC. Such actions are subject to SEC’s review after appropriate notice and an opportunity for a hearing.
Upon appeal, SEC must determine whether the action is consistent with the Exchange Act, SEC rules, and SRO rules and then either affirm, modify, set aside, or remand the action to the SRO for further proceedings. SEC uses additional approaches to addressing industry concerns, such as concept releases, special committees, and public hearings. For example, SEC published a concept release in December 1999 to obtain views on the fairness and reasonableness of fees charged for market information and on the role of revenues derived from such fees in funding SROs. In commenting on the release, some SRO members questioned the fairness of funding SROs, which are competitors for customer order flow, with revenues from the sale of market information. Because of the diversity of comments received and concerns raised by the concept release, SEC created an advisory committee on market information in August 2000 to provide the agency further guidance. SEC officials said they were reviewing the advisory committee’s September 2001 report and the comments received since it was issued to determine how to address concerns about market data. Some broker-dealers that were members of multiple SROs told us that differences in rules and their interpretations among SROs resulted in operational inefficiencies. While no formal process exists for ensuring consistency among rules that might cause material regulatory inefficiencies, regulatory officials said that the existing rule review and public comment process has been effective in addressing related concerns. An ongoing NASD effort could lead to the resolution of some of these concerns, but regulatory cooperation will be required, as NASD’s authority is limited to its own rules. In addition, some broker-dealers with multiple SRO memberships said that examinations by multiple SROs were unnecessarily burdensome.
Over the years, SEC and the SROs have taken steps to improve examination efficiency, most recently through efforts to improve examination coordination. However, some broker-dealers told us that these efforts have not fully addressed their concerns. According to both market participants and regulators, SROs generally had the same or similar rules. However, some broker-dealers with multiple SRO memberships—principally NASD and NYSE memberships—were concerned that differences in rules and rule interpretations among SROs were causing operational inefficiencies. Some broker-dealers had multiple memberships because, if they were active in more than one market, they could choose to become members of the SROs operating those markets; and, if they did business with the public, they were also required to belong to NASD. Broker-dealers are subject to the regulatory oversight of each SRO to which they belong, as well as to the oversight of SEC and state securities regulators. Some broker-dealers expressed concern about inefficiencies associated with monitoring and complying with SROs’ varying rules and rule interpretations in areas such as determining what types of customer complaints to report, how long to retain certain written records, and which proficiency examinations broker-dealer employees must take and when. For example, NASD and NYSE do not use the same proficiency examinations for order takers, sales representatives, and branch managers. Further, NASD and NYSE rules and rule interpretations differ on matters such as whether order takers and sales representatives must pass the same proficiency examinations and when candidates that pass these examinations can be promoted to branch managers. 
According to some broker-dealers, to the extent that the skills and proficiency of order takers and sales representatives affect the quality of customer protection, these differences could result in varying levels of customer protection among firms, while at the same time disadvantaging some firms in their ability to hire and retain staff. When we discussed the overall effect of differences in rules and their interpretations with officials of several broker-dealers, they stressed that their concerns were not about the cost of one or more specific instances of differences in rules and their interpretations, but about their cumulative effect on the efficient use of compliance resources. Broker-dealers emphasized that the purpose of compliance is to protect the integrity of the markets and investors, and that the effort needed to sort out compliance with multiple rules and rule interpretations strains these resources. We could not assess the overall effect of differences in rules and their interpretations because of the anecdotal nature of the information provided. While no formal process exists for addressing differences among SRO rules and interpretations that might cause material regulatory inefficiencies, SEC, NASDR, and NYSE officials told us that they have found the existing rule review and public comment process to be effective for addressing concerns about rules. According to SEC officials, SEC might use this process to try to harmonize proposed SRO rules if the agency identified significant differences or inconsistencies in them. They said that as part of the review process SEC staff ask SROs to justify any differences between a proposed rule and other SRO or SEC rules. For example, SEC officials told us that through this process they ensured that NASD and NYSE harmonized their rules on margin requirements for day traders. SEC also worked with NASD and NYSE to coordinate anti-money-laundering and analyst disclosure rules.
According to NYSE officials, only the reporting requirements for the money laundering rules differ. These officials also said that the exchange is working with NASD to develop uniform sales practice and margin rules for single stock futures. SEC also commented that, while the review and public comment process can address market participants’ concerns that are raised at the time a rule proposal is filed, the burdens associated with different SRO rules may not become apparent until long after the rules have been implemented. SEC officials further noted that the Exchange Act does not require that all SRO rules be uniform. They said that SROs are entitled to set whatever rules they determine are appropriate for their markets as long as the rules comply with the Exchange Act. SEC officials stressed that the agency would not impede one SRO from establishing higher standards than another, noting that many of the differing rules exist for legitimate business reasons and reflect differences in business models among markets. NYSE officials also told us that most NYSE member firms that do business with the public are larger broker-dealers and that the rules imposed on larger firms are not always appropriate for smaller firms. An ongoing NASD rule modernization effort has identified differences among NASD and other SROs’ rules and could lead to the resolution of some differences. In 1998, NASD began a review to identify rules that could be repealed or modernized. In May 2001, NASD issued a notice to members stating that it intended to expand and build upon this review with the goal of ensuring that NASD rules accomplish their objectives without imposing unnecessary regulatory burdens. NASD also indicated that it was developing an ongoing process for identifying rules with regulatory costs that outweighed their benefits, including rules that were obsolete because of technological changes.
The response of the Securities Industry Association (SIA) to the initiative discussed NASD rules that SIA concluded were inconsistent with those of other SROs and SEC. For example, SIA’s response cited an NASD rule on posting price quotations that SIA concluded was inconsistent with an SEC rule on displaying limit orders. NASD stated that it had begun the process of meeting with other regulators, including NYSE and the states, in an effort to reconcile inconsistencies among various rules. It also provided other regulators with pertinent comments received in response to its notice to members. NASD officials told us that although NASD was coordinating its modernization efforts with other regulators and hoped to eliminate inconsistencies among rules, NASD could address only its own rules. SEC and SROs have taken actions to improve the efficiency of SRO examinations of broker-dealers with multiple SRO memberships. These actions stemmed from (1) a 1976 SEC rule under which the agency assigns responsibility for conducting a broker-dealer’s financial and operational soundness examinations to a single SRO, called the designated examining authority (DEA); (2) another 1976 SEC rule that facilitated agreements among SROs to reallocate certain oversight responsibilities; and (3) a 1995 memorandum of understanding (MOU) among SEC, four SROs, and state regulators to coordinate examinations. While acknowledging that coordination efforts have improved examination efficiency, some broker-dealers said that additional improvements in efficiency are needed. In its role as an SRO, NASD (through NASDR) is to periodically examine its members’ operations every 1 to 4 years (depending on, among other things, the size of the broker-dealer). Also in its role as an SRO, NYSE is to conduct annual examinations of members that do business with the public. NASD and NYSE examinations include two types of reviews. The financial and operational review determines compliance with requirements addressing business soundness.
The sales practice review determines compliance with requirements addressing, among other things, the quality of trade execution, the existence of unauthorized trading, the fairness of pricing, and fair dealings with customers, as well as compliance with market-specific rules governing member conduct and trade execution. SROs may also conduct cause or special purpose examinations as necessary to address specific problems or industry concerns. In 1976, SEC adopted Rule 17d-1, under which it designates a single SRO as the DEA responsible for financial compliance examinations of individual broker-dealers that are members of multiple SROs. This rule was adopted pursuant to the Securities Act Amendments of 1975, which authorizes SEC to adopt rules to relieve SROs of the duplicative responsibility of examining their members for compliance with the Exchange Act, its rules, and SRO rules when the broker-dealer is a member of more than one SRO. However, because Rule 17d-1 relates only to financial compliance examinations, the common members of NASD and NYSE remained subject to sales practice examinations by both NASDR and NYSE. According to SEC officials, the agency selects the DEA for common members based on the market the broker-dealer uses to execute a preponderance of its customer orders or the market in which the broker-dealer has the most memberships. As of March 31, 2002, according to NYSE officials, NYSE was the DEA for about 250 broker-dealers that were also members of and subject to examination by NASD. According to NYSE, these firms represented approximately 90 percent of customer assets in the securities industry. Also in 1976, SEC adopted Rule 17d-2, which permitted SROs to establish joint plans for allocating certain regulatory responsibilities that involved their common members. Under the rule, which was also adopted as a result of the Securities Act Amendments of 1975, all plans must be filed with SEC for approval.
SEC was to approve plans that, among other things, fostered cooperation and coordination among SROs. For example, SEC approved a plan in 1983 under which the American Stock Exchange, the Chicago Board Options Exchange, NASD, NYSE, the Pacific Exchange, and the Philadelphia Stock Exchange periodically rotate among themselves responsibility for options-related sales practice examinations of their common members. SEC approved other plans in the 1970s and 1980s, under which the American Stock Exchange and the regional exchanges deferred certain regulatory responsibilities involving their common members to the DEA (either NASD or NYSE). Concurrent with proposed legislation and related hearings, SEC, four SROs, and the state securities regulators entered into an MOU in November 1995 to coordinate broker-dealer examinations. The MOU provided for the SROs and states (through the North American Securities Administrators Association) to meet requests from broker-dealers to coordinate specified on-site regulatory examinations. In responding to these requests, SROs were to share information and devise ways to avoid duplication. To the extent practicable, sales practice examinations conducted by the DEA and any other SROs were to be conducted simultaneously with the DEA’s financial and operational examination. Cause examinations that resulted from customer complaints or other matters were not subject to the MOU, nor were the examinations that SEC conducted to evaluate the quality of SRO oversight. However, the MOU encouraged coordination and cooperation for all examinations to the extent possible. An SEC official told us that the agency closely monitors and assesses SRO examination coordination. According to SEC and SRO officials, representatives of SEC, all SROs, and the states attend annual summits to discuss examination coordination, review examination results from the prior year, and develop plans for coordinating examinations for the coming year.
In addition, regional SEC staff and SRO compliance staff are to meet quarterly to discuss and plan examination coordination, and SRO examiners are to meet monthly to plan specific examinations of common members. At these latter meetings, examiners are expected to, among other things, collaborate on fieldwork dates, document requests, and broker-dealer entrance and closeout meetings. SROs also are to share their prior examination reports before beginning fieldwork. Under the 1995 MOU, SEC agreed to maintain a computerized database to monitor examination coordination. SEC developed the criteria for coordinated examinations under the MOU as well as a database to track the number of broker-dealers that requested and received coordinated examinations. Under SEC criteria, examinations are coordinated when the SROs have at least 1 day of concurrent fieldwork at the targeted broker-dealer. An SEC official told us, however, that concurrent fieldwork was only one measure of coordination and did not completely reflect the quality of coordination. Using this measure, SEC calculated that from 1997 through 2000 an average of 90 percent of those requesting coordinated examinations received them and that in 2000, 96 percent of requestors received coordinated examinations. According to SEC officials, some requests for coordinated examinations could not be honored because other scheduled examinations took longer than expected or because examiners had been reassigned to previously unscheduled cause examinations. SEC’s most recent efforts to address concerns about multiple examinations have focused on improving examination coordination. In a June 1998 report, SIA concluded that, although SEC and the SROs had made considerable progress toward improving examination coordination for broker-dealers with multiple SRO memberships, more work remained to be done to reduce duplication of efforts.
In discussions with us, some broker-dealers expressed continued dissatisfaction with inefficiencies associated with multiple examinations. For example, although examinations could take a few weeks, according to some broker-dealers, when all examination steps (including both pre- and post-examination) were taken into account, firms could be subject to some part of the examination process continuously throughout the year, even with coordination. Because of the anecdotal nature of the information provided, we could not determine the extent to which multiple examinations caused inefficiencies or the extent to which efforts to address inefficiencies through improved coordination were successful. SRO data show that broker-dealers’ participation in the coordinated examination program has been declining. For example, the percentage of NYSE and NASD member firms participating in the program declined from about 63 percent in 1998 to about 54 percent in 2000. According to SEC officials, these numbers do not necessarily indicate problems with the coordinated examination program, since broker-dealers opt in or out of the program for many reasons. SEC officials told us that some broker-dealers that have tried the coordinated examination program have concluded that it is more efficient for them to have two separate examinations. They said that an average of five broker-dealers participating in the coordinated examination program leave the program each year, typically because they lacked the space to accommodate the larger teams that accompany concurrent examinations or otherwise found the examinations to be disruptive to their operations. For example, some broker-dealers have concluded that it is not efficient for them to have staff with expertise in different areas of the firm’s operations (such as sales practices and finance) available to interact with examiners at the same time.
SEC officials told us that they were aware of broker-dealers’ concerns about examination coordination and that these concerns had been addressed on a case-by-case basis. SEC officials stated that they often sought informal feedback from individual broker-dealers and industry trade groups and would continue to urge broker-dealers to discuss examinations and the examination process with SEC and SRO staff. SEC officials also said that in mid-2001, the agency began a pilot program to coordinate the examinations of one large broker-dealer. The pilot includes SEC, NYSE, NASDR, the Chicago Board Options Exchange, and a number of state regulators. SEC expects the program to help determine whether the agency can enhance information sharing among regulators and alleviate any burdens associated with broker-dealers being examined by multiple regulators. Securities market participants have discussed alternative approaches to self-regulation that would address, at least in part, concerns about the current self-regulatory structure. SEC officials said that the agency did not plan to dictate changes in the current structure to address these concerns but instead preferred that market participants reach a consensus on whether a need for change existed and, if so, the type of change that would be appropriate. One alternative would expand the DEA program beyond financial compliance to cover sales practices. An alternative some ECNs have discussed for addressing their concerns involves registering as exchanges and becoming SROs. Also, the broader securities industry has discussed alternatives that would more dramatically change or replace the current self-regulatory structure. 
These alternatives were detailed in an SIA report published in January 2000 and included consolidating responsibility for broker-dealer self-regulation and cross-market issues in a single entity not affiliated with any market (hybrid SRO model), consolidating all self-regulation—market-specific and broker-dealer—in a single entity (single SRO model), or having SEC assume all the regulatory functions currently performed by SROs (SEC-only model). At this time, none of these models appears to have the support from market participants needed for implementation. According to SEC officials, the agency does not plan to dictate changes to the regulatory structure. SEC officials told us that they believed the agency had the authority it needed to make changes but preferred that the industry reach a consensus on whether the need for change existed and, if so, what type. Additionally, they said that industry initiatives, such as Nasdaq’s application to register as an exchange, were transforming the regulatory landscape. They elaborated that if Nasdaq became an exchange, it would separate from NASD, mitigating ECN concerns about conflicts of interest. In the meantime, SEC officials said that the current self-regulatory structure had been working adequately and that immediate action was not needed. SEC noted that members could initiate improvements through their SROs, express opposition to a proposed course of action directly to the SRO, or voice their concerns to SEC. Additionally, broker-dealers could respond to proposed SRO rules both through SRO committees and during the public comment process and could also use their membership in organizations such as SIA to lobby for change. The Exchange Act provisions under which SEC assigns a single SRO as DEA with responsibility for financial compliance examinations could be amended to include sales practice examinations.
The result would be that each broker-dealer would have only one examining SRO, thereby eliminating examinations by multiple SROs. However, this approach would not address the conflicts of interest that arise when SROs that operate a market regulate competitors or the differences in rules and rule interpretations among SROs. SEC opposed a provision to expand the DEA program that was included in proposed 1995 legislation. In related congressional hearings, the then SEC chairman testified that, while SROs currently monitor trading activities in their own markets, the provision would seem to require that DEAs also monitor trading in other SROs’ markets, which could be costly and significantly less effective than the current system. The chairman also pointed out that while an SRO has considerable incentive to enforce its own rules, its incentive to enforce the rules of other SROs might not be as strong. He stated that requiring an SRO to enforce the rules of another SRO would be inconsistent with section 19(g) of the Exchange Act, under which each SRO is to enforce compliance with its own rules. Some market participants have also discussed a proposal that would allow broker-dealers, rather than SEC, to select their DEAs. NASD officials were concerned that this proposal could threaten NASD’s ability to provide affordable regulatory services to small firms. NASD officials said that, under this proposal, the large broker-dealers might select NYSE as their DEA, while the small ones might select NASD. NASD would then lose the revenue from large broker-dealers that currently subsidizes the cost of regulatory services for smaller broker-dealers. For example, according to NASD officials, the smallest NASD member pays $600 in annual fees, but the average examination for such a broker-dealer costs from $7,000 to $10,000. According to NASD officials, allowing broker-dealers to select their DEAs could threaten the existence of NASD and thousands of small broker-dealers. 
An ECN or other alternative trading system could become an SRO by registering as an exchange and in doing so would avoid regulation by a competing SRO. Having each ECN become an SRO would reduce conflicts of interest that can arise when SROs that operate a market regulate ECNs. However, this alternative would not address the regulatory inefficiencies that result from broker-dealers having multiple SRO memberships. Three ECNs—Island, Archipelago, and NexTrade—have explored becoming securities exchanges, although no formal filings are currently before SEC. Archipelago has since become a facility of the Pacific Exchange. NASD officials expressed a general concern that, if SROs proliferate, regulatory information would be reported to different regulators without adequate coordination. Because no one regulator would see all relevant information, abuses could continue undetected. They were further concerned that competition among regulators—to be distinguished from competition among markets—could lead to a race to the lowest regulatory standards and undermine investor confidence in the securities markets. Other market participants have observed that by marketing the quality of their services to potential clients, competing regulators could create higher regulatory standards. One ECN emphasized that SEC’s existing SRO oversight programs focus on assessing whether regulatory service providers meet acceptable levels of performance. The SIA report endorsed replacing the current self-regulatory structure with the hybrid SRO model, a proposal that was discussed in the early 1970s. Under the hybrid SRO model, a single entity unaffiliated with any market would be created to assume responsibility for broker-dealer oversight and cross-market rules, including those related to sales practices, industry admissions, financial responsibility, and cross-market trading. 
Individual SROs would remain responsible for market-specific rules such as those related to listings, governance, and market-specific trading. Although some SIA members said it was premature to revamp the current regulatory structure, the majority supported the hybrid SRO model because they believed that it would reduce member-related conflicts of interest and SRO inefficiencies. According to SIA, potential conflicts of interest would be reduced because the new SRO would not be affiliated with a competing market. Eliminating duplicative SRO examinations would reduce inefficiencies in areas such as rulemaking, examinations, and staffing. SEC officials agreed that consolidating member regulation into one SRO was an advantage of the hybrid SRO model. They noted that the industry was moving toward a hybrid model as Nasdaq separated from NASD and NASD contracted to provide regulatory services to more SROs. Although NASD officials told us that they did not have an official position on the hybrid SRO model, NASD has supported the concept of separating market-specific and member regulation in the past. In February 2000 testimony, the then NASD chairman noted that NASD’s separation of Nasdaq and NASDR is the first step toward “the right regulatory model: the hybrid SRO model.” In stating its opposition to self-regulatory changes, the NYSE chairman said that spinning off NYSE regulation into an unaffiliated regulatory entity would weaken investor protection and do irreparable harm to the NYSE brand name. He noted that funding a separate regulatory body independent of the exchange would eliminate economic efficiencies and synergies that result from the integration of regulation into the NYSE market as a whole. 
NYSE officials told us that because the hybrid model separates member from market-specific regulation, the hybrid regulator’s examinations would not review the operations of the entire broker-dealer and thus would be less effective than examinations conducted under the current regulatory approach. NYSE officials also said that the exchange had postponed its plan to demutualize for several reasons, including concern that such action might have had the negative consequence of forcing NYSE to separate its regulatory and market functions. SIA agreed that the disadvantages of the hybrid SRO model included the model’s inability to address market-specific conflicts of interest. SIA and others concluded, however, that the advantage of having personnel with specialized knowledge overseeing market operations outweighed this disadvantage. According to the SIA report, SIA attempted to gather data showing that the hybrid SRO model would be a cost-effective approach to self-regulation. However, it was unable to obtain the data it needed from the SROs. In the absence of active support from NYSE and SEC for the model, SIA is not currently pursuing it as a means of addressing market participants’ concerns about conflicts of interest and regulatory inefficiencies. The SIA report also discussed the single-SRO model as a means of addressing concerns about both conflicts of interest and regulatory inefficiencies. Under this model, a single SRO would be vested with responsibility for all regulatory functions currently performed by the SROs, including market-specific and broker-dealer regulation. According to SIA, the single SRO model could eliminate the conflicts of interest and regulatory inefficiencies associated with multiple SROs, including those that would remain under the hybrid SRO model. 
However, SIA did not endorse this alternative, primarily because of the risk that self-regulation would become too far removed from the functioning of the markets—a point of view that was similar to NYSE’s comments on the hybrid model. In addition, and in contrast to broker-dealer regulation, SEC officials said that it might not be appropriate or feasible to give a single SRO responsibility for surveilling all the markets because of differences in the way trades are executed in each. That is, Nasdaq, NYSE, and other markets have different rules that reflect their different ways of executing trades. SEC has taken the position that SROs should continue to have ultimate responsibility for enforcing rules unique to the SRO or relating to transactions executed in the SRO’s market. Market operators have generally shared this view. The SEC-only model would address concerns about conflicts of interest and regulatory inefficiencies by eliminating all self-regulation. Under this model, SEC would assume all the regulatory functions currently performed by SROs. Under a variation of this alternative that is not discussed in the SIA report, SEC would assume just NASD’s obligation to regulate ECNs and other alternative trading systems. SIA did not endorse the SEC-only model because doing so would eliminate self-regulation of the securities industry, taking with it the expertise that market participants contribute. SIA also expected the SEC-only model to be more expensive and bureaucratic, because implementing it would require additional SEC staff and mechanisms to replace SRO regulatory staff and processes. In addition, according to the report and SEC, a previous SEC attempt at direct regulation was not successful, owing to its high cost and low quality (relative to self-regulation), convincing SEC and other market participants that it was not a feasible regulatory approach. 
As competition continues to drive the evolution of the securities markets, concerns about the conflicts of interest inherent in the current self-regulatory structure have grown in importance. Such concerns, if not effectively addressed, could undermine the cooperative nature of self-regulation and erode confidence in the fairness of the securities markets. As a result, an ongoing challenge for SEC and the SROs will be to respond effectively to both real and perceived conflicts of interest. The extent of the regulatory burden generated by differences in SROs’ rules and their interpretation and by multiple examinations of broker-dealers is unknown. Obtaining a better understanding of related concerns could help address the dissatisfaction some broker-dealers have expressed with the current self-regulatory structure. For example, differences in rules and their interpretations have been used to justify the need for multiple examinations. As a result, the success of efforts to address concerns about multiple examinations could be related to how concerns about differences in rules are addressed. To improve its understanding of broker-dealers’ concerns, SEC could work with NASD, NYSE, and other market participants to identify and address differences in rules that might cause material inefficiencies in the regulatory process. SEC could also work with these market participants and through its ongoing pilot program to better assess whether further improvements in examination coordination could address the most significant problems associated with multiple examinations of broker-dealers. As part of these efforts, SEC could instruct the SROs to provide the agency with formal assessments of broker-dealers’ satisfaction with the coordinated examination program, including determining why some broker-dealers choose not to participate and why others terminate their participation, and of market participants’ specific concerns about rules.
For example, a survey that is representative of broker-dealers and that is administered by a neutral party could be used to determine the nature and extent of concerns about rules and examinations. Such information might also be useful to SEC and the industry in assessing the effectiveness of the current regulatory structure. Some broker-dealers and market participants believe that the concerns raised by changes in the markets warrant further examination of alternatives for revising the self-regulatory structure. In contrast, SEC has observed that the regulatory landscape is in the process of transformation and that, thus far, the current self-regulatory structure has been working adequately. Without additional SEC and industry support, major changes are not expected. We recommend that the chairman, SEC, work with the SROs and broker-dealer representatives to implement a formal process for systematically identifying and addressing material regulatory inefficiencies caused by differences in rules or rule interpretations among SROs and by multiple examinations of broker-dealers. In doing so, we recommend that SEC explore with the SROs and other market participants various methods for obtaining comprehensive feedback from market participants, such as having the SROs use a neutral party to independently collect and assess market participants’ views. We requested comments on a draft of this report from the heads, or their designees, of SEC, NASD, Nasdaq, NYSE, SIA, and three ECNs. We received written comments from SEC, NASD, Nasdaq, and SIA that are summarized below and reprinted in appendixes I through IV. In addition, we received oral comments from the general counsel of one ECN on March 18, 2002; they are also summarized below. Finally, we received technical comments from SEC, NASD, NYSE, SIA, and a second ECN that are incorporated into the report as appropriate. The third ECN did not provide comments.
The respondents generally agreed with the conclusions and recommendations in the draft report; however, three respondents expressed additional concerns. SEC officials endorsed our recommendations and indicated that the agency would work closely with NASD and NYSE to implement them. NASD, which also agreed with our recommendations, highlighted its efforts to resolve issues caused by differences in rules or rule interpretations through its rule modernization project. NASD noted that its authority is limited to addressing NASD rules and cited the importance of SEC participation to further efforts to reduce inconsistencies in rules. Nasdaq commented that the draft report generally provided an accurate characterization both of the debate about conflicts of interest between the primary SROs—NYSE and Nasdaq—and their respective markets and of some of the steps that are being taken to mitigate those conflicts. However, Nasdaq also said that the report largely overlooked a serious challenge to the integrity of the self-regulatory system—that is, the alignment of regional stock exchanges with ECNs for trading Nasdaq stocks. Nasdaq commented that these alignments have copied Nasdaq’s “competing dealer” market structure without also adopting the safeguards necessary to regulate such a market. While this issue may deserve additional attention, our report focused on concerns about potential abuses of regulatory authority by SROs that regulate members against which they compete for order flow rather than on the broader issues of competition among markets or the quality of self-regulation SROs provide. The draft report did note that SEC assesses the quality of all the SROs’ regulatory programs, which includes those of the regional exchanges. It also stated that the concerns addressed were identified through a variety of means, including discussions with Nasdaq officials and other market participants, and that they did not represent all existing concerns.
SIA agreed with the report’s conclusions and recommendations but also expressed concern that SROs often file rule changes with SEC without prior public notice or opportunity for comment. As a result, affected firms learn of proposed rule changes only when the rules are published for comment in the Federal Register. SIA expressed a similar concern about rule interpretations or clarifications that inadvertently impose new substantive obligations on members, noting that SROs also issue these changes without any public notice or opportunity for comment. Accordingly, SIA suggested that market participants be engaged at the outset of the regulatory dialogue in order to produce more balanced, resource-efficient regulation. We recognize that the need for public comment must be balanced against the need for SROs to expeditiously implement rules that can affect their competitiveness and that SEC and the industry have been attempting to balance these sometimes conflicting demands. To the extent that the timing of the public comment process is a factor causing differences in rules and their interpretations, this issue could be explored as part of SEC’s and the industry’s efforts to implement our recommendations. The ECN that provided oral comments on the draft report focused on concerns about conflicts of interest in the self-regulatory structure as SROs increasingly compete with the members they regulate. The ECN commented that the report did not capture the full extent of the “dysfunction” and competitive conflict in the current self-regulatory structure, emphasizing its concern that ECNs had no viable alternative to being regulated by a competitor. The final report includes some additional information the ECNs provided in response to the draft that further illustrates the nature of their concerns. 
To review how SEC, NASD, and NYSE are addressing concerns about (1) the impact of increased competition, including demutualization, on the ability of SROs to effectively regulate members with which they compete and (2) possible regulatory inefficiencies associated with broker-dealer membership in multiple SROs, we reviewed relevant securities laws and SRO rules, SEC concept releases and studies, SEC and SRO proposed rule changes, an NASDR rule modernization notice, industry and academic studies and research papers, and articles in academic and industry publications. We also reviewed comment letters received on releases and proposals published in the Federal Register. In addition, we interviewed officials of two federal agencies (the Commodity Futures Trading Commission and SEC); three SROs (NASD (including Nasdaq and NASDR), the National Futures Association, and NYSE); three ECNs; the Arizona Stock Exchange; two industry associations (SIA and the Investment Company Institute); three investment companies that manage mutual funds or pension funds; eight registered broker-dealers (in addition to the three ECNs); and two industry experts. We also identified the concerns that are addressed in the report through these document reviews and interviews. As a result, the concerns identified do not necessarily represent all those that exist. Our review focused on the two largest SROs in the equities markets—NASD and NYSE—because concerns related to the dual role of SROs as market operators and regulators applied primarily to these SROs. They were also the SROs that were the subject of concerns about the efficiency of SRO rules and examinations affecting members that belong to multiple SROs. Our review focused primarily on the securities markets because the issues that have arisen in these markets have not yet surfaced to the same extent in other markets. 
To describe alternative approaches that some securities market participants have discussed as a means of addressing concerns about the current self-regulatory structure, we reviewed industry and academic studies and research papers, articles in academic or industry publications, and congressional hearing records. We discussed the alternatives identified with the officials cited above. We did our work in Chicago, IL; New York, NY; and Washington, D.C., between October 2000 and March 2002 in accordance with generally accepted government auditing standards. We will send copies of this report to other interested congressional committees. We will also send copies to the chairman of SEC, chairmen and chief executive officers of NASD and Nasdaq, president of NASDR, chairman and chief executive officer of NYSE, chairman and president of SIA, and the three ECNs. Copies will be made available to others upon request. For any questions regarding this report, please contact me at (202) 512-8678, hillmanr@gao.gov, or Cecile Trop, Assistant Director, at (312) 220-7705, tropc@gao.gov. Key contributors include Roger Kolar, Melvin Thomas, Sindy Udell, and Emily Chalmers.

In the securities markets, competition among self-regulatory organizations (SRO) and their members for customer orders has heightened concerns about conflicts of interest in their roles as both market operators and regulators. Nasdaq--the market run by the National Association of Securities Dealers (NASD)--has been in competition with NASD members that run electronic communications networks. For years, the New York Stock Exchange (NYSE) has faced competition from members that trade NYSE-listed securities off of the exchange. Greater competition has generated concern that an SRO might abuse its regulatory authority--for example, by imposing rules or disciplinary actions that are unfair to the competitors it regulates.
Some broker-dealers subject to the jurisdiction of multiple SROs also are concerned that differences among SRO rules and rule interpretations have caused inefficiencies in the use of broker-dealers' compliance resources. No formal process exists, however, for addressing rule differences that might cause material inefficiencies in the regulatory process. The law does not require SRO rules to be the same, and many differences exist for legitimate business reasons according to regulators. Broker-dealers with multiple SRO memberships said that examinations by multiple SROs were unnecessarily burdensome. Securities market participants have discussed alternatives that would address concerns about conflicts of interest and inefficiencies in the current self-regulatory structure. Securities and Exchange Commission officials said that they had no plans to change the current structure, preferring to let the industry reach a consensus on the need for appropriate change. |
A core function of privacy officers is to ensure that their agencies are in compliance with federal laws. The major requirements for the protection of personal privacy by federal agencies come from two laws, the Privacy Act of 1974 and the E-Government Act of 2002. The Federal Information Security Management Act of 2002 (FISMA) also addresses the protection of personal information in the context of securing federal agency information and information systems. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The act describes a “record” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public by a “system-of-records notice”: that is, a notice in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended “routine” uses of the data, and procedures that individuals can use to review and correct personal information. Among other provisions, the act also requires agencies to define, and limit themselves to, specific predefined purposes. For example, the act requires that to the greatest extent practicable, personal information should be collected directly from the subject individual when it may affect an individual’s rights or benefits under a federal program. The provisions of the Privacy Act are largely based on a set of principles for protecting the privacy and security of personal information, known as the Fair Information Practices, which were first proposed in 1973 by a U.S.
government advisory committee; these principles were intended to address what the committee termed a poor level of protection afforded to privacy under contemporary law. Since that time, the Fair Information Practices have been widely adopted as a standard benchmark for evaluating the adequacy of privacy protections. Attachment 2 contains a summary of the widely used version of the Fair Information Practices adopted by the Organization for Economic Cooperation and Development in 1980. The E-Government Act of 2002 strives to enhance protection for personal information in government information systems or information collections by requiring that agencies conduct privacy impact assessments (PIA). A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. More specifically, according to Office of Management and Budget (OMB) guidance, a PIA is an analysis of how information is handled. Specifically, a PIA is to (1) ensure that handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. Agencies must conduct PIAs (1) before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form; or (2) before initiating any new data collections involving personal information that will be collected, maintained, or disseminated using information technology if the same questions are asked of 10 or more people.
To the extent that PIAs are made publicly available, they provide explanations to the public about such things as the information that will be collected, why it is being collected, how it will be used, and how the system and data will be maintained and protected. FISMA also addresses the protection of personal information. FISMA defines federal requirements for securing information and information systems that support federal agency operations and assets; it requires agencies to develop agencywide information security programs that extend to contractors and other providers of federal data and systems. Under FISMA, information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction, including controls necessary to preserve authorized restrictions on access and disclosure to protect personal privacy, among other things. OMB is tasked with providing guidance to agencies on how to implement the provisions of the Privacy Act and the E-Government Act and has done so, beginning with guidance on the Privacy Act, issued in 1975. The guidance provides explanations for the various provisions of the law as well as detailed instructions for how to comply. OMB’s guidance on implementing the privacy provisions of the E-Government Act of 2002 identifies circumstances under which agencies must conduct PIAs and explains how to conduct them. OMB has also issued guidance on implementing the provisions of FISMA. While many agencies have had officials designated as focal points for privacy-related matters for some time, these positions have recently gained greater prominence at a number of agencies. A long-standing requirement has been in place for agency chief information officers to be responsible for implementing and enforcing privacy policies, procedures, standards, and guidelines, and for compliance with the Privacy Act.
In 2004, we reported that of the 27 major agency chief information officers, 17 were responsible for privacy and 10 were not. In those 10 agencies, privacy was most often the responsibility of the Office of General Counsel and/or various offices focusing on compliance with the Freedom of Information Act and the Privacy Act. Steps have been taken recently to highlight the importance of privacy officers in federal agencies. For example, the Transportation, Treasury, Independent Agencies, and General Government Appropriations Act of 2005 required each agency covered by the act to have a chief privacy officer responsible for, among other things, “assuring that the use of technologies sustain, and do not erode, privacy protections relating to the use, collection, and disclosure of information in identifiable form.” Subsequently, in February 2005, OMB issued a memorandum to federal agencies requiring them to designate a senior official with overall agencywide responsibility for information privacy issues. This senior official was to have overall responsibility and accountability for ensuring the agency’s implementation of information privacy protections and play a central policy-making role in the agency’s development and evaluation of policy proposals relating to the agency’s collection, use, sharing, and disclosure of personal information. Prior to the OMB guidance, several agencies had already designated privacy officials at higher levels. The Internal Revenue Service had been one of the first, establishing its privacy advocate in 1993. In 2001, the Postal Service established a Chief Privacy Officer. More recently, as you know, Section 222 of the Homeland Security Act of 2002 had created the first statutorily required senior privacy official at any federal agency. 
This law mandated the appointment of a senior official at DHS to assume primary responsibility for privacy policy, including, among other things, assuring that the use of technologies sustains, and does not erode, privacy protections relating to the use, collection, and disclosure of personal information. Since being established, the DHS Privacy Office created a Data Privacy and Integrity Advisory Committee, made up of experts from the private and non-profit sectors and the academic community, to advise it on issues within DHS that affect individual privacy, as well as data integrity, interoperability, and other privacy-related issues. Through the Intelligence Reform Act in 2004, Congress expressed more broadly the sense that agencies with law enforcement or anti-terrorism functions should have a privacy and civil liberties officer. In keeping with that, Justice recently announced the appointment of a Chief Privacy and Civil Liberties Officer responsible for reviewing and overseeing the department’s privacy operations and complying with privacy laws. Justice has also announced plans to establish an internal Privacy and Civil Liberties Board made up of senior Justice officials to assist in ensuring that the department’s activities are carried out in a way that fully protects the privacy and civil liberties of Americans. The elevation of privacy officers at federal agencies reflects the growing demands that these individuals face in addressing privacy challenges on a day-to-day basis. Among these challenges, several that are prominent include (1) complying with the Privacy Act and the E-Government Act of 2002, (2) ensuring that data mining efforts do not compromise privacy protections, (3) controlling the collection and use of personal information obtained from commercial sources, and (4) addressing concerns about radio frequency identification technology.
Although it has been on the books for more than 30 years, the Privacy Act of 1974 continues to pose challenges for federal agencies. In 2003, we reported that agencies generally did well with certain aspects of the Privacy Act’s requirements—such as issuing system-of-records notices when required—but did less well at other requirements, such as ensuring that information is complete, accurate, relevant, and timely before it is disclosed to a nonfederal organization. In discussing this uneven compliance, agency officials reported the need for additional OMB leadership and guidance to assist in difficult implementation issues in a rapidly changing environment. For example, officials had questions about the act’s applicability to electronic records. Additional issues included the low agency priority given to implementing the act and insufficient employee training on the act. These are all issues that chief privacy officers could be in a position to address. For example, working in concert with officials from OMB and other agencies, they are in a position to identify ambiguities in guidance and provide clarifications about the applicability of the Privacy Act. Further, the establishment of a chief privacy officer position and its relative seniority within an agency’s organizational structure could indicate that an agency places priority on implementing the act. Finally, a chief privacy officer could also serve as a champion for privacy awareness and education across an agency. The E-Government Act’s requirement that agencies conduct PIAs is relatively recent, and we have not yet made a comprehensive assessment of agencies’ implementation of this important provision. However, our previous work has highlighted challenges with respect to conduct of these assessments for certain applications. 
For example, in our work on federal agency use of information resellers, we found that few agency components reported developing PIAs for their systems or programs that make use of information reseller data. These agencies often did not conduct PIAs because officials did not believe they were required. Current OMB guidance on conducting PIAs is not always clear about when they should be conducted. We concluded that until PIAs are conducted more thoroughly and consistently, the public is likely to remain incompletely informed about the purposes and uses for the information agencies obtain from resellers. We recommended that OMB revise its guidance to clarify the applicability of the E-Gov Act’s PIA requirement (as well as Privacy Act requirements) to the use of personal information from resellers. Compliance with OMB’s PIA guidance was also an issue in our review of selected data mining efforts at federal agencies. In that review, although three of the five data mining efforts we assessed had conducted PIAs, none of these assessments fully complied with OMB guidance. Complete assessments are an important tool for agencies to identify areas of noncompliance with federal privacy laws, evaluate risks arising from electronic collection and maintenance of information about individuals, and evaluate protections or alternative processes needed to mitigate the risks identified. Agencies that do not take all the steps required to protect the privacy of personal information limit the ability of individuals to participate in decisions that affect them, as required by law, and risk the improper exposure or alteration of personal information. We recommended that the agencies responsible for the data mining efforts complete or revise PIAs as needed and make them available to the public. The DHS Privacy Office recently issued detailed guidance on conducting PIAs that may be helpful to departmental components as they develop and implement systems that involve personal information.
The guidance notes that PIAs can be one of the most important instruments in establishing trust between the department and the public. As agencies develop or make changes to existing systems that collect personally identifiable information, it will continue to be critical for privacy officers to monitor agency activities and help ensure that PIAs are properly conducted so that their benefits can be realized. Many concerns have been raised about the potential for data mining programs at federal agencies to compromise personal privacy. In our May 2004 report on federal data mining efforts, we defined data mining as the application of database technology and techniques—such as statistical analysis and modeling—to uncover hidden patterns and subtle relationships in data and to infer rules that allow for the prediction of future results. We based this definition on the most commonly used terms found in a survey of the technical literature. As we noted in our report, mining government and private databases containing personal information raises a range of privacy concerns. In the government, data mining was initially used to detect financial fraud and abuse. However, its use has greatly expanded. Among other purposes, data mining has been used increasingly as a tool to help detect terrorist threats through the collection and analysis of public and private sector data. Through data mining, agencies can quickly and efficiently obtain information on individuals or groups by exploiting large databases containing personal information aggregated from public and private records. Information can be developed about a specific individual or a group of individuals whose behavior or characteristics fit a specific pattern. The ease with which organizations can use automated systems to gather and analyze large amounts of previously isolated information raises concerns about the impact on personal privacy.
Before data aggregation and data mining came into use, personal information contained in paper records stored at widely dispersed locations, such as courthouses or other government offices, was relatively difficult to gather and analyze. In August 2005, we reported on five different data mining efforts at selected federal agencies, noting that although the agencies responsible for these data mining efforts took many of the steps needed to protect the privacy and security of personal information used in the efforts, none followed all key procedures. Most of the agencies provided a general public notice about the collection and use of the personal information used in their data mining efforts. However, fewer followed other required steps, such as notifying individuals about the intended uses of their personal information when it was collected or ensuring the security and accuracy of the information used in their data mining efforts. In addition, as I previously mentioned, although three of the five agencies completed privacy impact assessments of their data mining efforts, none fully complied with OMB guidance. We made recommendations to the agencies responsible for the five data mining efforts to ensure that their efforts included adequate privacy and security protections. In March 2004, an advisory committee chartered by the Department of Defense issued a comprehensive report on privacy concerns regarding data mining in the fight against terrorism. The report made numerous recommendations to better ensure that privacy requirements are clear and stressed that proper oversight be in place when agencies engage in data mining that could include personal information. Agency privacy offices can provide a degree of internal oversight to help ensure that privacy is fully addressed in agency data mining activities. 
Recent security breaches at large information resellers, such as ChoicePoint and LexisNexis, have highlighted the extent to which such companies collect and disseminate personal information. Information resellers are companies that collect information, including personal information about consumers, from a wide variety of sources for the purpose of reselling such information to their customers, which include both private-sector businesses and government agencies. Before advanced computerized techniques made aggregating and disseminating such information relatively easy, much personal information was less accessible, being stored in paper-based public records at courthouses and other government offices or in the files of nonpublic businesses. However, information resellers have now amassed extensive amounts of personal information about large numbers of Americans, and federal agencies access this information for a variety of reasons. A major task confronting federal agencies, especially those engaged in antiterrorism tasks, has been to ensure that information obtained from resellers is being appropriately used and protected. To this end, in September 2005, the DHS Privacy Office held a public workshop to examine the policy, legal, and technology issues associated with the government’s use of reseller data for homeland security. Participants provided suggestions on how the government can ensure that privacy is protected while enabling the agencies to analyze reseller data. We recently testified before this subcommittee on critical issues surrounding the federal government’s acquisition and use of personal information from information resellers. In our review of the acquisition of personal information from resellers by DHS, Justice, the Department of State, and the Social Security Administration, agency practices for handling this information did not always reflect the Fair Information Practices. 
For example, although agencies issued public notices on information collections, these did not always notify the public that information resellers were among the sources to be used, a practice inconsistent with the principle that individuals should be informed about privacy policies and the collection of information. And again, a contributing factor was ambiguities in guidance from OMB regarding the applicability of privacy requirements in this situation. As I mentioned previously, we recommended that OMB revise its guidance to clarify the applicability of governing laws—both the Privacy Act and the E-Gov Act—to the use of personal information from resellers. In July 2005, we reported on shortcomings at DHS’s Transportation Security Administration (TSA) in connection with its test of the use of reseller data for the Secure Flight airline passenger screening program. TSA did not fully disclose to the public its use of personal information in its fall 2004 privacy notices, as required by the Privacy Act. In particular, the public was not made fully aware of, nor had the opportunity to comment on, TSA’s use of personal information drawn from commercial sources to test aspects of the Secure Flight program. In September 2004 and November 2004, TSA issued privacy notices in the Federal Register that included descriptions of how such information would be used. However, these notices did not fully inform the public before testing began about the procedures that TSA and its contractors would follow for collecting, using, and storing commercial data. In addition, the scope of the data used during commercial data testing was not fully disclosed in the notices. Specifically, a TSA contractor, acting on behalf of the agency, collected more than 100 million commercial data records containing personal information such as name, date of birth, and telephone number without informing the public. 
As a result of TSA’s actions, the public did not receive the full protections of the Privacy Act. In its comments on our findings, DHS stated that it recognized the merits of the issues we raised, and that TSA acted immediately to address them. In our report on information resellers, we recommended that the Director, OMB, revise privacy guidance to clarify the applicability of requirements for public notices and privacy impact assessments to agency use of personal information from resellers and direct agencies to review their uses of such information to ensure it is explicitly referenced in privacy notices and assessments. Further, we recommended that agencies develop specific policies for the use of personal information from resellers. Until privacy requirements are better defined and broadly understood, agency privacy officers are likely to continue to face challenges in helping ensure that their agencies are providing appropriate privacy protections. Specific issues about the design and content of identity cards also raise broader privacy concerns associated with the adoption of new technologies such as radio frequency identification (RFID). RFID is an automated data-capture technology that can be used to electronically identify, track, and store information contained on a tag. The tag can be attached to or embedded in the object to be identified, such as a product, case, or pallet. RFID technology provides identification and tracking capabilities by using wireless communication to transmit data. In May 2005, we reported that major initiatives at federal agencies that use or propose to use the technology included physical access controls and tracking assets, documents, or materials. For example, DHS was using RFID to track and identify assets, weapons, and baggage on flights. The Department of Defense was also using it to track shipments. In our May 2005 report we identified several privacy issues related to both commercial and federal use of RFID technology. 
Among these privacy issues are notifying individuals of the existence or use of the technology; tracking an individual’s movements; profiling an individual’s habits, tastes, or predilections; and allowing for secondary uses of information. The extent and nature of the privacy issues depends on the specific proposed use. For example, using the technology for generic inventory control would not likely generate substantial privacy concerns. However, the use of RFIDs by the federal government to track the movement of individuals traveling within the United States could generate concern by the affected parties. A number of specific privacy issues can arise from RFID use. For example, individuals may not be aware that the technology is being used and that it could be embedded in items they are carrying and thus used to track them. Three agencies indicated to us that employing the technology would allow for the tracking of employees’ movements. Tracking is real-time or near-real-time surveillance in which a person’s movements are followed through RFID scanning. Media reports have described concerns about ways in which anonymity is likely to be undermined by surveillance. Further, public surveys have identified a distinct unease with the potential ability of the federal government to monitor individuals’ movements and transactions. Like tracking, profiling—the reconstruction of a person’s movements or transactions over a specific period of time, usually to ascertain something about the individual’s habits, tastes, or predilections—could also be undertaken through the use of RFID technology. Because tags can contain unique identifiers, once a tagged item is associated with a particular individual, personally identifiable information can be obtained and then aggregated to develop a profile of the individual. Both tracking and profiling can compromise an individual’s privacy and anonymity. 
Concerns also have been raised that organizations could develop secondary uses for the information gleaned through RFID technology; this has been referred to as “mission creep” or “function creep.” The history of the Social Security number, for example, gives ample evidence of how an identifier developed for one specific use has become a mainstay of identification for many other purposes, governmental and nongovernmental. Secondary uses of the Social Security number have been a matter not of technical controls but rather of changing policy and administrative priorities. As agencies take advantage of the benefits of RFID technology and implement it more widely, it will be critical for privacy officers to help ensure that a full consideration is made of potential privacy issues, both short-term and long-term, as the technology is implemented. In summary, privacy officers at federal agencies face a range of challenges in working to ensure that individual privacy is protected, and today I have discussed several of them. It is clear that advances in technology can present both opportunities for greater agency efficiency and effectiveness as well as the danger, if unaddressed, of eroding important privacy protections. Technological advances also mean there is a need to keep governmentwide privacy guidance up-to-date, and agency privacy officers will depend on OMB for leadership in this area. Even without a consideration of technological evolution, privacy officers need to be vigilant to ensure that agency officials are continually mindful of their privacy responsibilities. Fortunately, tools are available—including the requirements for PIAs and Privacy Act public notices—that can help ensure that the right operational decisions are made about the acquisition, use, and storage of personal information. By using these tools effectively, agencies have the opportunity to gain greater public confidence that their actions are in the best interests of all Americans. Mr.
Chairman, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittee may have. If you have any questions concerning this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6240, or koontzl@gao.gov. Other individuals who made key contributions include Barbara Collier, John de Ferrari, David Plocher, and Jamie Pressman. Personal Information: Agencies and Resellers Vary in Providing Privacy Protections. GAO-06-609T. Washington, D.C.: April 4, 2006. Personal Information: Agency and Reseller Adherence to Key Privacy Principles. GAO-06-421. Washington, D.C.: April 4, 2006. Data Mining: Agencies Have Taken Key Steps to Protect Privacy in Selected Efforts, but Significant Compliance Issues Remain. GAO-05-866. Washington, D.C.: August 15, 2005. Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information during Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005. Identity Theft: Some Outreach Efforts to Promote Awareness of New Consumer Rights Are Under Way. GAO-05-710. Washington, D.C.: June 30, 2005. Information Security: Radio Frequency Identification Technology in the Federal Government. GAO-05-551. Washington, D.C.: May 27, 2005. Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005. Electronic Government: Federal Agencies Have Made Progress Implementing the E-Government Act of 2002. GAO-05-12. Washington, D.C.: December 10, 2004. Social Security Numbers: Governments Could Do More to Reduce Display in Public Records and on Identity Cards. GAO-05-59. Washington, D.C.: November 9, 2004. Federal Chief Information Officers: Responsibilities, Reporting Relationships, Tenure, and Challenges. GAO-04-823.
Washington, D.C.: July 21, 2004. Data Mining: Federal Efforts Cover a Wide Range of Uses. GAO-04-548. Washington, D.C.: May 4, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 12, 2004. Privacy Act: OMB Leadership Needed to Improve Agency Compliance. GAO-03-304. Washington, D.C.: June 30, 2003. Data Mining: Results and Challenges for Government Programs, Audits, and Investigations. GAO-03-591T. Washington, D.C.: March 25, 2003. Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: November 15, 2002. Information Management: Selected Agencies’ Handling of Personal Information. GAO-02-1058. Washington, D.C.: September 30, 2002. Identity Theft: Greater Awareness and Use of Existing Data Are Needed. GAO-02-766. Washington, D.C.: June 28, 2002. Social Security Numbers: Government Benefits from SSN Use but Could Provide Better Safeguards. GAO-02-352. Washington, D.C.: May 31, 2002. The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Ways to strike that balance vary among countries and according to the type of information under consideration. The version of the Fair Information Practices shown in table 1 was issued by the Organization for Economic Cooperation and Development (OECD) in 1980 and has been widely adopted.

Advances in information technology make it easier than ever for the federal government to obtain and process personal information about citizens and residents in many ways and for many purposes. To ensure that the privacy rights of individuals are respected, this information must be properly protected in accordance with current law, particularly the Privacy Act and the E-Government Act of 2002.
These laws prescribe specific activities that agencies must perform to protect privacy, and the Office of Management and Budget (OMB) has developed guidance on how and in what circumstances agencies are to carry out these activities. Many agencies designate officials as focal points for privacy-related matters, and increasingly, many have created senior positions, such as chief privacy officer, to assume primary responsibility for privacy policy, as well as dedicated privacy offices. GAO was asked to testify on key challenges facing agency privacy officers. To address this issue, GAO identified and summarized issues raised in its previous reports on privacy. Agencies and their privacy officers face growing demands in addressing privacy challenges. For example, as GAO reported in 2003, agency compliance with Privacy Act requirements was uneven, owing to ambiguities in guidance, lack of awareness, and lack of priority. While agencies generally did well with certain aspects of the Privacy Act's requirements--such as issuing notices concerning certain systems containing collections of personal information--they did less well at others, such as ensuring that information is complete, accurate, relevant, and timely before it is disclosed to a nonfederal organization. In addition, the E-Gov Act requires that agencies perform privacy impact assessments (PIA) on such information collections. Such assessments are important to ensure, among other things, that information is handled in a way that conforms to privacy requirements. However, in work on commercial data resellers, GAO determined in 2006 that many agencies did not perform PIAs on systems that used reseller information, believing that these were not required. In addition, in public notices on these systems, agencies did not always reveal that information resellers were among the sources to be used. 
To address such challenges, chief privacy officers can work with officials from OMB and other agencies to identify ambiguities and provide clarifications about the applicability of privacy provisions, such as in situations involving the use of reseller information. In addition, as senior officials, they can increase agency awareness and raise the priority of privacy issues. Agencies and privacy officers will also face the challenge of ensuring that privacy protections are not compromised by advances in technology. For example, federal agency use of data mining--the analysis of large amounts of data to uncover hidden patterns and relationships--was initially aimed at detecting financial fraud and abuse. Increasingly, however, the use of this tool has expanded to include purposes such as detecting terrorist threats. GAO found in 2005 that agencies employing data mining took many steps needed to protect privacy (such as issuing public notices), but none followed all key procedures (such as including in these notices the intended uses of personal information). Another new technology development presenting privacy challenges is radio frequency identification (RFID), which uses wireless communication to transmit data and thus electronically identify, track, and store information on tags attached to or embedded in objects. GAO reported in 2005 that federal agencies use or propose to use the technology for physical access controls and tracking assets, documents, or materials. For example, the Department of Defense was using RFID to track shipments. Although such applications are not likely to generate privacy concerns, others could, such as the use of RFIDs by the federal government to track the movement of individuals traveling within the United States. Agency privacy offices can serve as a key mechanism for ensuring that privacy is fully addressed in agency approaches to new technologies such as data mining and RFID.
Mr. Chairman and Members of the Committee: We are pleased to be here today to discuss your efforts to modernize the federal bank oversight structure. Recent financial market developments have clearly demonstrated that our existing regulatory structure has not kept pace with the dramatic and rapid changes that are occurring in domestic and global financial markets. Banking, securities, futures, and insurance are no longer separate and distinct industries that can be well regulated by the existing patchwork quilt of federal and state agencies. We believe that a critical first step in modernizing oversight is to begin consolidating the activities of the four federal agencies currently responsible for the regulation and supervision of almost 12,000 federally insured banks and thrifts. We recognize, however, that restructuring involves difficult and long-standing issues, and commend the efforts of you and your committee to address needed reforms. Our work over the past few years has shown that, despite good faith efforts to coordinate their policies and procedures, the four federal banking regulators have often differed on how laws should be interpreted, implemented, and enforced; how banks should be examined; and how the federal government should respond to troubled institutions. Bankers also contend that multiple examinations and reporting requirements add to their regulatory burden and contribute to their competitive disadvantage with regard to other financial institutions, both foreign and domestic, that are not subject to the same regulatory regime. Furthermore, U.S. bank holding companies are examined by the Federal Reserve, while their subsidiaries can be examined separately by several other regulatory authorities. Thus, there is often overlap and no clear accountability for the operations of U.S. banking organizations as a whole.
We identified four fundamental principles based on all of our work that we believe Congress could use in considering the best approach for modernizing our current regulatory structure. Specifically, we believe that structural reform should provide for more consolidated and comprehensive oversight of companies owning federally insured banks and thrifts, with coordinated functional regulation and supervision of individual components; independence from undue political pressure, balanced by appropriate accountability and adequate congressional oversight; consistent rules, consistently applied for similar activities; and finally, enhanced efficiency and reduced regulatory burden. Over the past 2 years, we have completed studies on the structure and operation of bank oversight in Canada, France, Germany, and the United Kingdom (U.K.), and are in the process of completing a fifth report on bank oversight in Japan. Each of the five foreign oversight structures we studied reflects a unique history, culture, and banking industry, and as a result, no two of the five are identical. Furthermore, all of the countries we reviewed had more concentrated banking industries than does the United States, and all but Japan have authorized their banks to conduct broad securities and insurance activities in some manner. Although we did not attempt to assess the effectiveness of bank oversight in these countries, we found that each reflected these four principles in some way, and with few, if any, exceptions, each had fewer national agencies involved with bank regulation and supervision than is the case in the United States; had substantial oversight roles for their central banks, and ensured that their ministries of finance were, at the least, kept informed of important industry and supervisory developments; had relatively narrow roles for their deposit insurers; and lastly, incorporated mechanisms and procedures to ensure consistent, consolidated oversight and limit regulatory burden.
In the five countries we studied, banking organizations typically were subject to more consolidated and comprehensive oversight, with an oversight entity being legally responsible and accountable for the entire banking organization, including its subsidiaries. If securities, insurance, or other nontraditional banking activities were permissible in bank subsidiaries, functional regulation of those subsidiaries was generally provided by the appropriate supervisory authority. Bank supervisors generally relied on those functional regulators for information, but remained responsible for ascertaining the safety and soundness of the consolidated banking organization as a whole. The number of national bank oversight entities in the countries we studied ranged from one in the U.K. to three in France. In all five countries, however, no more than two national agencies were ever significantly involved in any one major aspect of bank oversight, such as chartering, regulation, supervision, or enforcement. Commercial bank chartering, for example, was the direct responsibility of only one entity in each country. In those countries where two entities were involved in the same aspect of oversight, the division of oversight responsibilities was generally based on which entity had the required expertise. In Germany, for example, many oversight responsibilities were shared between the central bank and the federal bank supervisor. Yet, each of the two had a relatively well-defined role, agreed upon by both entities, based on their relative strengths and certain legal requirements. For example, the central bank, with more staff and a broader geographic presence than the federal bank supervisor, collected and analyzed bank data and had responsibility for most day-to-day supervision. The federal bank supervisor, on the other hand, had more responsibilities based in law, such as those of issuing banking regulations and taking formal enforcement actions.
In several of the countries we studied, the central bank was one of the two principal oversight agencies. And while the Bank of Canada had no direct responsibility for bank oversight, it was included on the deposit insurance board and two advisory committees, which gave it access to information about the banking industry and some influence in supervisory matters. In each of the five countries, the national government recognized that it had the ultimate responsibility to maintain public confidence and stability in the financial system. Thus, each of the bank oversight structures that we reviewed also provided the Ministry of Finance, or its equivalent, with some degree of influence over bank oversight and access to information. In France, for example, the Ministry of Economic Affairs was represented on each of three bank oversight committees and chaired one of them. In Germany and Canada, the principal bank supervisor reported to the Minister of Finance. Similarly, the Bank of England reported to the Chancellor of the Exchequer. And in Japan, the Minister of Finance was the principal banking supervisor. While each country included its central bank and finance ministry in some capacity in its oversight structure, most also recognized the need to guard against undue political influence by incorporating checks and balances unique to each country. In France, for example, a three-committee bank oversight structure was designed expressly to ensure that no single entity could dominate or dictate decisionmaking. Likewise, Canada's oversight structure had multiple committees designed to share information and responsibilities among all of the oversight entities. And in Germany, the influence of a strongly independent central bank helped balance decisionmaking. The deposit insurers in these countries generally played narrower roles than the other oversight entities, with the insurer frequently only involved when its funds were needed to help finance resolutions.
Even the Canadian deposit insurer, which is similar to the Federal Deposit Insurance Corporation (FDIC) in many ways, relied principally on the primary banking supervisor for examination information to safeguard its insurance funds. It did, however, sometimes use its backup oversight authority—including requesting special examinations—to obtain additional information and insight into the safety and soundness of high-risk institutions. Most of the foreign structures with multiple oversight entities incorporated mechanisms and procedures designed to ensure consistent oversight and limit regulatory burden. As a result, banking institutions that were conducting the same lines of business were generally subject to a single set of rules, standards, or guidelines. Coordination mechanisms included having oversight committees or commissions with interlocking boards, shared staff, or mandates to share information. In France, for example, central bank employees staffed all three committees charged with oversight responsibilities for chartering, rule-making, and supervision. And the central bank and Ministry of Economic Affairs also had a seat on each of the three committees. In Canada, the federal bank supervisor, central bank, and finance ministry each had seats on the Canada Deposit Insurance Corporation's board of directors and, together with the deposit insurer, participated on various advisory committees. In Germany, the central bank and federal bank supervisor used the same data collection instruments and were legally required to share information that could be significant in the performance of their duties. Several of the bank supervisors also relied on the work of banks' external auditors. These supervisors recognized that the external auditors' objectives for reviewing a bank's activities could differ from those of a supervisor, and that a degree of conflict could exist between the external auditors' responsibilities to report to both their bank clients and to the bank supervisory authorities.
However, they believed that their authority over auditors' engagements was sufficient to ensure that the external auditors properly discharged their responsibilities and openly communicated with both their bank clients and the oversight authorities. Unlike in the United States, bank oversight in the countries we studied also avoided a potential area of added burden by focusing almost exclusively on ensuring the safety and soundness of banking institutions and the stability of financial markets, and not on consumer protection or social policy issues. Rather than using the bank oversight function, the national governments in these countries used other mechanisms to promote social goals. Specifically, some of the policy mechanisms used to encourage credit and other services in low- and moderate-income areas in these countries included the chartering of specialized financial institutions and direct government subsidies for programs to benefit such areas. In France, for example, specialized financial institutions provided financing for affordable housing. In Canada, France, and the U.K., the banking industries, not regulators, developed voluntary guidelines related to consumer and small business lending. Only in France were bank supervisors responsible for enforcing compliance with these kinds of guidelines and best practices. 1. Consolidate federal bank oversight: One option would be to merge the oversight functions of some of the existing agencies, possibly including certain supervisory functions of the Federal Reserve, into a new federal banking agency or commission. Congress could provide for this new agency's independence in a variety of ways, including making it organizationally independent like FDIC or the Federal Reserve. This new independent agency, together with the Federal Reserve, could be assigned responsibility for comprehensive, consolidated supervision of those banking organizations under their purview, with appropriate functional supervision of individual components. 2.
Include both the Federal Reserve and the Treasury Department in bank oversight: To carry out its primary responsibilities effectively, the Federal Reserve should, in some capacity, have direct access to supervisory information, as well as some ability to influence supervisory decisionmaking. The foreign oversight structures we reviewed showed that this could be accomplished by having the Federal Reserve be either a direct or indirect participant in bank oversight. For example, the Federal Reserve could maintain its current direct oversight responsibilities for state chartered member banks or be given new responsibility for some segment of the banking industry, such as the largest banking organizations. Alternatively, the Federal Reserve could be given major roles on the board of a new consolidated banking agency and on FDIC's board of directors. Under this alternative, Federal Reserve staff could help support some of the examination or other activities of a consolidated banking agency to better ensure that the Federal Reserve receives first-hand information about, and access to, the banking industry. Even if the Federal Reserve maintains its current direct role in bank supervision, Congress may wish to consider having the Federal Reserve replace OTS on the FDIC board of directors if Congress decides to merge OTS with another agency. 3. Preserve FDIC's ability to protect the insurance funds: FDIC should retain independent examination authority consistent with its responsibility for protecting the deposit insurance funds. Such authority should require coordination with other responsible regulators, but should also allow FDIC to go into any problem institution on its own without the prior approval of any other regulatory agency. FDIC also needs backup enforcement power and the capability to assess the quality of bank and thrift examinations generally. 4.
Incorporate mechanisms to help ensure consistent oversight and reduce regulatory burden: Just reducing the number of federal bank oversight agencies from the current four would, of course, help improve the consistency of oversight and reduce regulatory burden. Should Congress decide to continue to have more than one primary federal bank regulator, we believe that mechanisms should be incorporated to enhance their cooperation and coordination and reduce burden. Such mechanisms could include expanding the current mandate of the Federal Financial Institutions Examination Council to ensure consistency in rule-making for similar activities as well as consistency in examinations; assigning specific rule-making authority in statute to a single agency, as has been done in the past when the Federal Reserve was given statutory authority to issue rules for several consumer protection regulations that are enforced by all of the bank regulators; requiring enhanced cooperation between examiners and banks' external auditors (while we strongly support requirements for annual full-scope, on-site examinations for large banking organizations, we believe that examiners could take better advantage of the work already being done by external auditors to better plan and target their examinations); and requiring enhanced off-site monitoring to better plan and target examinations, as well as to identify and raise supervisory concerns at an earlier stage. Mr. Chairman, this concludes my statement, and I would be pleased to respond to any questions that you or other Members of the Committee may have. OCC and OTS report to Treasury. The Board of Directors of the FDIC includes the heads of OCC and OTS as well as three independent members, including the Chairman and the Vice-Chairman, who are appointed by the President and confirmed by the Senate. (Figure: structure of a U.S. banking organization and the regulator of each component, including the Federal Reserve, FDIC, state regulators, OCC, OTS, SEC, and CFTC.)
(Table: comparison of the five countries' bank oversight structures, covering: the number of national agencies authorized to issue bank regulations; the number of national agencies authorized to perform major supervisory functions; whether oversight of a banking organization was consolidated and comprehensive; whether mechanisms to ensure cooperation and coordination among regulatory bodies were built into the oversight system; whether the finance ministry was included in key decisionmaking; whether the central bank had supervisory access to and influence over the banking industry; whether the deposit insurer had supervisory access to and influence over the banking industry; whether bank supervisors relied extensively on external auditors' work or intended to increase their reliance; and whether social policy goals were a major part of banking legislation, regulations, or oversight.)
Bank Regulatory Structure: Canada (GAO/GGD-95-223, Sept. 28, 1995).
Bank Regulatory Structure: France (GAO/GGD-95-152, Aug. 31, 1995).
Bank Regulatory Structure: The United Kingdom (GAO/GGD-95-38, Dec. 29, 1994).
Bank Regulatory Structure: The Federal Republic of Germany (GAO/GGD-94-134BR, May 9, 1994).
Financial Derivatives: Actions Needed to Protect the Financial System (GAO/GGD-94-133, May 18, 1994).
Financial Regulation: Modernization of the Financial Services Regulatory System (GAO/T-GGD-95-121, Mar. 15, 1995).
Bank Regulation: Consolidation of the Regulatory Agencies (GAO/T-GGD-94-106, Mar. 4, 1994).
Bank and Thrift Regulation: FDICIA Safety and Soundness Reforms Need to Be Maintained (GAO/T-AIMD-93-5, Sept. 23, 1993).
Bank Regulation: Regulatory Impediments to Small Business Lending Should Be Removed (GAO/GGD-93-121, Sept. 7, 1993).
Bank Examination Quality: OCC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-14, Feb. 16, 1993).
Bank and Thrift Regulation: Improvements Needed in Examination Quality and Regulatory Structure (GAO/AFMD-93-15, Feb. 16, 1993).
Bank Examination Quality: FDIC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-12, Feb. 16, 1993).
Bank Examination Quality: FRB Examinations and Inspections Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-13, Feb. 16, 1993).
Banks and Thrifts: Safety and Soundness Reforms Need to Be Maintained (GAO/T-GGD-93-3, Jan. 27, 1993).
Bank Supervision: OCC's Supervision of the Bank of New England Was Not Timely or Forceful (GAO/GGD-91-128, Sept. 16, 1991).
GAO discussed efforts to modernize the U.S. federal bank oversight structure, focusing on: (1) how the U.S. oversight structure compares with five other industrialized countries; and (2) whether these countries' structures can be used to assist U.S. reform. GAO noted that: (1) U.S.
structural reform should include a more consolidated and comprehensive oversight of companies owning federally insured banks and thrifts, independence from undue political pressure, appropriate accountability and adequate congressional oversight, consistent rules, and enhanced efficiency and reduced regulatory burden; (2) the five countries have oversight structures that are diverse, banking industries that are more concentrated than in the U.S., and banks that are authorized to conduct broad securities and insurance activities; (3) in each of the five countries, there are no more than two national agencies involved in oversight operations; (4) each country's central bank was substantially involved in bank oversight, and each country ensured that its ministry of finance was informed of important industry and supervisory developments; (5) the countries' oversight structures have systems of checks and balances to guard against political pressure and maintain public trust and stability in their financial systems; (6) each country has given deposit insurers limited roles and viewed them as primarily a source of funding for bank failures; (7) the oversight structures incorporate mechanisms to ensure consistent oversight and limited regulatory burden; and (8) there are a number of ways that the U.S. can simplify its bank oversight structure.
Suspensions and debarments apply governmentwide—one agency's suspension or debarment decision precludes all other agencies from doing business with an excluded party. Suspensions and debarments may be either statutory or administrative. Statutory debarments, also referred to as declarations of ineligibility, are based on violation of law, such as statutory requirements to pay minimum wages. Administrative debarments are based on the causes specified in the FAR, including commission of offenses such as fraud, theft, bribery, or tax evasion. In 1988, the Nonprocurement Common Rule (NCR) was implemented to provide a parallel process to the FAR for suspending and debarring parties from receiving federal grants, loans, and other nonprocurement transactions. The FAR and NCR provide for reciprocity with each other—that is, any exclusion under the FAR shall be recognized under NCR, and any exclusion under NCR shall be recognized under the FAR. Exclusions of companies or individuals from federal contracts (procurements) or other federal funding such as grants (nonprocurements), as well as declarations of ineligibility, are listed in EPLS, a Web-based system maintained by GSA. EPLS also includes an archive of expired exclusions. Agencies are required to report all excluded parties by entering data directly into the database within 5 working days after the exclusion becomes effective. The FAR includes a list of the information to be included in EPLS, such as the contractor's name and address, contractor identification number, the cause of the action, the period of the exclusion, and the name of the agency taking the action. From January 1995 to November 2004, the number of exclusion actions taken each year by all agencies governmentwide has ranged from about 3,400 in 1995 to almost 7,000 in 2002, with an average of 5,700 actions taken annually (see fig. 1).
In November 2004, the number of current exclusions governmentwide totaled about 32,500, about 3,500 of which were the result of statutory debarments. Of this governmentwide total, EPLS showed that the 6 agencies we reviewed had excluded about 2,400 parties, 617 of which were the result of statutory debarments by EPA, based on violations of the Clean Water and Clean Air Acts (see fig. 2). For exclusion actions taken each year by the six selected agencies from 1995 to 2004, see appendix III. In 1987, we reported that the suspension and debarment regulations and procedures generally provided an effective tool for protecting the government against doing business with fraudulent, unethical, or nonperforming contractors. We noted, however, that there was a need for timely access to a governmentwide list of excluded parties. We also identified areas for improvement in the process and recommended amendments to the FAR. The following recommendations have been implemented: (1) that governmentwide exclusions be extended to contractors proposed for debarment; (2) that the definition of affiliate, i.e., related firms or those under common control, include a description of indicators of control, such as common management or ownership; (3) that suspended and debarred contractors also be excluded from subcontracting under government contracts; and (4) that the extent to which orders placed under certain contractual arrangements—such as multiple awards schedules, basic ordering agreements, and indefinite quantity contracts—are covered by exclusions be clarified. The FAR prescribes general policies governing the circumstances under which contractors may be excluded from federal contracting, requires agencies to establish a process for determining exclusions, and allows agencies the flexibility to supplement the FAR to implement the process. 
The supplements to the FAR and additional guidance developed by 24 agencies generally designate internal responsibilities for suspension and debarment procedures and intra-agency coordination. As an alternative to exclusion, agencies sometimes enter into administrative agreements with contractors with whom they believe there is a continuing need to do business. These agreements can encourage changes in business practices designed to promote contractor responsibility. In limited circumstances, an agency may continue to do business with excluded contractors. The FAR requires federal agencies to conduct business only with responsible contractors and prescribes overall suspension and debarment policies. A suspension may be imposed only when an agency determines that immediate action is necessary to protect the government’s interests. To initiate a suspension, an agency must have adequate evidence that the party has committed certain civil or criminal offenses or that there is another compelling cause affecting the contractor’s present responsibility. Generally, legal proceedings must begin within 12 months or the suspension terminates. To initiate a debarment, an agency must have evidence of conviction or civil judgment for certain offenses, a preponderance of evidence that the party has committed certain offenses, such as serious failure to perform to the terms of a contract, or any other cause of so serious or compelling a nature that it affects the contractor’s present responsibility. The agency debarring official is responsible for determining whether debarment is in the government’s interest, and the FAR states that the seriousness of the contractor’s actions and any remedial measures or mitigating factors should be considered. Generally, the period of debarment should not exceed 3 years. Figure 3 provides a general overview of the suspension and debarment process. 
The FAR allows agencies flexibility to supplement FAR provisions and develop guidance based on agency needs. The 24 agencies we reviewed had included suspension and debarment policies in FAR supplements; 21 had also adopted NCR; and 12 had developed additional guidance, such as directives and policy memos to implement their suspension and debarment processes (see table 1). The additional guidance generally designates responsibilities for suspension and debarment procedures and addresses intra-agency coordination. Each of the six agencies we reviewed in depth—the Air Force, Army, Navy, Defense Logistics Agency, EPA, and GSA—has included suspension and debarment policies in FAR supplements, adopted NCR, and developed guidance for implementing suspension and debarment procedures: The Defense Federal Acquisition Regulation Supplement (DFARS) designates suspension and debarment officials in the various DOD organizations—including the Air Force, Army, Navy, and Defense Logistics Agency—and a process for waiving contractor exclusions for compelling reasons. In addition, in September 1992, the Under Secretary of Defense for Acquisition issued guidance stating that (1) when appropriate, before action is taken on suspension, a contractor should be informed that DOD has extremely serious concerns with the contractor’s conduct, and the contractor should be allowed to provide information on its behalf, and (2) DOD debarring officials should coordinate fully within DOD, and in certain cases among civilian agencies, to determine the possible effects of the suspensions and debarments on other organizations as well as to receive additional information that may affect the exclusion decision. EPA’s Acquisition Regulation, a FAR supplement, designates the roles of various officials and clarifies EPA’s suspension and debarment procedures. 
An August 1993 memorandum of understanding provides specific responsibilities for EPA’s Office of Acquisition Management and Office of Grants and Debarment in the processing of suspension and debarment actions. In addition, EPA has established guidance on initiating a suspension or debarment action. EPA also included a specific section in NCR addressing EPA’s statutory disqualifications under the Clean Air and Clean Water Acts. GSA also supplemented the FAR with a regulation that designates the roles of various officials and clarifies suspension and debarment procedures. The GSA Acquisition Manual contains similar language to the FAR supplement. In addition, GSA’s Office of Inspector General Operations Manual outlines responsibilities for investigating cases, coordinating with law enforcement agencies, and making referrals to GSA’s suspension and debarment officials. In November 2002, GSA issued an internal order concerning the requirement for legal review of suspension and debarment decisions. Each of the agencies we reviewed established an organizational structure that identifies the lead office, responsibilities, and staffing to manage their suspension and debarment activities. (See app. IV for a summary of each agency’s suspension and debarment organizational structure.) Table 2 shows specific actions reported by the six agencies we reviewed during fiscal year 2004. Administrative agreements, also referred to as compliance agreements, provide an alternative to exclusion when contractors that are being considered for suspension or debarment have addressed the cause of the problem through actions such as disciplining individuals, revising internal controls, and disclosing problems to the appropriate government agency in a timely manner. Under administrative agreements, contractors agree to meet certain requirements and may continue to enter into contracts with the government. 
Agency officials said that reaching administrative agreements with contractors can serve the government's interest by improving contractor responsibility, ensuring compliance through monitoring the requirements of the agreement, and maintaining competition among contractors. Administrative agreements can be negotiated at any point in the suspension and debarment process, such as when a contractor independently acknowledges a problem, but the agencies we reviewed in depth said these agreements are most commonly negotiated as an alternative to debarment. These agreements generally follow a consistent format, emphasize corporate ethics programs, and are in effect for a period of 3 years. Table 3 summarizes the key contractor requirements included in the agreements we reviewed. While administrative agreements provide an alternative to exclusion, agencies can continue to do business with excluded contractors in limited circumstances through the use of waivers by making a determination that there is a compelling reason to award a contract to an excluded party. This determination requires a written explanation of the reason for doing business with an excluded contractor, such as an urgent need for the contractor's supplies or services, or that the contractor is the only known source. Of the six agencies we reviewed, only the Air Force and the Army reported that compelling reason waivers had been issued over the past 2 years. The Air Force reported that three waivers had been granted—in August and September 2003 and in August 2004—to continue contracting with the Boeing Company for launch services for military space equipment based on national security concerns and to mitigate program schedule and cost risks. In fiscal year 2004, the Air Force issued one waiver for sole-source reasons, and the Army issued four waivers based on urgent need.
A suspension or debarment constitutes exclusion of all divisions or other organizational elements of the contractor, unless the exclusion decision is otherwise limited. Exclusions may extend to affiliates, if named in the suspension or debarment notice and decision. Organizational entities of excluded contractors that can demonstrate independence may be allowed to receive government contracts. The information in EPLS may be insufficient to enable contracting officers to determine with confidence that a prospective contractor is not currently suspended, debarred, or proposed for debarment. Further, information on administrative agreements and compelling reason waivers is not routinely shared among agencies or captured centrally in a database such as EPLS. The Interagency Suspension and Debarment Committee (ISDC), which monitors the suspension and debarment system, provides a useful forum for sharing information among suspension and debarment officials. The FAR requires agencies to enter various information on contractors into EPLS, including contractors' and grantees' Data Universal Numbering System (DUNS) number—a unique nine-digit identification number assigned by Dun & Bradstreet, Inc. to identify unique business entities. We found, however, that while the EPLS database has a field for entering contractors' DUNS numbers, it is not a required field in the database, and the data appear to be routinely omitted from the database. For the six agencies we reviewed in depth, about 99 percent of records in the EPLS database as of November 2004 did not have DUNS contractor identification numbers. To ensure that excluded contractors do not unintentionally receive new contracts during the period of exclusion, the FAR and NCR require contracting officers and awarding officials to consult EPLS and identify any competing contractors that have been suspended or debarred.
Because EPLS lacks unique identifiers for most of the cases for the six agencies we reviewed in depth, contracting officers use the competing contractor’s name to search the system to determine whether a prospective contractor has been excluded from doing business with the federal government. However, a contractor’s name as it appears in a bid or proposal may not be the same as in EPLS. For example, the XYZ Company may submit bids or proposals using “XYZ Company” but appear as “XYZ” in EPLS. Therefore, if the contracting officer searched for an exact match, EPLS would not identify the company. Searching for partial matches would fail to identify companies that have changed their names. According to agency suspension and debarment officials, contracting officers have overlooked excluded contractors when using EPLS, due in part to not being able to match contractor names. Though agency officials could not recall specific cases, they said that this difficulty in matching names is more likely to occur in cases in which contractors have changed their names. We too had difficulty matching names using EPLS. For example, because of the various ways a contractor’s name might be entered in the database and because contractor names sometimes change over time, we could not be assured that we identified all contractors that have been excluded more than once. We also attempted to match contractors’ names in EPLS and FPDS—the database containing government contracting actions—to determine whether excluded contractors had received new contracts during a period of exclusion. Although this effort did not produce any matches, we cannot conclude with confidence that excluded contractors are not receiving new contracts because of the lack of consistency regarding contractor names both between and within the databases. This problem has been longstanding. In our 1987 report, we noted similar difficulties in matching data from the list of excluded parties with FPDS data. 
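The name-matching pitfalls described above can be sketched in a few lines of code. This is a hypothetical illustration, not the actual EPLS search logic: the normalization rules, suffix list, and similarity threshold are assumptions chosen for the example.

```python
import re
from difflib import SequenceMatcher

def normalize(name):
    """Crude normalization: uppercase, strip punctuation and common corporate suffixes."""
    name = re.sub(r"[^\w\s]", "", name.upper())
    for suffix in ("COMPANY", "CO", "INC", "CORP", "LLC"):
        name = re.sub(rf"\b{suffix}\b", "", name)
    return " ".join(name.split())

def similar(a, b, threshold=0.85):
    """Fuzzy comparison of two contractor names after normalization."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# An exact-match search misses the listed party, as in the XYZ example above:
print("XYZ Company" == "XYZ")          # False
# Fuzzy matching on normalized names catches it:
print(similar("XYZ Company", "XYZ"))   # True
```

Even fuzzy matching fails when a contractor has changed its name entirely, which is why matching on a unique identifier such as a DUNS number, where one is recorded, sidesteps name variation altogether.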
Despite our findings, the problem continues, increasing the risk that suspended or debarred contractors will be awarded new contracts during a period of exclusion. The overall reliability of reported data is also a concern. According to GSA officials, responsibility for ensuring data reliability rests with the agencies entering data into EPLS. GSA does not know, however, whether agencies have tested the reliability of their EPLS data. The absence of information on data reliability makes using the system for oversight or analysis problematic. For example, when we attempted to use EPLS to determine the average length of time of exclusions, we found many records with an indefinite termination date. In some cases, parties are listed as excluded for an indefinite period of time pending the outcome of a case. In nonprocurement cases, parties also may be excluded for an indefinite period of time. However, when a record is entered in EPLS without a termination date, the system defaults to record the termination date as indefinite. In the absence of information on data reliability, there is no way to estimate the extent to which the entries with indefinite termination dates reflect parties that had been excluded for an indefinite period of time or parties for which no termination date had been entered. The Interagency Suspension and Debarment Committee (ISDC) is responsible for coordinating policy, practices, and information sharing on various suspension and debarment issues. The ISDC serves as an interagency forum and conducts monthly meetings for federal agencies’ suspension and debarment officials. While ISDC is not a decision-making body, it develops recommendations for the Office of Management and Budget (OMB) on interagency issues, such as determining which agency should take the lead on a case when more than one agency does business with a particular contractor. 
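A simple data-quality audit illustrates the ambiguity. The record layout below is hypothetical and greatly simplified, not the actual EPLS schema, but it shows why a defaulted field undermines analysis: a termination date of "Indefinite" could mean either a genuinely open-ended exclusion or a date that was never entered.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExclusionRecord:
    # Hypothetical, simplified layout for illustration only.
    name: str
    duns: Optional[str]    # nine-digit identifier; often missing in practice
    termination_date: str  # "Indefinite" may be real or merely the default

def audit(records):
    """Count records that would undermine matching or trend analysis."""
    missing_duns = sum(1 for r in records if not r.duns)
    indefinite = sum(1 for r in records if r.termination_date == "Indefinite")
    return {"missing_duns": missing_duns, "indefinite_termination": indefinite}

records = [
    ExclusionRecord("XYZ", None, "Indefinite"),
    ExclusionRecord("Acme Corp", "123456789", "2006-03-15"),
]
print(audit(records))  # {'missing_duns': 1, 'indefinite_termination': 1}
```

An audit of this kind can flag the problem records, but it cannot resolve them: without a reliability check at data entry, there is no way to tell which "Indefinite" entries are genuine.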
The ISDC reports to OMB’s Office of Federal Financial Management and has been chaired by EPA’s suspension and debarment officer since 1988. In its March 2002 report on interagency coordination, the ISDC emphasized the importance of identifying a lead agency to coordinate with other federal agencies that do business with a contractor before entering into an administrative agreement. In our discussions, several suspension and debarment officials said that sharing information on past and current administrative agreements within the broader community of suspension and debarment officials would also be useful. They said that when an agency official is considering taking action with respect to a particular contractor, it would be helpful to know whether another agency had ever used an administrative agreement with that contractor, what the terms of the agreement were, and whether the contractor had complied with the agreement. That information is not currently collected centrally nor routinely made available to all suspension and debarment officials. Of the agencies we reviewed, only the Army has taken the initiative to share information on administrative agreements. In February 2005, the Army launched the “Army Fraud Fighter’s Web Site,” which includes a list of contractors with which it has entered into administrative agreements. Similarly, greater sharing of information on compelling reason waivers would be helpful. We found that information on compelling reason waivers was not readily available from most agencies we reviewed. To obtain this information, we had to reconcile the information we collected from the DOD agencies with information we collected from GSA for those agencies. The FAR supplement for DOD requires DOD to provide written notice of any compelling reason waiver determination to GSA, but we had to make repeated requests to DOD agencies and GSA in order to obtain complete information.
In our view, accountability and transparency of the process would be enhanced were this information routinely collected and reported by all agencies. For example, more information on waivers would allow suspension and debarment officials to evaluate patterns in their use and determine whether waivers were granted more commonly in some industries than others. They could also assess the rationales cited by agencies in granting waivers to determine whether agencies are applying standards consistently or whether the governmentwide standards are in need of revision. Federal agencies faced with the challenge of ensuring that they only do business with responsible contractors may not be identifying excluded contractors when awarding new contracts. Improving the EPLS database by requiring agencies to enter contractor identification numbers into the system could provide the data needed to enhance agency confidence that excluded contractors can be readily identified. Sharing information among agencies on administrative agreements and compelling reason waivers could also improve the transparency and effectiveness of the suspension and debarment process and thereby help to ensure the government’s interests are protected. To improve the effectiveness of the suspension and debarment process, we are making two recommendations: that the Administrator of General Services modify the EPLS database to require contractor identification numbers for all actions entered into the system, and that the Director of the Office of Management and Budget require agencies to collect and report data on administrative agreements and compelling reason determinations to the Interagency Suspension and Debarment Committee and ensure that these data are available to all suspension and debarment officials. We provided a draft of this report to DOD, EPA, GSA, and OMB for review and comment. DOD provided written comments, which are included in appendix V.
EPA provided technical comments on the draft, and we have incorporated these comments into the report as appropriate. GSA and OMB provided oral comments. DOD generally concurred with our recommendations. In addition to requiring the contractor identification numbers for all actions entered into the system, DOD believes that the EPLS database should include a field for the Contractor and Government Entity (CAGE) code, if available. DOD stated that given the automated procurement system used by many DOD offices, it is important to enable these offices to check for the CAGE code of a prospective contractor in the EPLS database. DOD also provided technical comments on the draft report, and we have revised the draft accordingly. GSA concurred with our recommendation that GSA modify the EPLS database to require contractor identification numbers for all actions entered into the system. GSA stated that it is in the process of competing the EPLS application, and the identification number will be a required field when the updated system becomes operational in fiscal year 2006. In addition, the updated system will be required to interface with the Central Contractor Registration System, which should improve the quality of contractor data in EPLS. The new system also should have greater capability to allow agencies to report information such as the reasons why a party has been excluded. OMB concurred with our recommendation that OMB require agencies to collect and report data on administrative agreements and compelling reason determinations to the Interagency Suspension and Debarment Committee and make this information available to all suspension and debarment officials. As agreed with your offices, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. 
At that time, we will send copies of this report to the Secretary of Defense, the Administrator of General Services, the Administrator of the Environmental Protection Agency, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report were Amelia Shachoy, Assistant Director, Marie Ahearn, Ken Graffam, Mehrunisa Qayyum, Emma Quach, Jeffrey Rose, Karen Sloan, and Cordell Smith. We conducted our work at six agencies—General Services Administration (GSA), Environmental Protection Agency (EPA), and four DOD agencies—Air Force, Army, Defense Logistics Agency (DLA), and Navy. The DOD agencies were selected on the basis of the dollar value of contracting actions reported in the Federal Procurement Data System (FPDS) for fiscal year 2003, the year for which the most recent and complete data were available at the time of our review. We selected GSA because of its central role in federal procurement and in maintaining the Excluded Parties List System (EPLS). We selected EPA because of its active role in suspension and debarment, including its role in chairing the Interagency Suspension and Debarment Committee (ISDC) and in implementing systematic procedures for tracking the status of suspension and debarment cases. Together, these agencies accounted for about 67 percent of fiscal year 2003 federal contract spending, as reported in the FPDS. We also reviewed literature and interviewed government and nongovernment officials, academics, and private sector organizations with relevant experience.
To describe the general guidance on the suspension and debarment process and how selected agencies have implemented the process, we examined the Federal Acquisition Regulation (FAR), Nonprocurement Common Rule (NCR), and the regulations and guidance of the 24 agencies that have issued supplements to the FAR governing suspension and debarment procedures. We analyzed documents and testimonial evidence at the 6 selected agencies to determine how each agency (a) used administrative agreements; (b) coordinated and shared suspension and debarment information; and (c) collected data to monitor the suspension and debarment process. To identify any needed improvements in the suspension and debarment process, we analyzed data from GSA’s EPLS as of November 18, 2004. This analysis included comparing the EPLS and FPDS databases to identify any suspended or debarred contractors that received a new contract during a period of suspension or debarment. We compared 44,634 records for excluded parties in EPLS with 1,006,919 contractors listed in FPDS at the end of fiscal year 2003, the latest year for which complete data were available at the time of our review. Because EPLS records do not require contractor identification numbers, we compared other identifiers, such as name and address, to determine whether a contract action in FPDS was for the issuance of a new contract during the period of exclusion. We also analyzed the data for the length of time parties are excluded and to determine the extent to which parties are excluded more than once. To assess the reliability of EPLS data we (1) performed electronic testing of the required data elements for obvious errors in accuracy and completeness, (2) reviewed related documentation, and (3) interviewed knowledgeable agency officials. 
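The cross-database comparison described above can be sketched as a join on a composite key of name and address, since neither database carries a shared unique identifier. The field names and records below are hypothetical, not the actual EPLS or FPDS schemas; the sketch shows how even light normalization catches only spelling-identical records, and why the comparison cannot be relied on when names vary between the databases.

```python
# Illustrative sketch of joining exclusion records to contract actions
# on name and address. Field names are hypothetical, not the actual
# EPLS/FPDS schemas.

def key(rec):
    """Composite key: lowercased name and address with whitespace collapsed."""
    return (" ".join(rec["name"].lower().split()),
            " ".join(rec["address"].lower().split()))

epls = [{"name": "XYZ Company", "address": "12 Main St"}]
fpds = [
    {"name": "XYZ Company", "address": "12  Main St", "contract": "A-001"},
    {"name": "XYZ Co.",     "address": "12 Main St",  "contract": "A-002"},
]

excluded = {key(r) for r in epls}
matches = [f["contract"] for f in fpds if key(f) in excluded]
print(matches)  # only the spelling-identical record matches; "XYZ Co." slips through
```

A join of this kind can only demonstrate that no matches were found, not that no excluded contractor received a new contract, which is why we could not draw that conclusion with confidence.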
We found the data to be insufficiently reliable for determining whether excluded contractors receive new contracts, for determining the termination dates of exclusions, or for performing simple analyses such as the average length of exclusions or the percentage of parties excluded more than one time. We also reviewed other areas for improvements, such as agencies’ internal data reporting and the role of the ISDC. We conducted our work from August 2004 through June 2005 in accordance with generally accepted government auditing standards. Statutory debarments, or exclusions, are based on statutory, executive order, or regulatory authority other than the FAR. The grounds and procedures for statutory debarments may be set forth in regulations issued by agencies, such as the Department of Labor and EPA, which have enforcement responsibilities but may not be the procuring agencies. The authorities for these statutory debarments use various terminology for exclusion, such as “ineligible,” “prohibited,” or “listing;” however, the terms all encompass sanctions precluding contract awards or involvement in a contract for a specific period of time. Table 4 lists the authorities identified in GSA’s EPLS as reasons for debarring individuals and contractors from receiving federal contracts. The FAR and NCR require agencies to establish a process for suspension and debarment. The organizational structure established to manage the process at the six agencies we reviewed is summarized in table 5.

Federal government purchases of contracted goods and services have grown to more than $300 billion annually. To protect the government's interests, the Federal Acquisition Regulation (FAR) provides that agencies can suspend or debar contractors for causes affecting present responsibility--such as serious failure to perform to the terms of a contract. The FAR provides flexibility to agencies in developing a suspension or debarment process.
GAO was asked to (1) describe the general guidance on the suspension and debarment process and how selected agencies have implemented the process, and (2) identify any needed improvements in the suspension and debarment process. We examined the FAR and the regulations of 24 agencies that have FAR supplements governing suspension and debarment procedures. We selected 6 defense and civilian agencies representing about 67 percent of fiscal year 2003 federal contract spending for in-depth review. The FAR prescribes policies governing the circumstances under which contractors may be suspended or debarred, the standards of evidence that apply to exclusions, and the usual length of these exclusions. To implement these policies, 24 agencies developed FAR supplementation. In fiscal year 2004, the 6 agencies we reviewed in depth suspended a total of 262 parties and debarred a total of 590 parties. Five agencies entered into a total of 38 administrative agreements, which permit contractors that meet certain agency-imposed requirements to remain eligible for new contracts. Agency officials said that such agreements can help improve contractor responsibility, ensure compliance through monitoring, and maintain competition. In certain circumstances, agencies can continue to do business with excluded contractors, such as when there is a compelling need for an excluded contractor's service or product. In fiscal year 2004, two of the agencies we reviewed in depth--the Air Force and the Army--issued compelling reason waivers to continue doing business with excluded parties. To help ensure excluded contractors do not unintentionally receive new contracts during the period of exclusion, the FAR requires contracting officers to consult the Excluded Parties List System (EPLS)--a governmentwide database on exclusions--and identify any competing contractors that have been suspended or debarred. However, the data in EPLS may be insufficient for this purpose. 
For example, as of November 2004, about 99 percent of records in EPLS for the 6 agencies we reviewed in depth did not have contractor identification numbers--a unique identifier that enables agencies to conclude confidently whether a contractor has been excluded. In the absence of these numbers, agencies use the company's name to search EPLS, which may not identify an excluded contractor if the contractor's name has changed. Further, information on administrative agreements and compelling reason determinations is not routinely shared among agencies. Such information could help agencies in their exclusion decisions and promote greater transparency and accountability.
FAA is responsible for setting standards, assessing compliance, and taking enforcement actions to ensure that the airlines meet safety standards. To carry out this responsibility, FAA monitors the airlines’ compliance with the Federal Aviation Regulations through periodic inspections. Those regulations set the standards for the airlines’ operations and maintenance functions. A number of possible indicators of aviation safety exist. In a 1988 report, we identified and assessed potential ways of measuring the airlines’ performance in areas important to safety. The accident rate is a widely recognized measure of overall aviation safety. However, because accidents occur so infrequently, there are no statistically significant differences in the accident rates among similar airlines. Also, because accident rates reflect what has already happened, their relevance to accident prediction or prevention can be limited. Among the other measures discussed in that report were information on inspection results, unsafe incidents, airlines’ financial condition, pilots’ competence, and maintenance quality. Safety-related aviation information varies in the extent to which it is available to the public. In general, “availability” indicates whether or not information is protected from dissemination by federal law. For example, the National Transportation Safety Board, the official source of information on airline accidents, routinely publishes information on aviation accidents. On the other hand, the public can obtain some other information only after making a request through the Freedom of Information Act (FOIA). According to FAA, information on the enforcement actions against regulated entities (i.e., air carriers, airports, manufacturers, schools, or repair stations) has generally been available to the public only through FOIA requests, or when FAA elects, on a case-by-case basis, to publicize an enforcement action. 
FAA began to take a number of actions to provide aviation safety-related information to the public in July 1996. The Administrator asked FAA’s Office of System Safety to assemble a working group of senior-level officials to determine how the FAA could most efficiently and effectively accomplish this task. In addition to FAA’s then-Deputy Administrator, the group included representatives from FAA’s offices of Regulation and Certification, Chief Counsel, Government and Industry Affairs, Civil Aviation Security, and Public Affairs. FAA solicited comments from the public and from the aviation community on how best to educate the public about, and make information available on, commercial aviation safety. FAA contracted with a consultant to generate a discussion of and obtain feedback on the types of aviation safety data that FAA might make available to the public, the means by which such information might be distributed, and the issues and considerations that arise in the distribution of these data. The contractor’s draft report was made available for public comment through the Federal Register on November 13, 1996. According to senior FAA officials, in deciding what means the agency would use to provide greater information to the public, FAA recognized the challenges of availability and accessibility. FAA noted the growing use of the Internet as an expedient and cost-effective means to provide information, especially to those in government, the aviation industry, academia, and the media. As a result, FAA announced on January 29, 1997, that it would use the Internet to pursue all three of its information strategies: establishing an aviation safety information web site linked to FAA’s Internet web site, publicizing significant enforcement actions, and undertaking a public education campaign on aviation safety. 
However, because broad sections of the general public may not have access to the Internet, FAA recognized that it might need to distribute safety information through some other supplementary means. FAA considered using toll-free telephone numbers to provide the public with certain safety information. However, on the basis of the experience of the National Highway Traffic Safety Administration, FAA decided that it lacked the staff resources to answer the large number of calls that it might receive. FAA subsequently decided to provide information, at least initially, through other public channels. As the Internet information effort develops, FAA expects to reassess the need for toll-free telephone access. FAA announced that beginning on February 1, 1997, it would issue press releases on newly issued enforcement actions concerning significant cases against regulated aviation entities that involve safety and security issues, including cases seeking civil penalties of $50,000 or more. As of April 16, FAA had issued press releases about three enforcement actions involving civil penalties, along with three instances in which it had revoked air carriers’ operating certificates. In addition to its normal procedures for issuing press releases, FAA has included them on its aviation safety information web site. FAA’s homepage is pictured in figure 1. FAA began its public education campaign about aviation safety on April 2, 1997. On the basis of the public comments received on the consultant’s draft report, FAA determined that it needed to explore more effective ways of communicating with consumers about aviation safety. To complement its information-sharing efforts, FAA’s public education campaign is designed to help the public better understand the safety of the overall system. FAA prepared a short overview of the aviation safety system and included it on its aviation safety information web site.
In addition to the press release and public education information, the aviation safety information web site includes a link to a web site maintained by the FAA’s Office of System Safety, where the public can access and search several of the principal sources of aviation safety data and information that are used by the federal government. It also includes an explanation of how to use the data and cautions about how those calculations should and should not be interpreted. Figure 2 shows the information presented on the web site on aviation safety data. FAA plans to make public various aviation safety-related databases over time. When it was first made available to the public, the web site included three aviation safety databases: The NTSB Aviation Accident/Incident Database, which is the official repository of aviation accident data and causal factors. NTSB generally defines an “accident” as an occurrence associated with the operation of an aircraft in which individuals are killed or suffer serious injury, or the aircraft is substantially damaged. An NTSB-defined incident is an occurrence, other than an accident, associated with the operation of an aircraft that affects or could affect the safety of operations. The NTSB database contains only selected incident reports. As of April 9, 1997, this database included a total of 37,696 records of aviation accidents and incidents, dating back to 1983. By far, the vast majority (34,073, or approximately 90.4 percent) concerned general aviation aircraft accidents and incidents; 3,623 records (9.6 percent) concerned large or commuter air carriers’ accidents and incidents. The NTSB’s safety recommendations to FAA with FAA’s responses. NTSB uses information it gathers during accident investigations and the determination of probable cause to make safety recommendations to all elements of the transportation industry. 
The recipient of a recommendation must respond formally to the recommendation and specify what action is or is not being taken and why. This database includes the 3,471 recommendations made by NTSB to FAA since 1963, along with FAA’s responses. The FAA’s Incident Data System, which contains a more extensive collection of records of aviation incidents—potentially hazardous events that do not meet the aircraft damage or personal injury thresholds contained in NTSB’s definition of an accident. As of April 9, 1997, this database included a total of 67,057 records of aviation incidents, dating back to 1978. As with the NTSB’s Aviation Accident/Incident Database, a relatively small percentage (28.0 percent) of the total number of records concerned incidents experienced by large or commuter air carriers. Users cannot readily retrieve complete copies of these three databases. Rather, users may browse (i.e., look at) individual records, count records (e.g., all accidents involving commuter air carriers during a given time period), or select particular reports on the basis of user-supplied words or phrases (e.g., smoke) and/or user-selected criteria, such as the aircraft’s category of operation. FAA added another database on March 31, 1997, that provides the means by which the accident and incident information can be put into some context. FAA extracted this database—Airline Traffic Statistics—from information gathered by the Bureau of Transportation Statistics (BTS). It contains three selected measures of individual airlines’ operations: the number of departures, hours flown, and miles flown, by year, in domestic commercial service during the 5-year period from 1991 through 1995. Those statistics are the activity measures most frequently used to calculate accident and incident rates for the airlines. Unlike the first databases that FAA included on its web site, users cannot search the data on traffic statistics on the Internet. 
Users can, however, obtain a copy of this complete database from FAA’s web site, for use on their own computers. FAA includes warnings and disclaimers to explain the limitations of the databases it includes on its web sites. In general, these warnings and disclaimers state that the contents of the web sites are unofficial. FAA notes that the databases may not be complete and makes no certification about the accuracy of the data. Since FAA first established its aviation safety web site on the Internet, it has seen an approximately fourfold increase in the number of users who have accessed the safety data web site each week. FAA’s computers measure usage in several ways, and each indicates that usage of FAA’s site has grown since it was made public. The best measure of web site usage, according to FAA officials, is the number of users who have accessed the site. Although FAA cannot identify every individual user who accesses its site, it does count the number of users that access the web site over a period of time using a measure called a “user session.” In mid-January, before FAA publicly announced the availability of the web site, it averaged about 2,000 user sessions per week, even though the web site consisted mainly of a page explaining that the data will be available at a later date. After media attention about the availability of the web site in late January, the usage that week grew to almost 9,000 user sessions. After declining over several weeks, usage again grew after FAA added the searchable safety data to the site on February 28. FAA hosted about 8,200 user sessions during the last week of March. Figure 3 illustrates the number of user sessions per week for the safety data web site. In addition to an increase in the number of users, FAA’s data indicate that the public is utilizing the safety data web site more often than when it was first made available. First, FAA tracks the average time of each user session. 
The length of the average user session had grown to about 12 minutes in early April. Also, FAA tracks the number of times each user requests a file from FAA’s computers—called a “hit.” The average number of hits generated during each user session has also grown, from 18.6 in early January to as high as 31.9 in early April. (These data on the number of hits per user session are displayed in table 1.) According to FAA officials, the increases in both of these statistics indicate that users are finding the safety data more useful, possibly for research, than in the past. They added, however, that it is too early to tell if these trends will continue. Finally, FAA’s computers also keep track of the host computer of each user who accesses the safety data site. The user’s host computer is operated by the organization that provides access to the Internet, whether that organization is an Internet service provider (such as America Online, Compuserve, or Netcom, that mainly serve the public) or another organization, such as Boeing. These data indicate that many of the host computers that access FAA’s site most frequently are operated by Internet service providers. Other frequent users are the Air Force and airlines such as Delta, which operate host computers that are generally available only to their employees. These same data indicate that about 10 percent of those who access FAA’s site are doing so from a computer located outside the United States—mostly from Germany and Canada. Because FAA made the safety education material available only recently (on April 2, 1997), it has only limited information on the number of user sessions for that web site: FAA recorded 313 user sessions on that web site for the week ending April 5 and 515 for the week ending April 12. 
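The usage statistics discussed above reduce to simple arithmetic over web-server logs: hits per user session is total file requests divided by distinct sessions. The sketch below uses an invented log format, not FAA's actual server records, to show the calculation.

```python
# Back-of-the-envelope sketch of the web-usage metric discussed above:
# grouping log entries ("hits") by user session and computing the
# average number of hits per session. The log format is invented.
from collections import Counter

hits = ["s1", "s1", "s2", "s1", "s2", "s3"]  # session id per file request

per_session = Counter(hits)                       # hits grouped by session
avg_hits = sum(per_session.values()) / len(per_session)
print(avg_hits)  # total hits divided by distinct user sessions
```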
For the press releases on enforcement actions, however, FAA’s statistics indicate that weekly usage has generally fallen since FAA first made those press releases available, and fewer users have accessed this page than have accessed the safety data web site. Figure 4 shows the change in the number of weekly uses of the press release information since early February 1997. FAA plans to add other safety-related information to the web site gradually over time. By May 31, 1997, FAA plans to add data from the FAA National Airspace Incident Monitoring System, which includes information on near mid-air collisions. On June 1, 1997, FAA expects to make available a quarterly report of enforcement actions in the safety and security areas against aviation entities. This report, which describes enforcement actions closed with a civil penalty or orders of certificate suspension or revocation, will cover the first quarter of 1997. Thereafter, FAA expects to issue its quarterly enforcement reports about 30 days after the end of each quarter. FAA has also indicated that it will expand the available information on airline traffic statistics in two ways. First, it will add data for 1996 as soon as it receives them from BTS in June or July. In addition, FAA expects to add traffic statistics for commuter airlines. At present, the traffic statistics that FAA has posted are limited to ones on domestic operations by large air carriers (i.e., generally those that operate aircraft with more than 60 seats). In addition, by the end of September 1997, FAA will develop a new database that will provide certain basic information about each air carrier, such as the number of specific makes, the models, and the ages of the aircraft flown by the carrier and the date when the carrier was certificated by the Department of Transportation (DOT) and FAA to operate. 
According to FAA officials, the agency has not yet decided how much data should be provided from the existing FAA databases or whether some information could be better provided by the individual air carriers, perhaps in conjunction with their trade associations, through direct links between their respective Internet web sites and FAA’s. According to FAA officials, the agency also intends to evaluate its efforts to provide safety information to the public, but not until March 1998, after the web site has been in operation for approximately 1 year. In an October 1996 report on aviation safety, we concluded that the time had come for FAA to begin the process that can lead to publishing airline-specific safety data. The report recommended that the Secretary of Transportation instruct the Administrator of FAA to study the feasibility of developing measurable criteria for what constitutes aviation safety, including those airline-specific, safety-related performance measures that could be published for use by the traveling public. DOT concurred with that recommendation. FAA’s Internet web site represents a good first step toward providing the public with some aviation safety information. Providing the information in which FAA has the greatest confidence—NTSB’s accident/incident data, FAA’s incident data, and BTS’ traffic data—seems to be a reasonable approach. The early data on the usage of the web site indicate that the public has an interest in aviation safety data. FAA has said that evaluating its efforts will be an important aspect of its overall strategy of providing more information to the traveling public. We agree. Such an evaluation could help FAA determine whether it is meeting the needs of the traveling public and whether it should improve, refine, or expand its safety information, as well as improve the quality of the underlying data. 
It might also incorporate considerations of the extent to which the public finds these data easily usable, in view of the complexity and size of the posted databases. While it is too early to conduct an evaluation, FAA could begin the planning necessary to ensure that its evaluation produces meaningful results. We provided DOT and FAA with copies of a draft of this report. We met with DOT and FAA officials, including the Manager of FAA’s Safety Data Services Division, acting on behalf of the Deputy Assistant Administrator for System Safety. DOT and FAA officials agreed with the draft report’s overall message and provided editorial and technical comments that we incorporated as appropriate. The information in this report was developed through discussions with officials at FAA and analysis of data on the usage of FAA’s web site over time. We also reviewed previously issued GAO products, pertinent federal regulations, and FAA’s Internet web sites. We did not independently assess the quality of the data that FAA includes on its Internet web sites. We performed our review from March through mid-April 1997 in accordance with generally accepted government auditing standards. As you requested, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days. We will then send copies to the Secretary of Transportation; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Major contributors to this report were Thomas Kai; Steve Martin; and James Sweetman, Jr. Please call me at (202) 512-3650 if you or your staff have further questions. Gerald L. Dillingham Associate Director, Transportation Issues The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. 
Pursuant to a congressional request, GAO reviewed: (1) what actions the Federal Aviation Administration (FAA) has taken to make aviation safety information more available to the public; (2) what the public demand has been for FAA's aviation safety information; (3) FAA's plans to expand the aviation safety information available to the public; and (4) FAA's progress in making safety information available to the public.
GAO noted that: (1) FAA took a number of actions to provide aviation safety-related information to the public beginning in July 1996; (2) FAA formed a working group of senior-level agency officials and adopted a strategy of providing aviation safety information to the public through a three-part effort: (a) establishing an aviation safety information web site linked to the FAA's Internet web site; (b) publicizing significant enforcement actions; and (c) undertaking a public education campaign on aviation safety; (3) as of April 10, 1997, FAA has included four databases on its aviation safety Internet web site; (4) those databases include information on aviation accidents, other safety-related incidents, traffic data (e.g., departures made) reported by large commercial air carriers, which can be used to calculate comparative accident or incident rates, and the safety recommendations made by the National Transportation Safety Board to FAA; (5) since FAA first made its aviation safety site on the Internet available to the public, it has seen an approximate fourfold increase in the number of users who have accessed the web site each week; (6) usage has increased during those weeks when a public announcement related to the site has been made; (7) in addition, FAA's data indicate that users are spending more time using the site; (8) it is too soon, however, to tell if these trends will continue; (9) FAA plans to expand the number of databases that it posts on its aviation safety web site throughout the rest of 1997; and (10) it expects to incorporate information on the airlines' composition (i.e., the make, models, and ages of aircraft in each airline's fleet) and other indicators of aviation safety (e.g., data on near mid-air collisions).
It is essential that DOD employ sound practices when using contractors to support its missions or operations to ensure the department receives value. This means clearly defining its requirements, using the appropriate contract type, and properly overseeing contract administration. Our work, however, has repeatedly identified problems with the practices DOD uses to acquire services. Further, an overarching issue that impacts DOD’s ability to properly manage its growing acquisition of services is having an adequate workforce with the right skills and capabilities. Collectively, these problems expose DOD to unnecessary risk and make it difficult for the department to ensure that it is getting value for the dollars spent. Since fiscal year 2001, DOD obligations for service contracts have doubled while its acquisition workforce has remained relatively unchanged (see fig. 1). Properly defined requirements—whether at the DOD-wide level or the contract level—are a prerequisite to obtaining value for the department. At the DOD-wide level the department should have an understanding of what it needs to contract for and why. However, we have frequently noted that the department continues to be challenged to understand how reliant it is on contractors and has yet to clearly determine what services it should obtain from contracts and what services should be provided by the military or DOD civilian employees. Furthermore, DOD lacks basic data about its service contracts that could help it determine how it contracts for services and how reliant it is on contractors. For example, at this time, the department does not have complete and accurate information on the number of services contracts in use, the services being provided by those contracts, the number of contractors providing those services, and the number and types of contracts awarded. 
Once DOD determines what services contractors should provide, both the contractor and the government need to have a clear sense of what the contractor is required to do under the contract. Poorly defined or changing requirements have contributed to increased costs, as well as services that did not meet the department’s needs. The absence of well-defined requirements and clearly understood objectives complicates efforts to hold DOD and contractors accountable for poor acquisition outcomes. For example: DOD sometimes authorizes contractors to begin work before reaching a final agreement on the contract terms and conditions, including price. These types of contract actions, known as undefinitized contract actions, are used to meet urgent needs or when the scope of the work is not clearly defined. In July 2007, we reported that DOD contracting officials were more likely to pay costs questioned by Defense Contract Audit Agency (DCAA) auditors if the contractor had incurred these costs before reaching agreement with DOD on the work’s scope and price. In fact, DOD decided to pay the contractor nearly all of the $221 million in questioned costs after making a determination based on additional information. The lack of timely negotiations contributed significantly to DOD’s decision—all 10 task orders were negotiated more than 180 days after the work commenced. The negotiation delays were in part caused by changing requirements, funding challenges, and inadequate contractor proposals. In both July 2004 and September 2006, we reported that a disagreement between a contractor and DCAA on how to bill for services to feed soldiers in Iraq resulted in at least $171 million in questioned costs that DOD did not pay. The disagreement concerned whether the government should be billed based on the camp populations specified in the statement of work or on the actual head count.
A clearer statement of work, coupled with better DOD oversight of the contract, could have prevented the disagreement and mitigated the government’s risk of paying for more services than needed. Negotiations between the contractor and DOD resulted in a settlement whereby $36 million would not be paid to the contractor. On the other hand, requirements that provide DOD with a greater level of service or performance than required can undermine the department’s efforts to ensure value. For example: In December 2008, we issued a report on performance based logistics, which is defined by DOD as the purchase of performance outcomes (such as the availability of functioning weapon systems) through long-term support arrangements rather than the purchase of individual elements of support—such as parts, repairs, and engineering support. In that report, we noted that, for eight of the performance based logistics arrangements we reviewed, the contractors significantly exceeded some of the contractual performance requirements. We further noted that since the government is paying for this excess performance, the performance based logistics arrangement, as structured, may not provide the best value to the government. For example, since 2002, the average annual operational readiness for the Tube-launched, Optically-tracked, Wire-guided missile – Improved Target Acquisition System has not been below 99 percent, and the system’s operational readiness has averaged 100 percent since 2004. According to a program official, the Army’s readiness standard for this system is 90 percent. Despite this standard, the Army continued to include a performance incentive that encouraged higher levels of performance when negotiating a follow-on performance based logistics contract in 2007. The performance incentive includes payment of an award fee that encourages operational readiness rates from 91 to 100 percent, with the highest award fee paid for 100 percent average operational readiness.
When contracting for services, DOD has a number of choices regarding the contracting arrangements to use. Selecting the appropriate type is important because cost-reimbursable contracts may increase the government’s cost risk whereas firm-fixed-price arrangements transfer some of that cost risk to the contractor. While use of the appropriate contract type is important, it is not the sole factor in a successful acquisition outcome—as noted in this statement, good requirements and oversight of contractor performance are also important. We have found that DOD did not always use the contracting arrangements that would result in the best value to the government. For example: In January 2008, we reported that the cost-plus-fixed-fee provisions of a task order issued by the Army to repair equipment for use in Iraq and Afghanistan required the Army to pay the contractor to fix equipment rejected by Army inspectors for failing to meet the quality standard established in the task order. Under the cost-plus-fixed-fee maintenance provisions in the task order, the contractor was reimbursed for all maintenance labor hours incurred, including labor hours associated with maintenance performed after the equipment failed to meet the Army’s maintenance standards. This resulted in additional cost to the government. Our analysis of Army data between May 2005 and May 2007 showed that the contractor worked about 188,000 hours to repair equipment after the first failed Army inspection at an approximate cost to the government of $4.2 million. In June 2007, we found numerous issues with DOD’s use of time-and-materials contracts. DOD reported that it obligated nearly $10 billion under time-and-materials contracts in fiscal year 2005, acquiring, among other services, professional, administrative, and management support services. Some specific examples of the services DOD acquired included subject matter experts in the intelligence field and systems engineering support.
These time-and-materials contracts are appropriate when specific circumstances justify the risks, but our findings indicate that they are often used as a default for a variety of reasons—ease, speed, and flexibility when requirements or funding are uncertain. According to DOD, time-and-materials contracts are considered high risk for the government because they provide no positive profit incentive to the contractor for cost control or labor efficiency, and their use is supposed to be limited to cases where no other contract type is suitable. We found, however, that DOD underreported its use of time-and-materials contracts, frequently did not justify why such contracts were the only contract type suitable for the procurement, and inconsistently monitored these contracts. In 2007, we also reported that DOD needed to improve its management and oversight of undefinitized contract actions (UCAs), under which DOD can authorize contractors to begin work and incur costs before reaching a final agreement on contract terms and conditions, including price. The contractor has little incentive to control costs during this period, creating a potential for wasted taxpayer dollars. DOD’s use of some UCAs could have been avoided with better acquisition planning. In addition, DOD frequently did not definitize the UCAs within the required time frames, thereby increasing the cost risk to the government. Further, its contracting officers were not documenting the basis for the profit or fee negotiated, as required. As such, we called on DOD to strengthen management controls and oversight of UCAs to reduce the risk of paying unnecessary costs. In July 2004, we reported that the Air Force had used the Air Force Contract Augmentation Program contract to supply commodities for its heavy construction squadrons because it did not deploy with enough contracting and finance personnel to buy materials quickly or in large quantities.
In many instances, the contractor provided a service for the customer, such as equipment maintenance, in addition to the procurement of the supplies. In other cases, however, the contractor simply bought the supplies and delivered them to the customer. In July 2004, we noted that the contract allowed for an award fee of up to 6 percent for these commodity supply task orders. While contractually permitted, the use of a cost-plus-award-fee contract as a supply contract may not be cost-effective. In these instances, the government reimburses the contractors’ costs and pays an award fee that may be higher than warranted given the contractors’ low level of risk when performing such tasks. Air Force officials recognized that the use of a cost-plus-award-fee contract to buy commodities may not be cost-effective. Under the current contract, commodities may be obtained using firm-fixed-price task orders, cost-plus-award-fee task orders, or cost-plus-fixed-fee task orders. We reported on numerous occasions that DOD did not adequately manage and assess contractor performance to ensure that its business arrangements were properly executed. Managing and assessing post-award performance entails various activities to ensure that the delivery of services meets the terms of the contract and requires adequate surveillance resources, proper incentives, and a capable workforce for overseeing contracting activities. If surveillance is not conducted, is insufficient, or is not well documented, DOD is at risk of being unable to identify and correct poor contractor performance in a timely manner. For example: Our 2008 review of six Army services contracts or task orders found that contract oversight was inadequate in three of the contracts we reviewed because of a lack of trained oversight and management personnel. For example, in the contracting office that managed two of the contracts we reviewed, 6 of 18 oversight positions were vacant.
One of the vacant positions was the performance evaluation specialist responsible for managing the Army’s quality assurance program for two multi-million dollar contracts and training other quality assurance personnel. Other vacant positions included three contract specialists responsible for, among other tasks, reviewing monthly contractor invoices. As a result of these vacancies, the contracting officer’s representative was reviewing contractor invoices to ensure that expenses charged by the contractor were valid, a responsibility for which he said he was not trained. We also reported that contract oversight personnel for the Army’s linguist contract were unable to judge the performance of the contractor employees because they were generally unable to speak the languages of the contractor employees they were responsible for overseeing. DOD has, over the last several years, emphasized the use of performance based logistics arrangements, in part, to reduce the cost of supporting weapon systems. However, in December 2008, we reported that although DOD guidance recommends that cost data be captured for performance based logistics contracts to aid in future negotiations, we found program offices generally did not receive detailed cost data and only knew the overall amounts paid for support. For example, for the 21 fixed-price arrangements in our sample, only two program offices obtained contractor support cost data reports. We also reported that, in seven out of eight programs we reviewed where follow-on, fixed-price performance based logistics contracts had been negotiated, expected cost reductions either did not materialize or could not be determined. 
In our September 2008 review of services contracts supporting contingency operations, we reported that the Army’s oversight of some of the contracts was inadequate in part because contracting offices were not maintaining complete contract files documenting contract administration and oversight actions taken, in accordance with DOD policy and guidance. As a result, incoming contract administration personnel did not know whether the contractors were meeting their contract requirements effectively and efficiently and therefore were limited in their ability to make informed decisions related to award fees, which can run into the millions of dollars. In December 2006, we reported that DOD did not have sufficient numbers of contract oversight personnel at deployed locations, which limited its ability to obtain reasonable assurance that contractors were meeting contract requirements efficiently and effectively. For example, an Army official acknowledged that the Army struggled to find the capacity and expertise to provide the contracting support needed in Iraq. Similarly, an official with the LOGCAP Program Office told us that the office did not prepare to hire additional budget analysts and legal personnel in anticipation of an increased use of LOGCAP services due to Operation Iraqi Freedom. According to the official, had adequate staffing been in place early, the Army could have realized substantial savings through more effective reviews of the increasing volume of LOGCAP requirements. A Defense Contract Management Agency official responsible for overseeing the LOGCAP contractor’s performance at 27 locations noted that he was unable to visit all of those locations during his 6-month tour to determine the extent to which the contractor was meeting the contract’s requirements. In December 2005, we reported that DOD, in using award-fee contracts, routinely engaged in practices that did not hold contractors accountable for achieving desired acquisition outcomes.
These practices included evaluating contractors on award-fee criteria not directly related to key acquisition outcomes; paying contractors a significant portion of the available fee for what award-fee plans describe as “acceptable, average, expected, good, or satisfactory” performance; and giving contractors at least a second opportunity to earn initially unearned or deferred fees. As a result, DOD had paid an estimated $8 billion in award fees on contracts in our study population, regardless of whether acquisition outcomes fell short of, met, or exceeded DOD’s expectations. As such, we recommended that DOD improve its use of fees by specifically tying them to acquisition outcomes in all new award- and incentive-fee contracts, maximizing contractors’ motivation to perform, and collecting data to evaluate the effectiveness of fees. In March 2005, we reported instances of insufficient surveillance on 26 of 90 DOD service contracts we reviewed. In each instance, at least one measure to ensure adequate surveillance did not take place. These measures include (1) training personnel in how to conduct surveillance, (2) assigning personnel at or prior to contract award, (3) holding personnel accountable for their surveillance duties, and (4) performing and documenting surveillance throughout the period of the contract. GAO’s body of work on contract management and the use of contractors to support deployed forces has resulted in numerous recommendations over the last several years. In addition, Congress has enacted legislation requiring DOD to take specific actions to improve its management and oversight of contracts. In response, DOD has issued guidance to address contracting weaknesses and promote the use of sound business arrangements.
DOD has established a framework for reviewing major services acquisitions, promulgated regulations to better manage its use of contracting arrangements that can pose additional risks for the government, including time-and-materials contracts and undefinitized contracting actions, developed guidance on linking monetary incentives for contractors to acquisition outcomes, and has efforts under way to identify and improve the skills and capabilities of its workforce. These are positive steps, but inconsistent implementation has hindered past DOD efforts to address these high-risk areas. To improve outcomes on the whole, DOD must ensure that these policy changes and others are consistently put into practice and reflected in decisions made on individual acquisitions. We have ongoing work assessing DOD’s efforts to implement a service acquisition management approach, including its development of a structure for reviewing its major services acquisitions, as well as its use of different types of contract arrangements. Section 801 of the National Defense Authorization Act for Fiscal Year 2002 required DOD to establish a management structure for the procurement of services, including developing a structure for reviewing individual service transactions, holding accountable employees responsible for procuring services, and collecting and analyzing service contract data. In addition, section 802 of the National Defense Authorization Act for Fiscal Year 2002 established a goal for DOD to use improved management practices to achieve savings in expenditures for procurement of services. In response to this requirement, DOD and the military departments established a service acquisition management structure, including processes at the headquarters level for reviewing individual, high-dollar acquisitions. The National Defense Authorization Act for Fiscal Year 2006 further developed the requirements for a management structure for the procurement of contract services. 
Among other things, the National Defense Authorization Act for Fiscal Year 2006 required DOD’s management structure to provide for the Under Secretary of Defense for Acquisition, Technology and Logistics (USDAT&L) to: (1) establish contract services acquisition categories, based on dollar thresholds, for the purpose of establishing the level of review, decision authority, and applicable procedures; and (2) identify the critical skills and competencies needed to carry out the procurement of services. The National Defense Authorization Act for Fiscal Year 2006 also required the USDAT&L and senior acquisition management officials within the military departments to ensure that competitive procedures and performance-based contracting are used to the maximum extent practicable. In 2006, DOD updated its policies aimed at strengthening how it plans, manages, and oversees services acquisition in response to the legislation. Later, in December 2008, DOD incorporated its acquisition review thresholds for major services acquisitions in DOD Instruction 5000.02, Operation of the Defense Acquisition System. The National Defense Authorization Act for Fiscal Year 2008 required DOD to take additional actions to improve its visibility over the department’s reliance on services contractors as well as its management and oversight of its services acquisitions. Section 807 required DOD to provide Congress an annual inventory of contractor-provided services, to include information on the missions and functions of the contractor, the number of full-time contractor employees paid for performing the activity, and the organization whose requirements are being met through contractor performance. In addition, this provision required the military departments to review the inventory to identify activities that should be considered for conversion to performance by DOD civilian employees or to an acquisition approach that would be more advantageous to DOD.
The first inventory was to have been reported to Congress not later than June 30, 2008. At this time, however, only the Army has begun the process to comply with this requirement. According to DOD officials, the Air Force and Navy will issue their prototype inventories in the third quarter of fiscal year 2009. Section 808 required DOD to issue guidance and implementation instructions for performing periodic independent management reviews of contracts for services. In September 2008, DOD issued a policy memorandum to implement these reviews, referred to as peer reviews. Under DOD’s plan, the Director, Defense Procurement, Acquisition Policy and Strategic Sourcing would be responsible for implementing reviews of acquisitions of services with an estimated maximum value of over $1 billion, while the DOD components would be responsible for reviews of acquisitions under $1 billion. In February 2009, DOD revised its guidance for how the review teams should conduct peer reviews to address pre- and post-award review elements of the acquisition and the criteria that should be used to conduct these reviews. According to DOD officials, this guidance was developed as part of the agency’s response to some of the issues identified in our DOD contract management high-risk area. We continue to monitor DOD’s implementation of these efforts. In late 2008, DOD began an effort, directed by the Chairman of the Joint Chiefs of Staff, to examine the department’s use of service contracts in Iraq and Afghanistan. The purpose of this effort is to improve DOD’s understanding of the range and depth of contractor capabilities necessary to support the Joint Force. The study will address where DOD is most reliant on contractor support, informing longer-term force structure issues such as the potential for increasing DOD’s military and civilian work force in order to in-source services currently provided by contractors.
We have also made numerous recommendations over the past 10 years aimed at improving DOD’s management and oversight of contractors supporting deployed forces, including the need for (1) DOD-wide guidance on how to manage contractors that support deployed forces, (2) improved training for military commanders and contract oversight personnel, and (3) a focal point within DOD dedicated to leading DOD’s efforts to improve the management and oversight of contractors supporting deployed forces. In addition, Section 854 of the National Defense Authorization Act for Fiscal Year 2007 directed the Secretary of Defense, in consultation with the Chairman of the Joint Chiefs of Staff, to develop joint policies for requirements definition, contingency program management, and contingency contracting during combat and post-conflict operations. The National Defense Authorization Act for Fiscal Year 2008 added a new requirement directing that these joint policies provide for training of military personnel outside the acquisition workforce who are expected to have acquisition responsibilities, including oversight of contracts or contractors during combat operations, post-conflict operations, and contingency operations. As we reported in November 2008, while DOD has more to do in this area, it is developing, revising, and finalizing new joint policies and guidance on the department’s use of contractors to support deployed forces. Examples include: In October 2008, DOD finalized Joint Publication 4-10, Operational Contract Support, which establishes doctrine for planning, conducting, and assessing operational contract support integration and contractor management functions in support of joint operations. The joint publication provides standardized guidance and information related to integrating operational contract support and contractor management.
DOD is revising DOD Instruction 3020.41, Program Management for the Preparation and Execution of Acquisitions for Contingency Operations, which strengthens the department’s joint policies and guidance on program management, including the oversight of contractor personnel supporting a contingency operation. DOD has also taken steps to improve the training of military commanders and contract oversight personnel. As we reported in November 2008, the Deputy Secretary of Defense issued a policy memorandum in August 2008 directing the appointment of trained contracting officer’s representatives prior to the award of contracts. U.S. Joint Forces Command is developing two training programs for non-acquisition personnel to provide information necessary to operate effectively on contingency contracting matters and work with contractors on the battlefield. In addition, the Army has a number of training programs available that provide information on contract management and oversight to operational field commanders and their staffs. The Army is also providing similar training to units as they prepare to deploy, and DOD, the Army, and the Marine Corps have begun to incorporate contractors and contract operations in mission rehearsal exercises. In October 2006, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness established the office of the Assistant Deputy Under Secretary of Defense (Program Support) to act as the focal point for DOD’s efforts to improve the management and oversight of contractors supporting deployed forces. This office has taken several steps to help formalize and coordinate efforts to address issues related to contractor support to deployed forces. For example, the office took a leading role in establishing a community of practice for operational contract support—comprising subject matter experts from the Office of the Secretary of Defense, the Joint Staff, and the services—that may be called upon to work on a specific task or project.
Additionally, the office helped establish a Joint Policy Development General Officer Steering Committee to guide the development of the Office of the Secretary of Defense, Joint Staff, and service policy, doctrine, and procedures to adequately reflect situational and legislative changes as they occur within operational contract support. In addition, DOD has efforts under way to identify and improve the skills and capabilities of its workforce. For example, in response to recommendations from the Gansler Commission, the Army proposed increasing its acquisition workforce by over 2,000 personnel. However, the Army also acknowledged that this process will take at least 3 to 5 years to complete. In addition, we continue to monitor DOD’s planned and completed corrective actions to address our audit report recommendations to improve its acquisition of services. As the largest buyer of services in the federal government, and operating in an environment where the nation’s large and growing deficits require difficult resource decisions, DOD must maximize its return on investment and provide the warfighter with needed capabilities at the best value for the taxpayer. DOD has recognized that it faces challenges with contract management, and the department has taken steps to address these challenges, including those outlined in this testimony. These challenges are daunting. While DOD’s recent initiatives may improve how the department plans service acquisitions at a strategic level, these efforts will not pay off unless DOD’s leadership can translate its vision into changes in frontline practices. At this point, DOD does not know how well its services acquisition processes are working and whether it is obtaining the services it needs while protecting DOD’s and the taxpayer’s interests. While DOD has generally agreed with our recommendations intended to improve contract management, much remains to be done.
For example: In the near term, DOD must act forcefully to implement new procedures and processes in a sustained, consistent, and effective manner across the department. Doing so will require continued, sustained commitment by senior DOD leadership to translate policy into practice and to hold decision makers accountable. At the same time, while the department and its components have taken or plan to take actions to further address contract management challenges, many of these actions, such as the Army’s efforts to increase its acquisition workforce, will not be fully implemented for several years. DOD will need to monitor such efforts to ensure that intended outcomes are achieved. At the departmentwide level, DOD has yet to conduct the type of fundamental reexamination of its reliance on contractors that we called for in 2008. Without understanding the depth and breadth of contractor support, the department will be unable to determine if it has the appropriate mix of military personnel, DOD civilians, and contractors. As a result, DOD may not be totally aware of the risks it faces and will therefore be unable to mitigate those risks in the most cost-effective and efficient manner. The implementation of existing and emerging policy, monitoring of the department's actions, and the comprehensive assessment of what should and should not be contracted for are not easy tasks, but they are essential if DOD is to place itself in a better position to deliver goods and services to the warfighters. Moreover, with an expected increase of forces in Afghanistan, the urgency for action is heightened to help the department avoid the same risks of fraud, waste, and abuse it has experienced using contractors in support of Operation Iraqi Freedom. Mr. Chairman and members of the committee, this concludes our testimony. We would be happy to answer any questions you might have. 
For further information about this testimony, please contact John Hutton, Director, Acquisition and Sourcing Management, on (202) 512-4841 or huttonj@gao.gov or William Solis, Director, Defense Capabilities and Management, on (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Carole Coffey, Timothy DiNapoli, Justin Jaynes, John Krump, Christopher Mulkins, James A. Reynolds, Karen Thornton, Thomas Twambly, and Anthony Wysocki. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists. GAO-08-360. Washington, D.C.: March 26, 2008. Defense Contract Management: DOD’s Lack of Adherence to Key Contracting Principles on Iraq Oil Contract Put Government Interests at Risk. GAO-07-839. Washington, D.C.: July 31, 2007. Defense Contracting: Improved Insight and Controls Needed over DOD’s Time-and-Materials Contracts. GAO-07-273. Washington, D.C.: June 29, 2007. Defense Contracting: Use of Undefinitized Contract Actions Understated and Definitization Time Frames Often Not Met. GAO-07-559. Washington, D.C.: June 19, 2007. Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD’s Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007. Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes. GAO-07-20. Washington, D.C.: November 9, 2006. Contract Management: DOD Developed Draft Guidance for Operational Contract Support but Has Not Met All Legislative Requirements. GAO-09-114R. Washington, D.C.: November 20, 2008. Contingency Contracting: DOD, State, and USAID Contracts and Contractor Personnel in Iraq and Afghanistan. GAO-09-19. Washington, D.C.: October 1, 2008. 
Military Operations: DOD Needs to Address Contract Oversight and Quality Assurance Issues for Contracts Used to Support Contingency Operations. GAO-08-1087. Washington, D.C.: September 26, 2008. Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight. GAO-08-572T. Washington, D.C.: March 11, 2008. Defense Logistics: The Army Needs to Implement an Effective Management and Oversight Plan for the Equipment Maintenance Contract in Kuwait. GAO-08-316R. Washington, D.C.: January 22, 2008. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In fiscal year 2008, the Department of Defense (DOD) obligated over $200 billion on contracts for services, which accounted for more than half of its total contract obligations. Given the serious budget pressures facing the nation, it is critical that DOD obtain value when buying these services. Yet DOD does not always use sound practices when acquiring services, and the department lacks sufficient people with the right skills to support its acquisitions. Although DOD has ongoing efforts to improve its planning, execution, and oversight of service acquisitions, many concerns that prompted GAO to put DOD contract management on its high-risk list in 1992 remain. The committee asked GAO to address challenges facing DOD in measuring the value from and risks associated with its contracting for services. This testimony provides an overview of key concerns GAO cited in its previous reports.
Specifically, it focuses on (1) challenges DOD faces in following sound contracting and contract management practices and (2) recent actions DOD has taken to improve its management of service contracting. GAO has made numerous recommendations over the past decade aimed at improving DOD's management and oversight of service contracts, but it is not making any new recommendations in this testimony. DOD continues to face challenges in employing sound practices when contracting for and managing service contracts. The department has obtained services based on poorly defined requirements, used inappropriate business arrangements and types of contracts, and failed to adequately oversee and manage contractor performance. For example: (1) DOD sometimes authorized contractors to begin work before reaching a final agreement on the contract terms and conditions, including price. These arrangements, known as undefinitized contract actions, are used to meet urgent needs or when the scope of the work is not clearly defined. In July 2007, GAO reported that DOD paid contractors nearly $221 million in questioned costs under one of these arrangements. (2) In fiscal year 2005, DOD obligated nearly $10 billion for professional, administrative, management support, and other services under time-and-materials contracts--contracts that are high risk for the government because they provide no profit incentive to the contractor for cost control or labor efficiency. As such, their use is supposed to be limited to cases where no other contract type is suitable and specific approvals are obtained. However, DOD frequently failed to provide such justification, and GAO's findings indicated the contracts were often used for expediency.
(3) In a 2008 review, GAO found that incomplete contract files at some Army contracting offices hindered incoming contract administration personnel's ability to assess contractors and make informed decisions related to award fees, which can run into the millions of dollars. These challenges expose DOD to unnecessary risk and may impede the department's efforts to manage the outcomes of its service contracts. For example, the absence of well-defined requirements complicates efforts to hold DOD and contractors accountable for poor acquisition outcomes. Use of inappropriate contract types, in addition to other factors, can result in DOD not obtaining the best value for its contract spending. Finally, failure to provide adequate oversight makes it difficult to identify and correct poor contractor performance in a timely manner. While DOD has taken some actions to respond to GAO's recommendations and congressional legislation, inconsistent implementation has hindered past DOD efforts to address these high-risk areas. To improve outcomes on the whole, DOD must ensure that these policy changes and others are consistently put into practice and reflected in decisions made on individual acquisitions. In addition, DOD needs to develop basic data about its service contracts to help inform how it contracts for services and its reliance on these contractors. GAO continues to assess DOD's efforts to implement a service acquisition management approach and the department's management and oversight of contractors supporting deployed forces.
The Federal Aviation Administration (FAA) is responsible for developing, administering, enforcing, and revising an effective, enforceable set of aviation safety regulations that enhance aviation safety and security and promote the efficient use of airspace. Generally, a regulation is an agency statement that is designed to implement, interpret, or prescribe law or policy or to describe procedural requirements. The process by which FAA and other federal agencies develop regulations is called rulemaking. FAA’s rulemaking activities encompass all of the agency’s areas of responsibility, including air traffic control, aviation security, and commercial space transportation. FAA must address both long-standing and emerging issues in its rulemaking efforts. For example, questions about the safety of aging aircraft and the adequacy of flight duty rest requirements for airline pilots have been debated for decades. In contrast, the issues of fire safety standards for cargo compartments and the transport of oxygen generators emerged after the ValuJet crash outside of Miami in May 1996. Rulemaking can be a complex and time-consuming process, and the Congress expressed its concerns about the speed of FAA’s rulemaking in 1996, when it enacted legislation that established time frames for steps in the process. While some rules may need to be developed quickly to address safety issues or guide the use of new technologies, rules must be carefully considered before being finalized because they can have a significant impact on individuals, industries, the economy, and the environment. Figure 4 provides a case study of FAA’s efforts to address a complex, long-standing aviation safety issue by creating a rule to regulate flight duty and rest requirements for flight crew members. Rulemaking involves three stages of agency activity. First, an agency identifies a need for rulemaking.
Second, it initiates the rulemaking process, develops a proposed rule, and publishes it for public comment. After a public comment period, the agency finalizes the rule by considering the comments received and drafting and publishing the final rule. Figure 5 provides an overview of the process as it applies to FAA. A rulemaking issue may be identified internally or externally. For example, FAA staff may find that changes in aviation technology or operations or the emergence of a safety problem warrant rulemaking. Alternatively, the public or the aviation industry may petition the agency to develop a new rule or provide an exemption from existing rules. At the beginning of fiscal year 2001, FAA was responding to 57 petitions for rulemaking and 415 petitions for exemptions while reviewing 84 recommendations by its advisory committee—the Aviation Rulemaking Advisory Committee (ARAC). In addition, the Congress, the President, or the Secretary of the Department of Transportation (DOT) may direct FAA to develop a rule, or the National Transportation Safety Board (NTSB) may issue a safety recommendation. After a rulemaking issue is identified, an agency must consider the issue in light of its resources and other rulemaking issues that may be equally compelling. Some rulemaking issues may require study and analysis before an agency’s management can decide whether to initiate the rulemaking process and devote resources to developing a proposed rule. Once an agency has decided to initiate rulemaking, the basic process for developing and issuing regulations is spelled out in section 553 of the Administrative Procedure Act of 1946 (APA). Most federal agencies, including FAA, use notice and comment rulemaking. Once rulemaking is initiated, agencies generally must develop and publish a proposed rule or “notice of proposed rulemaking” in the Federal Register. 
A public comment period follows, during which interested persons have the opportunity to provide “written data, views, or arguments.” After the comment period ends, the agency finalizes the rule by reviewing the comments, revising the rule as necessary, and publishing the final rule in the Federal Register at least 30 days before it becomes effective. Most rules are later incorporated into the Code of Federal Regulations (CFR). For the remainder of this report, we will use the term “rulemaking” to refer to the notice and comment process by which FAA’s rules are developed and codified in the CFR. Rules vary in importance, complexity, and impact. Under Executive Order 12866, federal agencies and the Office of Management and Budget (OMB) categorize proposed and final rules in terms of their potential impact on the economy and the industry affected. Executive Order 12866 defines a regulatory action as “significant” if it has an annual impact on the economy of $100 million or more; adversely affects the economy in a material way (in terms of productivity, competition, jobs, environment, public health or safety, or state, local, or tribal governments or communities); creates a serious inconsistency or interferes with another agency’s action; materially changes the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or raises novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in the order. Since 1996, significant rulemaking entries have constituted about half of all of FAA’s rulemaking entries in the Unified Agenda, a semiannual report of federal regulatory activities. Figure 6 shows the total number of FAA’s rulemaking entries and the number of significant rulemaking entries listed in the October Unified Agendas from 1995 through 2000. Significant rules often take longer to issue than nonsignificant rules.
They may require extensive regulatory analyses of the potential economic, social, and environmental impacts of one or more alternatives. These analyses may take months to complete and are needed to ensure that the projected economic impact has been correctly quantified and that the costs the rule will impose on the affected industry and individuals are justified. Significant rules typically require more levels of review than nonsignificant rules. Executive Order 12866 requires that OMB review agencies’ proposed and final significant rules before they are published in the Federal Register. Moreover, clearances for proposed and final rules may be required at the departmental level for those agencies that are part of a cabinet-level department. To reduce this burden, the Federal Aviation Reauthorization Act of 1996 grants rulemaking authority directly to the Administrator, except that the Administrator may not issue a proposed or final rule without obtaining the Secretary’s approval if that rule is significant as defined by statute. The Wendell H. Ford Aviation Investment and Reform Act for the 21st Century narrowed the scope of rules that would be considered to be significant, setting the threshold for economic significance at $250,000,000 and eliminating, as criteria, inconsistency or interference with other agencies’ actions and material changes to the budgetary impact of entitlements, grants, user fees, or loan programs or to recipients’ rights and obligations. Nevertheless, agencies that report to the Office of the Secretary of Transportation (OST), including FAA, have also been required by the Secretary to submit for review all rules deemed significant under the executive order as well as rules that OST has indicated are to be considered to be “significant” under supplemental guidelines. These additional criteria increase the number of rules for which agencies within DOT are expected to complete regulatory analyses.
For example, FAA published a significant rule in April 2000 that limited the number of commercial air tours permitted in the Grand Canyon. While the rule was not considered a significant regulatory action under Executive Order 12866, and would not have been significant under the statute, it was considered significant under the Department’s supplemental guidelines because the rulemaking had a potentially substantial economic impact on Native American tribes. Specifically, the rule was expected to have a significantly adverse impact on the Hualapai Tribe’s economic development and self-sufficiency, since the tribe relied on income from air tour operations and tourist dollars brought to the reservation by the air tours. The additional analyses and reviews required for significant rules are incorporated into the basic process that all federal agencies use for rulemaking: developing a proposed rule, releasing the proposed rule for public comment, and developing a final rule. Various offices within FAA conduct the required analyses and reviews of rulemaking documents, as shown in table 1. In the early stages of rulemaking, each rule is the responsibility of a program office with technical expertise in a specific area. This office develops the initial rulemaking documents, as indicated in table 1. Depending on the content of the rule, the program office may be a staff office, like the Office of Chief Counsel, that also has the additional responsibility of reviewing all significant rules. Alternatively, it may be an office with responsibility for a technical area, such as the Office of Civil Aviation Security Policy and Planning. Each of these offices has managers who can become involved in the rulemaking process by reviewing the work of its representatives on a rulemaking team. Generally, FAA’s rulemaking teams consist of representatives from the program office, the Office of Rulemaking, the Office of Aviation Policy and Plans, and the Office of the Chief Counsel.
In addition to significant and nonsignificant rulemaking, the staff in these offices also work on other projects, including airworthiness directives, airspace actions, and responses to petitions and exemptions. The ultimate goal of the federal rulemaking process is to develop and issue a quality rule in a timely and efficient manner. Time is of particular importance when safety is at stake or when the pace of technological development exceeds the pace of rulemaking. Many of the problems federal agencies face in developing and publishing rules are long-standing and similar across agencies, and they have been cited in studies and discussions of the process since at least the 1970s. For example, a Senate study in July 1977 cited deficiencies in decisionmaking, planning, and priority-setting by top management as causes of delay in federal rulemaking. In July 2000, DOT’s Office of the Inspector General (OIG) reviewed the Department’s rulemaking process and found that the Department had taken as long as 12 years to issue significant rules. The OIG attributed the lack of timeliness of the Department’s rulemaking partly to a lack of timely decisionmaking and prioritization. Studies specifically targeting the efficiency of FAA’s rulemaking process over almost 40 years have also identified similar problems. Figure 7 provides a list of key studies on FAA’s rulemaking process. The central findings of the most recent study of FAA’s rulemaking process, published in 1997, echoed the findings of past studies. For this report, we grouped the problems identified by the 1997 study into three areas: management involvement, administration of the rulemaking process, and human capital. In terms of management involvement, FAA’s 1997 study of its rulemaking process found that problems related to shifting priorities, the timing of management involvement, and the willingness of management to delegate authority all caused delays. 
Inconsistent and changing priorities among FAA offices caused false starts, delays in the process, and wasted resources. Inadequate or ill-timed involvement by FAA’s senior management hindered the agency’s ability to make timely decisions. As a result, rule drafters frequently worked without adequate direction or buy-in from policymakers, causing extensive queuing, delays, and rework. The reluctance of FAA’s rulemaking management to delegate authority caused problems in internal coordination and accountability and created extensive layers of review that delayed the rulemaking process. Rulemaking projects were also often delayed because no one was held accountable for keeping projects on schedule. The lack of coordination resulted in “finger-pointing” as to why problems remained unsolved. FAA’s 1997 study identified similar concerns with the timeliness of rulemaking efforts by FAA’s industry advisory committee. For example, the committee had too many projects, some of which were duplicative or overlapping. A lack of coordination and accountability between FAA and the committee also impaired the effectiveness of the advisory committee. In terms of process administration, FAA’s 1997 study found that confusion concerning the roles and responsibilities of rulemaking participants at FAA created difficulties in determining who had responsibility for what actions, led to breakdowns in coordination and communication, and resulted in inadequate supervision. Multiple information systems also hampered coordination and led to inaccurate tracking records and databases, as well as to information that was hard to access (e.g., archives of decisions made). Without reliable records, FAA often could not pinpoint where problems and backlogs occurred. Moreover, even when it did identify weaknesses, it lacked systems with which to evaluate and improve the process. 
In terms of human capital management, the 1997 study found that FAA had not established systems for selecting and training personnel involved in rulemaking. Rulemaking teams at FAA typically did not observe project schedules, which they regarded as unrealistically optimistic. Measures of timeliness were not consistently used to measure and evaluate the performance of rulemaking participants. FAA’s rulemaking process lacked a system for consistently tying incentives and rewards to specific measures of performance. Responding to concerns about the efficiency of FAA’s rulemaking process and in particular the time required for departmental review by OST, the Congress enacted legislation in 1996 designed to speed FAA’s efforts to develop and publish final rules. The Federal Aviation Reauthorization Act of 1996 amended section 106 of title 49 U.S.C. to establish a 16-month time limit for FAA’s finalization of rules after the close of the public comment period and a 45-day requirement for OST’s review of FAA’s significant proposed and final rules (see ch. 2). (The act also established a 24-month time limit for finalization of rules after publication of an advanced notice of proposed rulemaking, a request for information that FAA may issue in developing a proposed rule. Because this notice is not always issued, we did not use it as a measure in our analysis.) In response, FAA reviewed its rulemaking process, established its own suggested time frames for completing steps in the process (see ch. 3), and identified potential improvements to its process in the general areas of management involvement, process administration, and human capital management. These improvements are discussed in chapter 3. The Chairman of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, asked us to review FAA’s rulemaking process to determine whether FAA could improve the efficiency of its rulemaking process. 
Specifically, we addressed three main questions in our review: (1) What are the time frames for FAA’s rulemaking, including the time FAA took to initiate the rulemaking process in response to statutory requirements and safety recommendations and, once begun, to develop and publish significant rules? (2) What were the effects of FAA’s 1998 reforms on its process and on its time frames for completing rulemaking? (3) How effective were FAA’s reform efforts in addressing the factors that affect the pace of the rulemaking process? To determine the time frames for FAA’s rulemaking, we created a database of proposed and final rules that constituted the agency’s significant rulemaking workload from fiscal year 1995 through fiscal year 2000. We focused our analysis on 76 significant rulemaking actions identified by FAA in the semiannual editions of the Unified Agenda or identified in our search of the Federal Register. This consisted of rulemaking actions that had either been published for public comment or were initiated but had not yet been published for public comment. The initiation dates and dates of published actions for the 76 rules are provided in appendix I. These rules constituted most (about 83 percent) of FAA’s significant rule workload and were more likely to be complex and/or the subject of controversy and potential delay. Our database contained data obtained from FAA’s Integrated Rulemaking Management Information System and from our review of proposed and final rules published in the Federal Register. In creating our database, to determine the dates that rulemaking projects were initiated, we used the dates recorded in FAA’s information system. For the dates of the publication of proposed and final rules, we used the dates of publication in the Federal Register. To determine the extent to which FAA’s rulemaking met statutory time frames, we compiled information from our database of rulemaking actions and applied standards established by the Congress in 1996.
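To illustrate the kind of check this analysis involves, the following is a minimal sketch of how compliance with the 16-month statutory time frame could be tested from a rule's comment-close and final-publication dates. The function names and example dates are hypothetical illustrations, not drawn from GAO's actual database or analysis code.

```python
from datetime import date

def months_between(start, end):
    """Whole months elapsed between two dates; a month is not counted
    until its day-of-month anniversary has been reached."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:
        months -= 1
    return months

def met_16_month_limit(comment_close, final_published):
    """16-month statutory time frame from the close of the public
    comment period to publication of the final rule."""
    return months_between(comment_close, final_published) <= 16

# Hypothetical rule: comments closed Jan 15, 1998; final rule published
# Mar 1, 1999 -- 13 full months elapsed, so the limit was met.
print(met_16_month_limit(date(1998, 1, 15), date(1999, 3, 1)))  # True
```

Applying such a check to each of the published rules in the database yields the compliance rates discussed in this chapter.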
To determine the effects of FAA’s 1998 reforms on the agency’s rulemaking process, we reviewed the 1997 report on FAA’s rulemaking process and discussed the 1998 reforms with FAA staff and management from the working team that participated in the study. We discussed rulemaking reforms with rulemaking officials from several other federal regulatory agencies, namely the Animal and Plant Health Inspection Service (APHIS), the Environmental Protection Agency (EPA), the Federal Highway Administration (FHWA), the Federal Motor Carrier Safety Administration (FMCSA), the Food and Drug Administration (FDA), the National Highway Traffic Safety Administration (NHTSA), and the Nuclear Regulatory Commission (NRC), to identify what steps they had taken to improve their rulemaking processes and to discuss their efforts to improve rulemaking. We selected these agencies because they had developed significant rules that were potentially technically complex and have an impact on public safety (e.g., regulation of nuclear power, environmental concerns, and food safety). We also compared FAA’s time frames for responding to public comment and finalizing significant rules with those of other federal regulatory agencies by collecting data from the Federal Register on the time spent processing significant rules by APHIS, EPA, FDA, and NHTSA. To determine the extent to which FAA’s rulemaking met FAA’s suggested time frames for steps in the process before and after the reforms, we compiled information from our database of rulemaking actions and applied it to the time frames suggested in FAA’s rulemaking guidance. We also reviewed the number of significant rules FAA published before and after implementing its reforms as a measure of improvement in the rulemaking process.
To determine the effectiveness of FAA’s reform efforts in addressing the factors that affect the pace of the rulemaking process (management involvement, process administration, and human capital management), we considered case studies of specific rules, as well as the views of rulemaking officials and other stakeholders in the rulemaking process, including representatives of NTSB, OST, and OMB. We also surveyed 134 FAA employees who had served as rulemaking team members on significant rules listed in FAA’s Unified Agendas since the beginning of fiscal year 1994. We chose these employees for our survey because these staff had recent experience and were likely to be familiar with changes in the reformed process. We mailed a survey to rulemaking staff to obtain their views on the status of the rulemaking process and the impact of rulemaking reforms. We received 109 responses (a response rate of about 81 percent). A copy of the survey instrument that summarizes the responses we received is provided in appendix II. We supplemented our survey results with semistructured interviews of rulemaking team members involved in four rulemaking projects. For our semistructured interviews, we asked a series of questions designed to elicit staff members’ views on the results of the reform efforts and suggestions for improving the process. We conducted our work from April 2000 through March 2001 in accordance with generally accepted government auditing standards. The time FAA took to formally initiate rulemaking in response to a congressional mandate or a recommendation by the National Transportation Safety Board (NTSB) varied widely. Between fiscal year 1995 and fiscal year 2000, FAA initiated most rulemaking efforts in response to mandates and safety recommendations within 2 years, but some were initiated many years later. Once FAA formally initiates rulemaking, the time it takes to complete the process depends on many factors, including the complexity of the issue.
FAA finalized and published in the Federal Register 29 significant rules over the 6-year period from fiscal year 1995 through fiscal year 2000. It took a median of about 2 ½ years to proceed from formal initiation through publication of the final rule, ranging from less than 1 year to almost 15 years. Twenty percent of these final rules took 10 years or more to complete. During this same time period, departmental review, one step in the process for both proposed and final rules, took a median time of about 4 months. FAA’s median pace for finalizing a rule after the close of the public comment period, about 15 months, was comparable to that of four other federal agencies. However, FAA met the 16-month statutory requirement for finalizing a rule after the close of the public comment period in less than half of the cases since the legislation was passed and other mandated time limits in only 2 of 7 cases. FAA initiated most rulemaking actions in response to safety recommendations from the NTSB and mandates from the Congress within 2 years. Of the 76 significant rulemaking actions we reviewed, 32 rulemaking actions (or about 42 percent) were the subject of a congressional mandate or recommendation by the NTSB. While congressional mandates may require that FAA take rulemaking actions, NTSB’s recommendations do not. However, FAA is required to respond formally to the recommendation and specify what action is or is not being taken and why. As shown in figure 8, FAA formally initiated about 60 percent of mandated rulemaking actions and about one-third of NTSB’s recommendations within 6 months. However, FAA sometimes took many years to respond to a mandate or recommendation. For example, figure 8 also shows that in one-fourth of the mandated cases and one-third of the recommendations we examined, FAA took more than 5 years to initiate rulemaking.
Figure 9 provides a case study of a rulemaking issue with safety implications—aviation child safety seats—in which more than 7 years passed between NTSB’s recommendation and FAA’s initiation of the rulemaking process. In this case, the delay occurred because of policy-related disagreements between FAA and NTSB. After receiving NTSB’s recommendation to require child safety seats on aircraft, FAA studied the issue. It issued a related technical order and rule but decided not to pursue rulemaking to require child safety seats on aircraft. In part, its decision was based on a study it presented to the Congress that concluded that if child safety seats were required on aircraft, passenger diversion to other transportation modes could cause a net increase in fatalities. FAA eventually changed its policy position and initiated rulemaking after the White House Commission on Aviation Safety recommended that FAA make child-restraint systems mandatory on aircraft. In contrast to the lengthy period of time that sometimes occurs between NTSB’s recommendations and FAA’s initiation of rulemaking, FAA responded within 1 month to an NTSB recommendation in 1999 to require flight data recorders on Boeing 737 aircraft. Figure 10 provides a case study of this rulemaking effort. For significant rules published during the 6-year period from fiscal year 1995 through fiscal year 2000, FAA took a median time of about 2 ½ years to proceed from the formal initiation of the rulemaking process to the publication of the final rule in the Federal Register. This time period ranged from less than 1 year to almost 15 years. Six of the 29 final rules (or 20 percent) took 10 years or more to complete. FAA took a median time of about 20 months to proceed from initiating the process to proposing the rule for public comment. It took a median time of about 15 months to finalize the rule after the close of the public comment period.
The time taken for one step of the rulemaking process that occurs in FAA’s development of both proposed and final rules—departmental review and approval—has been of particular concern to the Congress. In the Federal Aviation Reauthorization Act of 1996, the Congress addressed its concern by establishing a time frame for this step. The act requires the Secretary of DOT to review proposed and final significant rules and respond to FAA, either by approving them or by returning them to FAA with comments, within 45 days after receiving them. While FAA’s information system tracked the date of OST’s approval of some significant rules, it did not track the date of OST’s response to FAA’s transmittals of significant rules when it sent them back to FAA with comments rather than approving them. We were therefore unable to measure the extent to which the Department had met the 45-day requirement set forth in the 1996 act. FAA rulemaking officials said that they did manually track this information for individual rules and planned to incorporate this capability into the next upgrade of the information system.

FAA’s information system did contain the dates that some rules were submitted by FAA to OST and the dates of OST’s final approval. We used these dates to measure the time it took for OST to approve FAA’s significant proposed and final rules from fiscal year 1997, when the legislation went into effect, through fiscal year 2000. Overall, for both proposed and final rules, the median time OST took to approve the rules (including review, comment, and FAA’s response, if any) was 4.1 months (124 days). Measuring proposed and final rules separately, we found that the median time OST took to approve proposed rules was 4.7 months (140 days), while the median time OST took to approve final rules was 2.3 months (69 days). In chapter 4, we discuss the views of departmental and FAA staff on issues that impact the time required for departmental approval.
In a more recent effort to reduce delays related to OST’s review, on April 5, 2000, the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century amended section 106 of title 49, U.S.C., by raising the dollar threshold required for secretarial approval and eliminating several criteria that triggered departmental review of significant rules. The Congress included this language to, among other things, streamline FAA’s rulemaking process by reducing the number of significant rules that had to be submitted for departmental review and approval. Because the legislation preempts DOT’s Order 2100.5 (which defines what rules FAA and other DOT modal administrations are to submit to OST for review, as discussed in ch. 1), FAA is required to submit to OST only those significant rules that meet the criteria defined in the act. At the time of our review, FAA and OST had not yet implemented the provisions of the act. As a result, the number of FAA’s significant rules that met the criteria for OST review had not been reduced.

Although we did not compare the time frame of FAA’s entire rulemaking process to that of other agencies, we did find that the time FAA took to finalize rules after the close of the public comment period was comparable to that of four other federal agencies. We selected four regulatory agencies—APHIS, EPA, FDA, and NHTSA—and compared the time they took to finalize rules from fiscal year 1995 through fiscal year 2000. The results are presented in figure 11. The figure shows that, except for APHIS, which finalized all of its significant rules within 2 years of the close of the public comment period, agencies generally finalized between two-thirds and three-fourths of their significant rules within 24 months of the close of the public comment period. The Federal Aviation Reauthorization Act of 1996 established a 16-month time frame for FAA’s finalization of rules after the close of the public comment period.
From October 1996 through March 2001, FAA met this deadline in 7 of 18 cases by either publishing a final rule in the Federal Register or taking other final action within 16 months of the close of the public comment period. Figure 12 provides a case study of FAA’s rulemaking to prohibit the transportation of discharged or unfilled oxygen generators in aircraft. This effort exceeded the congressional time frame by about 11 months. (See app. III for a complete list of rules subject to the act’s time frames.)

The Congress has also mandated time frames for steps in FAA’s rulemaking on specific issues. The agency did not meet many of these legislated time frames. Specifically, of the 20 congressionally mandated rules that were part of FAA’s workload between fiscal year 1995 and fiscal year 2000, 7 included a time frame for agency action. FAA met the time frame in only 2 cases, both of which called for initiating the rulemaking process by a certain date. Appendix IV provides additional information regarding the current status of the seven rules with congressionally mandated time frames. Figure 13 provides a case study of FAA’s proposed rule to revise aircraft registry procedures to assist drug enforcement, a rulemaking that exceeded a specific legislative mandate by more than 10 years.

To respond to congressional concerns about the timeliness of its rulemaking process and address long-standing problems (see ch. 1), FAA began implementing reform initiatives in January 1998 to improve the process in two of the three central areas we have identified: management involvement and process administration. FAA considered but did not implement most initiatives to improve human capital management. Other agencies have also implemented reforms to address similar types of problems.
FAA’s median times to proceed from initiation of rulemaking through the release of the proposed rule for public comment and to finalize the rule after the close of the public comment period did not improve after FAA implemented its 1998 reforms. Despite FAA’s reforms, the time taken for departmental review and approval of FAA’s significant rules was not reduced. In addition, fewer rules were published while proposed and final rules remained in the rulemaking process for longer periods of time.

FAA began implementing reform initiatives in January 1998 to improve its rulemaking process in two of the three central areas we have identified: management involvement and process administration. FAA considered but did not implement most initiatives to improve human capital management. In announcing the reforms, the FAA Administrator stated:

“With the direct involvement of senior-level management in the rulemaking process, I anticipate a dynamic rulemaking program that more directly meets the safety and technology challenges of a rapidly evolving aviation industry.”

In particular, to address long-standing concerns about delays that occurred during departmental review and approval of its significant rules (see ch. 2), FAA included a representative from OST on its rulemaking steering committee and management council, hoping that improved coordination would reduce the time taken for OST’s review. Table 2 shows the members and duties of FAA’s steering committee and management council.

To formalize the new process and provide consistent and comprehensive guidance to rulemaking staff and management, FAA also developed a new rulemaking manual. Among other things, this manual suggested time frames for steps in the rulemaking process and established a system for the steering committee to follow in prioritizing rulemaking projects, as shown in table 3.
Finally, to maximize the efficient use of employees’ and management’s time, FAA planned to limit reviews to those that added value and to delegate more responsibility for rulemaking decisions to rulemaking teams. Prior to the reforms, both nonsignificant and significant rules went through multiple layers of internal review, a practice that stemmed more from agency protocol than from necessary oversight. For example, a team member’s decision could pass through sequential reviews by his or her immediate managers, office directors, associate administrators, and the Office of the Administrator. FAA proposed eliminating intermediate manager- and director-level review and approval for both nonsignificant and significant rules so that rules could pass directly from teams to associate administrators. In doing so, FAA hoped to use available resources more efficiently, improve team members’ morale, and reduce delays. However, the agency stopped short of eliminating review and approval of significant rules by associate administrators, as was recommended in studies of FAA’s rulemaking in 1988, 1996, and 1997. According to officials from the Office of Rulemaking, the revised process was designed to enable the management council to delegate coordination and approval of nonsignificant rules to managers below the associate administrator level, and the reform was intended to allow teams to act with the full knowledge of their respective associate administrator’s position on important issues.

To address problems in administering the rulemaking process, FAA implemented a series of reforms. These reforms were primarily designed to clarify the extent and limitations of each team member’s roles and responsibilities, to improve the monitoring of rules and the management of rulemaking documents throughout the process, and to ensure ongoing evaluation of the process.
Given the potential complexity of rulemaking issues, inconsistent and unclear lines of responsibility between policy, technical, legal, and economic reviews have historically slowed the rulemaking process. In its reform, FAA documented in the rulemaking manual the roles and responsibilities for each member of the rulemaking team. Specific appendixes in this manual detail the purpose, intent, and limitations of legal and economic reviews.

FAA also created a new system for monitoring rule status and document management, the Integrated Rulemaking Management Information System, which was designed to increase the use of automation in the rulemaking process. According to FAA’s Office of Rulemaking, the new system consolidated the functions of the existing rulemaking tracking and document management systems. It was also designed to provide access to a regulatory guidance library and DOT’s Docket Management System.

Finally, FAA developed rulemaking quality standards and established a continuous improvement team and a quality team to ensure ongoing evaluation of the rulemaking process, monitor the quality of rulemaking documents, and provide recommendations on potential improvements to the process. FAA’s rulemaking quality standards are documented in an appendix to its rulemaking manual, Rulemaking Quality Standard and Guide. The guide offers practical tips, provides techniques, and suggests references and examples for rulemaking writers. FAA’s continuous improvement team—envisioned as a staff-level team—was established to review the evaluations from rulemaking teams in order to provide recommendations to the rulemaking management council on improvements to the process to be incorporated into the rulemaking manual.
Similarly, the role of the rulemaking quality team—envisioned as a management-level team—was to continually monitor and improve the quality of rulemaking documents and provide recommendations to the rulemaking management council on improvements to the process. These two teams were consolidated in 1999 because FAA management concluded that the two functions were difficult to separate and that both would benefit from both staff- and management-level participation.

To promote accountability in the rulemaking process, the working team for the 1997 study recommended a number of human capital management strategies to improve the training, evaluation, and rewarding of rulemaking staff. The team recommended that FAA provide orientation training on the new rulemaking process to all staff involved in rulemaking efforts. It also recommended skills assessment and additional ongoing training in functional skill development, conflict resolution, facilitation and consensus-based decisionmaking, project management, and team leadership. To measure efficiency and reward performance more consistently, the team recommended that FAA establish performance measures in the areas of rule-processing times, rule quality, and rulemaking productivity, as well as systems for performance evaluation. It also recommended that FAA develop a guide to clarify to supervisors the conditions to consider when granting rewards for good performance in rulemaking and to specify possible rewards.

As discussed in chapter 4, FAA considered but did not take steps to formally implement these recommendations related to performance evaluation and rewards. According to the Office of Rulemaking, the staff resources needed to develop and implement these initiatives were not available because rulemaking staff and management were fully occupied with the day-to-day management of the rulemaking process. As a result, FAA relied on existing training and rewards systems.
During our review, we also discussed rulemaking reform with rulemaking officials from several other federal regulatory agencies whose rules involve public safety, to identify what steps they had taken to improve their rulemaking processes. Although an evaluation of the effectiveness of the reforms undertaken by other regulatory agencies was beyond the scope of this review, our discussions showed that the reforms other agencies have proposed or implemented are in some cases similar to those proposed by FAA, and they generally address the same types of problems faced by FAA.

For example, officials at several agencies we talked with considered management involvement a crucial element of an efficient rulemaking process. They used a variety of ways to improve management involvement, including the use of senior management councils and rulemaking coordinators. For example, EPA officials told us they had established a regulatory policy council of senior management, as well as regulatory coordinators across EPA, to manage priorities and resources. Other approaches cited by regulatory agencies we contacted included the use of agency ombudsmen and “senior champions.” Officials at FMCSA said a regulatory ombudsman outside of agency program offices is responsible for moving rules through the process and tracking rules against established milestones. According to FMCSA officials, the ombudsman has the authority to resolve disagreements affecting timely processing; ensure sufficient staffing to meet statutory and internal deadlines; and represent FMCSA in discussions about individual rulemakings with other organizations, including OST, OMB, and other federal agencies. Rulemaking officials at FDA told us they assign a “senior champion” from the agency’s program offices to be responsible for scheduling rulemaking actions and ensuring that timely actions are taken.
FDA officials said that the senior champion concept improves accountability by establishing a single point of responsibility.

To reduce layers of internal review, other federal agencies have taken steps to delegate authority by limiting the amount of sequential review that takes place. For example, FDA officials said they limit a program office’s concurrence procedures and sign-off requirements to include only necessary staff. Rulemaking officials at APHIS said that, in April 1999, they began limiting all staff organizations’ reviews of regulatory packages to 2 weeks. Finally, senior managers at EPA said they provide flexibility to associate and regional administrators to determine what procedures to follow on a rule-by-rule basis, allowing managers more autonomy to tailor procedures to fit different needs.

To better administer the rulemaking process, other federal agencies have developed automated tracking systems to monitor the progress of regulations under development, established evaluation systems for learning about delays in the process, and initiated appropriate actions to overcome internal delays. For example, FDA uses a tracking system to monitor the progress of all regulatory documents, which helps expedite the internal clearance process for regulations under development. FDA officials said that the tracking system had saved FDA time in processing regulations but that they had not estimated the amount of time saved.

In the area of human capital management, other federal agencies cited a number of initiatives for training and performance measurement and evaluation. For example, to provide regulation writers with the training necessary to adequately prepare draft regulations, officials from APHIS encourage rulemaking staff to attend available courses and conferences or advisory committee meetings on the relevant subjects.
The officials also encourage staff to seek technical support in drafting regulations and said that agencies could encourage, through incentives, technical staff to provide technical assistance to regulation writers. In addition, FDA officials suggested using a mentor program to encourage new or existing staff to consult with experienced regulation writers.

Other agencies have established quality standards for their rulemaking to measure the performance of their rulemaking processes. For example, EPA measures the quality of regulatory documents and holds senior managers accountable for ensuring that regulatory actions meet the definition of a quality action. When program offices at EPA are unable to demonstrate that they can develop quality actions, fewer rulemaking actions are assigned to them. According to EPA regulatory officials, this is an incentive for senior managers to develop quality rules. At FMCSA, officials said they had a formal structure of accountability for rulemaking products and dates in performance agreements that extend from the head of the agency down to division directors. These performance agreements specify particular rulemakings and the dates for which staff are held accountable. In addition, FMCSA has a supplemental statement to the performance agreement for every staff member covering rulemaking work products and dates.

The median time FAA took to proceed from formal initiation of rulemaking through publication of the final rule increased from about 30 months in the 3-year period prior to the reform (fiscal years 1995 to 1997) to 38 months in the 3-year period following the reform (fiscal years 1998 to 2000). FAA’s median times for proceeding from initiation through release of the proposed rule for public comment and for proceeding from the close of the public comment period through publication of the final rule both increased by more than 3 months after the reforms.
Specifically, the median time FAA took to proceed from initiation through the release of the proposed rule for public comment increased from 16.5 months in the 3-year period prior to the reforms to 20.4 months in the 3-year period following the reforms. The median time FAA took to finalize the rule after the close of the public comment period increased from 14 months to 16.3 months during the same time periods, as shown in figure 14.

The time OST took to review and approve rules did not improve after FAA reformed its rulemaking process in 1998. Overall, for both proposed and final rules, the median time OST took to approve rules (including review, comment, and FAA’s response, if any) increased by about 5 days, from about 125 days before FAA’s reforms to about 130 days after the reforms. Measuring the proposed and final rules separately, we found that the median time taken for OST’s approval of proposed rules increased by 2 days, while the median time taken for OST’s approval of final rules decreased by 1 day, as shown in figure 15.

Since 1998, FAA has published fewer rules. As shown in figure 14, FAA finalized 18 significant final rules in the 3-year period prior to implementing its reform. In the 3-year period following the reform, FAA finalized only 11 significant final rules. FAA rulemaking officials attributed the change in productivity of significant rules to the agency’s efforts to classify more rulemakings as nonsignificant and, thus, to decrease levels of evaluation and review within FAA, as well as to eliminate review by the Department and OMB. However, the number of nonsignificant rules the agency published from 1995 to 2000 does not support this explanation. For example, FAA published almost 50 nonsignificant proposed and final rules each year in 1995 and 1996, compared with fewer than 30 nonsignificant proposed and final rules each year in 1999 and 2000.
In the years since FAA’s reforms, the median time that significant rulemaking projects, once initiated, had remained in the process without being released for public comment (the proposed rule stage) increased by more than 4 years from the end of fiscal year 1997 to the end of fiscal year 2000. At the same time, the median time that FAA’s unpublished significant final rulemaking projects remained in the process after going through the public comment period also increased, by about 5 months. This is shown in figure 16.

As part of its rulemaking reform in January 1998, FAA established its own time frames for developing and publishing proposed and final rules, as shown in figure 17. Although these time frames were established as a part of FAA’s reforms and were, thus, not an applicable standard for rulemaking efforts prior to the reforms, we compared processing times for the 3-year period preceding FAA’s reforms to processing times for the 3-year period following FAA’s reforms to measure the extent of the change. The percentage of FAA’s proposed rules that proceeded from initiation through release for public comment within FAA’s suggested time frames dropped from 47 percent prior to the reforms to 19 percent after the reforms. The percentage of rules that proceeded from the close of the public comment period to publication as a final rule within FAA’s suggested time frames dropped from 39 percent to 36 percent in the same time periods. Overall, FAA did not meet time frames suggested in its rulemaking guidance for more than half of its proposed and final rules published, as shown in figure 18.

Despite the reforms FAA made to its rulemaking process, many of the problems that have historically impeded the efficiency of rulemaking at FAA continued. Our survey of FAA rulemaking staff showed that less than 20 percent agreed that FAA has made the changes necessary to improve the rulemaking process.
In addition, only about 20 percent of the staff surveyed agreed that the rulemaking process has become more efficient and effective in the last 2 years. (A copy of the survey is provided in app. II.) Our interviews with FAA rulemaking staff and management and our observations of specific rulemaking projects supported the staff’s perception and confirmed that problems in the three central areas of management involvement, the administration of the rulemaking process (process administration), and human capital continued to slow the process.

Problems related to three general areas of management involvement continued to slow the process. Multiple, shifting priorities made it difficult to allocate resources effectively and often disrupted the timing of the rulemaking process. Too often, policy issues were not resolved in a timely manner. Finally, multiple layers of review continued to contribute to delays.

An excessive number of rulemaking priorities continued to impair the efficiency of the process. The number of projects on FAA’s top priority list grew from 35 in February 1998, when FAA established the priority list after implementing its reforms, to 46 in April 2000. At that time, the Associate Administrator for Regulation and Certification said it was critical to shorten that list to a more manageable number. However, the number of top rulemaking priorities continued to increase, to 49 rules by March 2001. According to the Director of the Office of Rulemaking, the maximum number of rulemaking projects that can be effectively managed is about 30 to 35. Rulemaking officials cited external and internal pressures to add rules to the agency’s priority list, noting that the agency’s priorities change due to external influences such as accidents, NTSB recommendations, and congressional actions and mandates.
Internally, they attributed the growth in the number of priority rulemaking projects in part to a lack of commitment to the reformed process by some participants and to what they described as “parochial” views of priorities that resulted in efforts to circumvent the decisions of the rulemaking steering committee. For example, officials said that some program offices circumvented the approval process for adding rulemaking projects to the top priority list by adding projects to their own short-term incentive plans, creating pressure on the steering committee to add the rules to the top priority list. Our survey of rulemaking staff showed that less than one-third (29 percent) of the staff agreed that senior managers supported the steering committee’s decisions regarding priorities.

Not only were too many rules given top priority, but changes in the relative ranking of “top” priorities created problems in managing staffing resources, thereby increasing the processing time for significant rules. Eighty-three percent of the survey respondents agreed that changing priorities in the rulemaking process caused delays in the process. Team members said they were frequently pulled off top-priority rules to work on other projects that their management considered higher priority. They noted that these disruptions created delays.

It is important to note that, while some of the causes of shifting priorities stem from the current rulemaking process and can be changed, others relate to events that FAA cannot control. For example, new safety issues may emerge whenever there is an aviation accident. In addition, rulemaking efforts in progress that are related to issues such as safety threats can be overtaken by new events that then drive the agency’s priorities. While the agency can monitor the effects of outside situations, there is little it can do to control them.
Figure 19 provides an illustration of the impact of events on FAA’s development of a proposed rule on aviation security. Although FAA cannot prevent unexpected events from influencing its rulemaking priorities, FAA’s system for prioritizing rules, established as part of its 1998 reform, lacks explicit criteria to guide rulemaking management in establishing and ranking the priority of projects and assigning available resources. While FAA’s rulemaking manual describes factors that must be considered in prioritizing rulemaking projects—such as the legislative time frames established by the Congress and projects initiated in response to special commissions and the NTSB—the policy does not define how these criteria should be ranked in order of importance.

Identifying and ranking the agency’s top rulemaking priorities is important because FAA’s “A” list includes the Administrator’s priorities as well as other top priority rules sponsored by different offices. The FAA Administrator’s set of rulemaking priorities constitutes about half of the “A” list of projects that are actively worked on. For example, in July 2000, 21 of the agency’s 45 top priority rules were on the Administrator’s list. Yet FAA’s policy for determining rulemaking priorities does not establish the relative importance of the different factors that rulemaking managers must consider in determining the priority of rules within the “A” list of top priority projects. Without clear criteria for determining the rules’ relative ranking and consensus among all offices involved in rulemaking, rulemaking managers may have difficulty in objectively determining, for example, whether legislated time frames take precedence over the Administrator’s priorities or the safety recommendations of the NTSB. Thus, a final ranking is de facto left to the steering committee, which is made up of managers whose priorities are tied to the functions of their individual offices.
One result is that managers from different offices may be more likely to allocate their staff resources on an ad hoc, short-term basis, rather than in a strategic fashion to complete the agency’s highest priority rules. During our review, the Office of Rulemaking suggested that one way of allocating staff resources to ensure that top priority rules are completed is to “dedicate” team members to work on rules until they are completed. This approach was recommended in previous reviews of FAA’s rulemaking process and is used in FAA’s acquisitions of air traffic control equipment. According to the Office of Rulemaking, if this approach were put into place, managers of offices involved in rulemaking activities, including the offices involved in legal, technical, and economic analyses as well as the Office of Rulemaking, would ensure that rulemaking team members worked only on the highest priority rule by dedicating their staffs to that project. FAA successfully used this approach to develop its “commuter rule” in 1995, as shown in figure 20.

The Office of Rulemaking, which also cited changing priorities as an ongoing problem, noted that a continuing lack of realism in prioritizing fostered a sense of overload on the part of rulemaking staff. Our survey of rulemaking staff showed that only about 17 percent agreed that the amount of work was reasonable, allowing team members to produce high-quality products and services. Furthermore, less than one-third (about 30 percent) agreed that management from their office provided sufficient staff and resources to support and promote improvement in the rulemaking process.

On some rulemaking projects, the time FAA management took to resolve complex policy issues added years to the overall time taken to complete the rule.
Based on findings that sequential review and decisionmaking by management late in the process had previously caused problems such as extensive backlogs, rework, and delays, FAA intended in its reform to promote a proactive management approach in which policy decisions would be made early in the process. However, only about 28 percent of the rulemaking staff we surveyed agreed that senior management focused on the prevention of problems rather than on the correction of problems, and only 11 percent of the staff surveyed agreed that sequential processing does not impact the timeliness of the rulemaking process.

Figure 21 provides a case study of a rulemaking effort related to Flight Operational Quality Assurance (FOQA) Programs in which management’s inability to resolve difficult policy issues early in the process contributed significantly to the overall time the rule has been in the process. This rule has taken years to develop because of complex policy issues that at the time of our review still had not been resolved. The policy issues concern the waiving of enforcement actions for violations discovered through FOQA data voluntarily provided by airlines. Agency officials said that the reason this issue has been so difficult to resolve is that the rule could set a precedent that would affect other regulatory agencies’ enforcement efforts, and it therefore has ramifications beyond the Department’s efforts to improve aviation safety. As a result, they considered their rulemaking efforts to be a management success.

Delays in the process caused by multiple layers of review within the agency continued despite FAA’s reform efforts because the reduction in layers of review and the level of employee empowerment envisioned in FAA’s reform did not materialize. In reviewing approved project records, we found that delegation of authority beyond the director’s office had not been achieved in spite of FAA’s plans to do so. (FAA’s plans are detailed in ch. 3.)
For example, in reviewing projects approved since the reform was implemented in January 1998, we found that in five of six projects not only did the directors of team members' offices review and approve team members' decisions, but so did their immediate managers and other managers. In 1997, FAA had concluded that multiple layers of review fostered a lack of accountability in the rulemaking process and that this, in turn, led to milestones that were unrealistic or not observed because final responsibility for the project was unclear. Our survey of rulemaking team members showed that few (4 percent) agreed that layers of review did not interfere with the timely processing of rules. As noted above, only 11 percent agreed that sequential processing does not impact the time required to complete the rulemaking process. Finally, a minority of the respondents agreed they had the ability to establish realistic schedules; 36 percent of the survey respondents agreed that rulemaking teams set realistic schedules, and 19 percent of rulemaking staff agreed that rulemaking teams have sufficient control over the rulemaking process to set realistic milestones. Senior rulemaking managers at FAA said that there was a fine line between employee empowerment and the need for adequate oversight, particularly for rules that were likely to have a significant economic or other impact on the aviation industry. They said that FAA's reform was not intended to eliminate managers from decisionmaking in the rulemaking process or give rulemaking teams total independence, noting that the primary focus of the rulemaking reform effort was to reduce the levels of review for nonsignificant rules. 
According to officials from the Office of Rulemaking, review and approval of certain nonsignificant rules that would harmonize certification requirements for passenger aircraft established by the Joint Aviation Authorities (FAA's European counterpart) have already been delegated below the level of associate administrators. Our discussions with rulemaking staff revealed a variety of reasons why they strongly disagreed that rulemaking teams had enough control over the process to set realistic milestones. Staff noted that internal management decisions to change rulemaking priorities before a project was completed and external reviews by OST caused process delays and were beyond their control. Less than 2 percent of the rulemaking team members agreed that departmental reviews improved the timeliness of the rulemaking process, and less than 15 percent agreed that departmental reviews improved the quality of rulemaking. Figure 22 shows the impact of coordination with OST on FAA's time frames in its rulemaking efforts to revise regulations governing the standards for aircraft repair stations. FAA's internal review process reflects the lack of empowerment as well. Officials from OST and the Office of Rulemaking said that the requirement for numerous layers of review reflects FAA's hierarchical management structure and that the lack of empowerment is embodied in FAA's “grid sheet” for signing off on a proposed rule. The grid sheet can involve 20 different signatures, each indicating a different layer of review. Moreover, they said that these extensive reviews can reduce accountability, noting that, because FAA requires a lot of signatures, rulemaking documents are sometimes passed through the process without FAA officials reading them. Figure 23 illustrates the multiple layers of review that occurred in reaching team concurrence for a proposed rule to require that emergency medical equipment be carried aboard certain passenger aircraft. 
DOT officials said that the time needed for their review and approval of FAA's significant rules can be lengthy if FAA's position is not thoroughly evaluated in terms of departmental policy early in the rulemaking process. DOT officials also cited lack of coordination among FAA's program offices and a lack of empowerment and accountability of rulemaking teams as problems that continued to contribute to delays in the process. They said that departmental review served a valuable role in ensuring that OMB's concerns were adequately addressed and noted, for example, that OST's efforts to coordinate proposed changes were hindered when FAA staff did not have the authority to make the suggested changes. Problems in three central areas related to the administration of the rulemaking process continued to contribute to delays. Significant confusion persisted regarding the roles and responsibilities of rulemaking team members. Information systems lacked complete, accurate, or current data and were inconsistently used. Finally, key elements of a continuous improvement program to identify and correct problems in the process were not in place. Although FAA attempted to address confusion over roles and responsibilities in its reforms, our survey indicated that only about 40 percent of the individuals we surveyed agreed that the “roles and responsibilities are clearly understood.” In addition, less than half (47 percent) of the survey respondents agreed that “roles and responsibilities are clearly established.” The effort by the Office of Rulemaking to define roles and responsibilities for rulemaking participants in its rulemaking manual did not appear to have eliminated confusion. As we indicated in chapter 3, the manual describes the specific roles of legal and economic reviewers. 
According to FAA's guidance, legal reviews should focus on the legal authority for the action proposed, compliance of the proposal with applicable laws, and whether the requirements being imposed are stated with sufficient clarity and justification to be enforced and defended in court, if need be. Economic reviews should estimate the costs and benefits of a proposed or final rulemaking. However, rulemaking management said that legal reviews continued, in some cases, to focus on nonlegal issues and that the scope of economic reviews could potentially be reduced. Senior legal staff involved in the rulemaking process noted that FAA's Chief Counsel is a political appointee whose role as advisor to the Administrator can result in the office's involvement in policy issues, as well as assessments of the quality of analyses conducted to support rules. In September 2000, we reported on the importance of information technology resources for federal agencies to gather and share information, and FAA officials cited the development of a rulemaking management information system as a major element of its rulemaking reforms. According to FAA's Office of Rulemaking, its new automated system consolidated the functions of the existing project-tracking and document-management systems. FAA's tracking of its 24 “A” list significant rules on this system has established data on rulemaking times for specific steps that should help it to monitor the rulemaking process. However, the small number of rules that it consistently tracks and a lack of agencywide implementation have made the system less useful than it could potentially be. Because FAA used the project-tracking portion of the automated system only for its “A” list of priority projects, including 24 significant rules, the system was missing complete and accurate data for many of FAA's remaining significant rulemaking projects. 
FAA rulemaking officials said that they did not have the resources available to complete, correct, or update records of rules that were not being actively worked on from the agency's “A” list of rules. However, since previously initiated rulemaking projects may be shifted onto the “A” list, historical data could be useful for measuring the performance of the rulemaking process over time. FAA rulemaking officials also noted that FAA's rulemaking policy allows teams to select milestones on a case-by-case basis. However, continuing to consider some milestones in the system voluntary may result in a lack of consistent and comparable information on rules. Without complete, accurate, and consistent data on all FAA's rulemaking projects, FAA managers will not be able to use the information system to its fullest capacity—to measure the time elapsed between specific steps in the process to identify where and to what extent delays occur over time. Since the rulemaking process can take years to complete, a longer-term management perspective on the performance of the process is essential. FAA agreed that additional performance and statistical measures should be incorporated into the reporting system to enhance its ability to manage the process and said it had begun making changes to the system. The document management portion of the automated system was limited in its usefulness because it had not been fully implemented across all offices involved in rulemaking. FAA's technology plan called for an “automation champion” to lead the initiative across all of the affected offices. However, according to the Office of Rulemaking, FAA had not designated a champion or developed a plan or goals for an integrated system outside the Office of Rulemaking. As a result, offices outside of the Office of Rulemaking had not fully implemented the new system. 
Although all rulemaking team members received initial training on the new system, only 26 percent of the respondents to our survey agreed that rulemaking team members were provided with training when new technologies and tools were introduced. After the initial training, we found that the system was not effectively implemented outside of the Office of Rulemaking. We reviewed the rulemaking documents in the system for four significant safety-related rules and found that since FAA's reforms in 1998, only 1 of 27 rulemaking staff outside of the Office of Rulemaking on the 4 rulemaking teams had used the automated system. This staff person used the system only twice, on the same day in February 1998. Although officials from the Office of Rulemaking said that the new system was available to all rulemaking staff, only about 23 percent of the survey respondents agreed that their coworkers used FAA's automated capabilities to record rulemaking actions. Individuals from the Office of the Chief Counsel said that they either did not have access to the automated system or that their computers were not capable of using the rulemaking software. While economists in the Office of Policy and Plans with whom we spoke had access to the system, they said that the software was too cumbersome. One economist said that he preferred to develop rulemaking documents that other team members could not alter, in order to maintain the integrity of his work product. Despite explicit efforts in the rulemaking reform to establish systems to evaluate the new process and establish quality standards and guidance, FAA had not fully implemented a continuous improvement or quality review program. The concept of continuous improvement is embodied in quality management principles as well as the Government Performance and Results Act. Continuous improvement efforts are essential for identifying problems in the rulemaking process. 
However, continuous improvement and quality management teams established in FAA's reforms reported problems in attempting to implement review systems. In the fall of 1999, members of the continuous improvement team expressed concerns about the purpose and authority of the team related to management's participation in establishing and supporting the evaluation function. To improve the effectiveness of the system, FAA combined the teams to include both staff and management. Despite the reorganization, little substantive work had been done in the area of process improvement at the time of our review. For example, project teams are to complete a “lessons learned” evaluation after publication of a proposed rule to document practices and procedures that worked well, identify problem areas, and determine opportunities to improve the entire rulemaking process. However, we found that FAA had not documented any such evaluations from the time the process was reformed in January 1998 through fiscal year 1999. The quality and continuous improvement teams were also expected to review sample rulemaking documents during the progress of a selected project, perform periodic quality assurance reviews with selected rulemaking teams, and make recommendations to the management council regarding their findings. However, none of these quality review functions had been accomplished. No evaluations or recommendations had been documented, and the rulemaking manual had not been updated since its publication in December 1998. In discussing the issue at a steering committee meeting, members of the management council attributed the lack of implementation of process improvement efforts to an inadequate level of organizational commitment to the reformed process. Rulemaking officials said that, although the continuous improvement team met on a regular basis to discuss lessons learned, the team had not documented the results of its discussions. 
They said they planned to incorporate the ability to document lessons learned in the next version of the management information system and that they were updating the rulemaking manual. They also said that another team, made up of managers from key offices, has met monthly and sometimes weekly to implement and improve the reformed process. Human capital management initiatives focusing on training, performance measurement and evaluation, and rewards for rulemaking efficiency and quality work were generally not implemented at the time of our review. According to the National Performance Review, which made recommendations regarding federal agencies' rulemaking processes in 1993, proper training, performance measurement, and performance incentives are needed to ensure that the agency officials involved in regulatory activities work as effectively as possible. We reported on the importance of training, performance measures, and performance incentives as key elements of an effective human capital strategy in September 2000. In preparing its 1997 report, FAA's working team recommended a series of human capital management initiatives to help rulemaking participants adjust to the revised process and foster change throughout FAA. These areas included training and skills assessment as well as performance measurement, evaluation, and rewards for rulemaking participants. Although FAA's reform plan called for orientation training on the new rulemaking process and ongoing training in a wide range of areas for all staff involved in rulemaking, rulemaking participants outside the Office of Rulemaking generally received training only on the information system software and an introduction to the new process. A formal program for continuing the training of all rulemaking team members in the areas of functional skill development, conflict resolution, facilitation of and consensus-based decisionmaking, project management, and team-leader training was not implemented. 
About 50 percent of the staff surveyed agreed that they received the training they needed to perform their jobs. Similarly, although FAA's reforms called for the analysis of the skills needed to function in the revised rulemaking process and to establish a mentoring program, the Office of Rulemaking had not conducted a formal analysis, and we found no evidence of such an analysis in the other offices involved in rulemaking. Only the Office of Rulemaking had established a mentoring program. Representatives from the Office of Policy and Plans and the Office of the Chief Counsel said that they had recurring training programs, but they agreed that these programs did not include a formal segment devoted to training to support the rulemaking process, as envisioned by the reform. As we reported in January 2000, a key element of human capital management is the use of performance management systems, including pay and other incentives, to link performance to results. However, in the area of rulemaking, FAA has not consistently done so for rulemaking staff and management. Although FAA's reform effort included recommendations to measure and evaluate team and individual team member performance and to develop an associated rewards system, these human capital management efforts were not implemented on a consistent, agencywide basis. According to rulemaking officials, the staff resources needed to develop and implement these initiatives were not available because rulemaking staff and management were fully occupied with the day-to-day management of the rulemaking process. As noted above, we found evidence that some individual senior managers' performance evaluations included rulemaking projects specific to their program areas. The Government Performance and Results Act of 1993 requires agencies to pursue performance-based management including results-oriented goal setting and performance measurement. 
Although the act gives agencies the impetus for tailoring their human capital systems to their specific missions and objectives, it is up to agencies, like FAA, to follow through on the opportunity. FAA implemented an agencywide effort to link performance with rewards in April 2000. FAA's new core compensation plan provides for pay increases tied to performance and individual contributions. Despite the opportunities provided by the new compensation system, as well as personnel reforms enacted in 1996 to provide FAA with greater flexibility in human capital management, FAA management has not established systems to measure and reward performance in rulemaking based on the quality or timeliness of the process. One measure of rulemaking performance is the time taken to complete steps in the process to develop and issue a rule. To implement rulemaking reforms, senior managers involved in FAA's rulemaking agreed that process milestones were appropriate measures of rulemaking performance. However, results from our survey of rulemaking staff indicate that, while slightly more than one-half (51 percent) agreed that milestones are used to assess the overall performance of teams, team members did not believe that using milestones is an accepted or acceptable means of measuring performance. For example, less than one half of the respondents (about 48 percent) agreed that senior management holds team members accountable when teams do not meet milestones. Only 20 percent agreed that senior management is held accountable when teams do not meet milestones. Less than 20 percent agreed that rulemaking teams have sufficient control over the rulemaking process to set realistic milestones. Only 36 percent of the staff agreed that teams set realistic schedules. Only 8 percent agreed that their offices provide incentives based on the milestones of the rulemaking process. 
Officials in the Office of Rulemaking suggested that one method to provide agencywide incentives for timely rulemaking would be to include a goal for the agency's timely rulemaking in the short-term incentive plans for all senior managers involved in rulemaking. The Office of Rulemaking did not develop a separate rulemaking award system as recommended by the working team. They said rulemaking awards were given based on the preexisting agency award system in which individuals and teams are recognized for outstanding performance on various projects. Although about 70 percent of the staff surveyed agreed that management from their offices provides an environment that “supports my involvement, contributions, and teamwork on the rulemaking team,” few rulemaking staff who responded to our survey agreed that teamwork is rewarded. Specifically, only 28 percent of rulemaking staff agreed with the statement “I am appropriately rewarded for teamwork in the rulemaking process (e.g., performance ratings, cash awards, certificates, or public recognition).” FAA's reforms of its rulemaking process have not fully addressed the long-standing problems that can lead to unnecessary delays because the initiatives have either not been fully implemented or their implementation has been impaired by a lack of management commitment and support. Management's attention to factors critical to achieving desired results—establishing baseline data, priorities, a plan for addressing root causes, and an evaluation system to measure the agency's progress—would facilitate effective implementation of the reform initiatives begun in 1998. FAA's management committees that were established as a part of the reform are a step in the right direction in FAA's efforts to improve management involvement, encourage timely resolution of policy issues, and reduce layers of review. 
Clarifying staff and management's roles in the process and including performance expectations, measures, evaluations, and rewards based on these roles is an essential step in establishing a performance system for rulemaking that emphasizes accountability and results. The system must hold staff and managers accountable for producing timely, quality rules that are needed to improve aviation safety and security. Equally essential are automated information systems to monitor the performance of the individuals and offices in the process and provide information to continually evaluate and improve rulemaking. A performance management system is a key element of an effective human capital strategy that is the best, and perhaps the only, means of obtaining the needed level of commitment and support from FAA management and staff. FAA's new Core Compensation Plan that provides for pay increases tied to performance and individual contributions offers the agency an opportunity to establish new systems for performance measurement, evaluation, and rewards based on timeliness and quality in rulemaking for all offices involved in the process. Finally, the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century provides an as yet unrealized opportunity for FAA to reduce the number of rules that must go through one of the levels of review—the review by the Office of the Secretary of Transportation. Adhering to the provisions of the act could reduce the processing time for selected significant rules that meet the criteria established in the act. 
To improve the efficiency of its rulemaking process and reap the maximum benefits from its rulemaking reform efforts, we recommend that the Secretary of Transportation direct the FAA Administrator to take steps to improve management involvement in the rulemaking process by reducing the number of top-priority projects to a manageable number over time by limiting the number of projects added until existing projects are completed and establishing criteria for ranking the highest priority rules so that the lowest ranked of these priority rules may be tabled if necessary to allow sufficient resources to be applied to emerging, higher-priority projects; providing resources sufficient for rulemaking teams to meet the agency's suggested time frames. One approach, suggested by the Office of Rulemaking, is to prototype the use of dedicated rulemaking teams by assigning staff for the duration of rulemaking projects. This approach would give the teams the ability to focus their efforts and manage projects to completion; holding managers at the director and associate administrator level accountable for making and supporting policy decisions as early as possible in the rulemaking process; empowering team members by giving them the authority to coordinate with their associate administrators so that they can represent the associate administrator's policies, thus eliminating the need for the separate step of associate administrator's review and approval; empowering team members by permitting them to set their own schedules and deadlines; and holding staff and management accountable for ensuring that schedules are realistic. 
In addition, the Secretary of Transportation should direct the FAA Administrator to take steps to improve administration of the rulemaking process by clearly communicating the roles and responsibilities of program and support staff on rulemaking teams and holding team members and their managers accountable for limiting their reviews to established criteria; ensuring that information systems used for rulemaking tracking and coordination contain current, complete, and accurate data on the status of all significant rulemaking projects, including the time elapsed between FAA's transmission of rules to OST and the receipt of OST's comments or approval; and implementing elements of its proposed continuous improvement program and using the resulting information to identify problems in the process and potential solutions. Finally, the Secretary of Transportation should direct the FAA Administrator to take steps to improve human capital management of the rulemaking process by establishing a human capital management strategy for offices involved in rulemaking that includes providing training and support to all participants that promotes use of the agency's automated information system and collaborative, team-based decisionmaking skills, and assessing the skills of rulemaking staff and developing targeted training to better enable them to fulfill their rulemaking roles; and establishing and implementing performance measures based on expectations, evaluations, and incentives that promote timely, quality rules. One approach suggested by the Office of Rulemaking would be to include a goal for the agency's timely rulemaking in the short-term incentive plans for all senior managers involved in rulemaking. In addition, we recommend that the Secretary revise departmental policies to make them consistent with the provisions of the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century and reduce the number of FAA's significant rules subject to its review. 
We provided a draft of this report to the Office of the Secretary of the Department of Transportation and FAA for their review and comment. In subsequent discussions, departmental and FAA officials indicated that they agreed with a number of the draft report's recommendations. For example, they said that FAA will take steps to ensure that the rulemaking tracking system is completely accurate and up-to-date, and includes all appropriate tracking milestones. Furthermore, they agreed that FAA will use its continuous improvement program to identify potential process improvements and will hold senior management accountable for providing policy input as early as possible in the rulemaking process. These officials also indicated that some of the draft report's recommendations will require further consideration, and that a specific response to each of the report's recommendations will be provided in the Department's response to the final report. FAA officials provided technical comments, which we incorporated into the report. The Department also provided written comments on the report, which discussed four main points about the results of the review. The full text of FAA's written comments is provided in appendix V, along with our detailed response to these comments.

The Federal Aviation Administration (FAA) issues regulations to strengthen aviation safety and security and to promote the efficient use of airspace. FAA's rulemaking is a complicated process intended to ensure that all aspects of any regulatory change are fully analyzed before any change goes into effect. During the last 40 years, many reports have documented problems in FAA's rulemaking efforts that have delayed the formulation and finalization of its rules. This report reviews FAA's rulemaking process. GAO reviewed 76 significant rules and found that FAA's rulemaking process varied widely. 
These rules constituted the majority of FAA's workload of significant rules from fiscal year 1995 through fiscal year 2000. GAO found that FAA had begun about 60 percent of the rulemaking projects mandated by Congress and about a third of the rulemaking projects recommended by the National Transportation Safety Board within six months. For one-fourth of the mandates and one-third of the recommendations, however, at least five years passed before FAA began the process. FAA took a median time of two and a half years to proceed from formal initiation of the rulemaking process through publication of the final rule. In 1998, FAA reformed the rulemaking process with the aim of shortening the time frames for finalizing rules. These reforms included establishing a steering committee and a rulemaking management council to improve management involvement in setting priorities and resolving policy issues. GAO found that after the reforms were implemented, the median time for reviewing and finalizing a rule increased. This suggests that the productivity of FAA's rulemaking process for significant rules decreased after FAA's reforms. 
The United States of America, in 1790, was the first modern nation to undertake a comprehensive and periodic count of its population as a regular responsibility of government. But the American decennial census—mandated in the Constitution—was also a component of a new, unprecedented concept of representative government. A decennial census was an extension of colonial habits of recordkeeping born in the traditions of Europe. Old World religious institutions had long kept the vital records of their parishioners, and as early as 1611, the London Company required the residents of Jamestown to keep a record of local christenings, marriages, and deaths. A few years later, the Virginia Assembly passed its own law requiring not only the recordation of these events, but also an annual quantitative report of them. Some colonies sporadically tracked a range of data about their populations, including occupation, gender, and age, during the pre-Revolutionary period. Some of these compilations served a particular purpose, such as determining the number of military-eligible men; others reflected an English tradition of tracking population movements that developed during the great social upheavals of the Elizabethan era. Although the Articles of Confederation mandated a triennial census for taxation purposes, the revolutionary war prevented its implementation; no general census of the colonies as a whole was ever carried out. The Constitution of the United States established a new, philosophically innovative, and technically complex form of government, which in turn established a need for periodic censuses. A principal innovation of the new government was that it would be representative of the population by means of elections. One of the principal complexities the framers faced was how to make the new government representative and, particularly, how to reflect the interests of the American people both as residents of a state and as individuals. 
The Senate, therefore, was designed to represent the interests of the states, and the House of Representatives was designed to represent the interests of individuals. In section 2 of Article 1 of the Constitution, which concerns the composition of the House of Representatives, the framers wrote: “Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers....” If the Members of the House were to be apportioned among the states “according to their respective Numbers,” then the populations of the states had to be determined. The framers, aware that the states had already demonstrated different ideas about how to count their populations for apportioning delegates to the Continental Congress, stipulated the number of representatives for each state until a census could be taken. Furthermore, they established a requirement for the national government to undertake the census and described, in general terms, how it would be accomplished: “The actual Enumeration shall be made within three Years after the first Meeting of the Congress of the United States, and within every subsequent Term of ten Years, in such manner as they shall by Law direct.” The first census was duly taken in 1790 in a manner directed by Congress. The census was to have taken 9 months, but actually took 18 months. President George Washington believed that the count of 3.9 million people was too low. Congress, however, accepted the data and proceeded to apportion the number of representatives in accordance with the census data. Immediately, the debate about how exactly to implement the apportionment began, and this controversy has continued in one form or another for over 200 years. The Constitution did not specify in precise detail how to apportion the Representatives; it only specified that there would be a minimum ratio of 1 representative to every 30,000 of the population. 
It also did not fix the number of seats there should be in the House, and it was silent on how the states were to elect their representatives. So, questions arose: How many people should one Representative represent? When the population of a state divided by 30,000, or whatever divisor was ultimately selected, had a remainder, should that remainder be dropped or rounded? Should the size of the House be fixed first and then apportioned, or should an apportionment fix the size of the House? The questions about apportionment method were more than academic; depending on the answers, states could gain or lose seats in the House of Representatives. The debates over the years about methods of apportionment focused on mathematics, but the crux of the matter was political power. Not only did various apportionment methods affect individual states’ power, but they also influenced the outcome of national political debates and, over time, the balance of power between large and small states, northern and southern states, and the urban East and the agricultural/extractive West. The context of the debate was always the rapidly growing population revealed by each successive census. From the beginning, the U.S. population grew at a phenomenal rate: over 30 percent between decennial censuses until the Civil War and 15-25 percent through 1930. This growth and the simultaneous shifts in geographical concentration of the population resulted in dramatic reapportionments among states. To illustrate, in the 1920s, 91 representatives were apportioned to the Middle Atlantic states of New York, New Jersey and Pennsylvania, but by 1990, that number had declined to 65. In the 1920s, 19 representatives were apportioned for the Pacific states of Washington, Oregon, and California, but by 1990, that number had increased to 66. (Changes in the results of the apportionment of the House of Representatives between 1920 and 1990 are shown in table II.1.) 
In 1911, when Congress fixed the number of representatives at 435—1 per state with the rest apportioned—the census results had even greater significance. Before this decision, a state’s loss of population, and therefore of representation, was mitigated by continuing increases in the total number of representatives. Before the change, states whose population declined relative to other states did not often lose representatives, although their representatives were relatively less powerful as members of a now larger House. But after 1911, a gain of representation for any one state came only with a loss of representation for another state. Congress failed to reapportion following the 1920 Census. The failure was in part the result of a difference of opinion over the method of dividing political power. Throughout the 1920s, Congress debated which of two mathematical models for reapportionment—whose outcomes for distribution of House seats differed—would be used. In 1929, one mathematical method was selected for the reapportionment, but it was not applied until after the 1930 Census. Furthermore, the debate about apportionment methods was not over. In 1941, a different model was chosen called “the method of equal proportions.” It is still in use today. The failure to reapportion in 1920 was also a reflection of regional power dynamics. The results of the 1920 Census revealed a major and continuing shift in population from rural to urban areas, which meant that many representatives elected from rural districts resisted reapportionment. Also, the growing number of immigrants entering this country had some impact on population shifts. Delay followed delay as rural interests tried to come up with mechanisms that would reduce the impact of the population shift. Congressmen from rural areas that would lose seats to more urbanized areas simply blocked passage of reapportionment legislation for 9 years. 
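The method of equal proportions chosen in 1941, also known as the Huntington-Hill method, can be sketched as follows. Every state first receives the single seat the Constitution guarantees; each remaining seat is then awarded, one at a time, to the state with the highest priority value P = population / sqrt(n * (n + 1)), where n is the number of seats that state already holds. The three-state populations and 10-seat house below are hypothetical illustrations, not census figures.

```python
from heapq import heapify, heappop, heappush
from math import sqrt

def equal_proportions(populations, house_size):
    """Apportion seats by the method of equal proportions.

    Every state starts with the one seat the Constitution guarantees.
    Each remaining seat goes to the state with the highest priority
    value P = population / sqrt(n * (n + 1)), where n is the number
    of seats that state already holds.
    """
    seats = {state: 1 for state in populations}
    # Max-heap of priority values (negated, because heapq is a min-heap).
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapify(heap)
    for _ in range(house_size - len(populations)):
        _, state = heappop(heap)          # state with the highest priority value
        seats[state] += 1
        n = seats[state]
        heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return seats

# Hypothetical three-state example with a 10-seat house:
demo = equal_proportions({"A": 21_000, "B": 9_000, "C": 5_000}, house_size=10)
# demo == {"A": 6, "B": 3, "C": 1}
```

Because seats are awarded one at a time from a single priority ranking, the method automatically answers the rounding question that vexed earlier Congresses: no remainders are ever dropped or rounded, and the outcome is the same whether the House size is fixed first or the method is simply run until the seats are exhausted.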
During the congressional debates on Public Law 71-13, which was enacted in 1929, language requiring that districts be composed of contiguous, compact territory and contain the same number of individuals was deleted. Therefore, the reapportionment law that finally passed in 1929 was silent on the subject of rules for how the states were to establish districts to elect their representatives. As a result, some states simply stopped redistricting, despite major changes in the internal distribution of their populations over time from rural to urban to suburban. A process of malapportionment—meaning establishment of districts containing unequal population sizes—continued unchecked for decades. The difference in population size among congressional districts increased, setting the stage for the debate that started in the 1960s and continues today: What are the standards for population size in and shape of congressional districts? The federal courts, which had declined for 40 years to rule on malapportionment cases, revisited the issue in the early 1960s, and ultimately the Supreme Court accepted the argument that “one-man, one-vote” meant that districts had to be of equal size. More recently, issues concerning the ethnic composition of those districts and their physical shape have arisen. Interpretations of the Voting Rights Act of 1965, as amended, led to the creation by 1990 of over 50 congressional districts configured in order to achieve a majority of minority population. For a time, there was no limit to the peculiarity of the shape of a district created in the process of meeting the goals of the act. Now, the courts are reconsidering limits to the eccentricity of the shape of a district. Census data have been essential to the late twentieth century debates on the division of representational power, just as they were to the first apportionment debate in 1791. Block-by-block census data were essential to the 1960s goal of creating districts of equal population size.
The efforts in the 1990s to achieve majority minority districts relied on census data on the ethnicity of the people in specific blocks. The debate over how best to meet the constitutional objective of representative government continues today, with the census still at the core of the debate. The census, according to Article I, section 2, of the Constitution, was also to be used to apportion any direct taxes levied by the federal government. The founding fathers purposefully linked the two. Their thinking was that any incentive for a state to boost population in order to gain additional representation would be offset by the disincentive of raising its tax burden. Direct taxation, however, was enacted only twice—once in 1798 to try to diversify the federal government’s reliance on tariffs and customs duties and once to finance the War of 1812. Both taxes were based on the value of land, houses, and slaves, and both were difficult to assess and collect. While this authority has never been repealed, direct taxation based on a decennial census never became practical. While apportionment is the most widely known use of census data, the data are also used for congressional redistricting, managing federal agencies, and allocating federal funds, and are disseminated to state and local governments, academia, and the private sector as well. Data from a decennial census provide official, uniform information gathered over the decades on this country’s people and their social, demographic, and economic characteristics. They provide the baselines for countless other surveys and are used to develop sampling frames for a number of other federal data collections, such as the Current Population Survey, which is used to measure participation in the labor market and unemployment rates. 
The Constitution does not require that states use the census data to redraw the boundaries of congressional districts following a change in the apportionment of representatives, but most states have always used the census data for this purpose. The general perception of the impartiality of the Bureau and the great cost and administrative effort required to take a census have been strong arguments in favor of using the Bureau’s data. In addition, the ready availability of census data is important because redistricting generally has been required shortly after the census data are made available. In recent years, immediate and detailed population data have become especially critical because some states have court-ordered deadlines to complete redistricting. The decennial census is a cost-effective method of providing baseline and trend data for use by federal agencies and various other census stakeholders, compared to the alternative of multiple data collections by other federal agencies for their own purposes. Decennial census data and data from other Bureau surveys assist federal agencies in managing their unique mission responsibilities. Federal agencies can use Bureau data to assist in evaluating established programs, identifying the particular geographic area of the county where success or problems are occurring, planning corrective actions, and later determining if their corrective actions were effective. For example, Bureau data can assist federal agencies in managing programs under the Government Performance and Results Act. Under this Act, agencies must measure their performance against the goals they have set and report to Congress and the public on how well they are doing. Federal agencies often turn to census data in managing their programs because it is mandated by legislation or regulation. The use of census data is a legal requirement in some federal programs. 
For example, the Department of Housing and Urban Development (HUD) is required to use Census data as the basis for allocating funds for the Community Development Block Grant Program (42 U.S.C. 5302). Without these data, HUD would be unable to meet legislatively mandated requirements because there is no other source of data for the geographic level needed. The distribution of federal revenues in order to meet national socioeconomic objectives started in the late nineteenth century with an appropriation to each state to establish agricultural experiment stations at land grant colleges. In the early decades of this century, Congress gradually expanded its provision of federal assistance. During the mid-1930s, as New Deal programs, including Social Security, expanded to account for roughly one-third of the federal budget, the need for greater detail and higher quality census data increased. To this day, census data remain an important element in the allocation of federal aid to state and local governments, and with billions of dollars at stake, the data are scrutinized intensely for accuracy. For fiscal year 1998, funding estimates indicate states should receive about $170 billion in aid through 20 federal programs that used census data, in whole or in part, to allocate that aid. The largest of these programs is Medicaid, which plans to distribute about $104.4 billion in fiscal year 1998, followed by the Federal Aid Highway Program at $20 billion, and $7.5 billion under Title I grants to local education agencies. Census information is important to the distribution of these federal funds, though generally it is not the sole factor in allocation formulas. The decennial census produces data that states use not only to determine boundaries for congressional districts, but also to establish boundaries for smaller jurisdictional divisions. The census is also a rich source of data to help county and city governments plan for and provide services. 
The data help them answer questions such as the following:
• Will the population of preschoolers in the various school districts warrant building additional elementary schools?
• Are local transit systems reaching the people likely to use public transport?
• Where and when should the next senior citizen facility be built?
Without federal census data, state and local governments would have to undertake their own censuses, a costly alternative given the federal government’s experience and economies of scale. Businesses use the aggregated census data available to them to plan for and provide their services and goods. Census data about population trends help businesses succeed—and provide jobs in the process—by alerting them to opportunities to provide new services and products and to tailor existing ones to demographic changes. Census data also help businesses efficiently target their advertising dollars. A free sample, for example, of a magazine focused on the interests of Hispanic readers can be distributed based on information at the census block level. Companies also use population data to locate new stores where they expect likely consumers to be, as well as to locate production facilities where they can expect to find a suitable labor force. “The actual Enumeration shall be made,” according to the Constitution, under Article 1, section 2, “... in such Manner as they shall by Law direct.” In effect, this has enabled Congress to adjust decennial census procedures, allowing for changes in American society unforeseen in 1787. Congress responded by delegating the census-taking to executive branch agencies while maintaining overall responsibility and periodically enacting legislation affecting census-taking methodology. While changes to census-taking methodologies have occurred, one constant—the focus on identifying households and enumerating people within them—has stayed the same.
Since the 1790 Census, American society has constantly changed, thereby necessitating changes in the methodology of enumeration in the decennial census. Among the most significant societal changes have been:
1. Increased population mobility: Although westward-bound frontier pioneers were difficult to count in the late-eighteenth and nineteenth centuries, the number of mobile Americans today is much greater, increasing problems for census-taking. Short-term renters, “snowbird” retirees, students splitting their residence between home and college, and young urbanites rotating temporary residences are a few of the modern phenomena that have created a population mobility unimagined in 1787. During the period 1990 to 1994, 17 percent of the American population on average changed residences each year.
2. Varied domestic arrangements: Households have always been the major focus of census enumeration. In eighteenth century America, nearly all citizens identified themselves with a household whose members were almost always related by blood, marriage, or through regular employment, and therefore included servants, slaves, apprentices, and resident farmworkers. Most people lived in a family-occupied dwelling that was headed by a male readily able to provide a count and characterize the members of his household. Today, divorce, cohabitation without marriage, and group housing, among other domestic arrangements rarely heard of in 1787, make the determination of whom to count and where to count them increasingly complex. From 1970 to 1990 alone, the number of American households grew 47 percent, while average household size shrank from 3.1 to 2.6 people and nonfamily households grew by 128 percent.
3. People of varied linguistic backgrounds: The heads of households to whom the census questions were posed in the late eighteenth century came overwhelmingly from Western European cultural traditions and spoke a limited number of Western European languages. Today, the U.S. population includes people from a great variety of countries, and language barriers pose significant challenges in taking the census. To deal with this diversity, in 1990 the Bureau had questionnaire guides available in over 32 languages and had enumerators able to speak about 50 languages.
4. Increased concerns about privacy: As a result of changing attitudes toward government in general, concerns that census information will be passed to other government agencies, and fears of further loss of privacy in the computer age, the rate at which the population voluntarily responds to requests for census information has declined. For example, mail response (considered to be the most reliable and cost-efficient means of obtaining census information) declined from 78 percent in 1970 to 65 percent in 1990. (Chapter 3 of this report discusses the Bureau’s efforts to mitigate these privacy concerns.)
For the first census, Congress directed the 17 U.S. marshals and their 650 assistants to undertake the census, and gave them 9 months to do so and report the results by district to the President. For each of the succeeding five censuses, Congress passed a new piece of legislation. These censuses were similar to the first, except that the questions to be asked grew in number with each decade. During this 50-year period, Congress directed that some refinements to census-taking be made: the tallies were passed to the Secretary of State starting in 1800, and enumerators used printed schedules for the first time in 1830. Congress also continued to authorize a small clerical staff in Washington whose function was simply to check for clerical errors in the work and compile the tabulations. In 1850, Congress created a new management structure for administering the decennial census, which was becoming an increasingly complex undertaking as more sophisticated questions were being asked of a growing population spread over a wide geographical area.
Congress created a Census Office and authorized a superintendent of the census at a salary of $2,500. Congress also determined that the 1850 law would govern future censuses should Congress fail to pass authorizing legislation. This was done to avoid the potential for a disruption in the census-taking schedule and possible congressional deadlocks over particular issues. In the last decades of the nineteenth century, Congress began to delegate more responsibility to the Census Office, which moved beyond clerical functions and gained authority to control the field administration of the census and appoint or approve the appointment of supervisors and enumerators. Until this time, the appointment of those staff had been a matter of political patronage. Furthermore, despite the fact that census activities took almost 7 years to complete in the late nineteenth century, the census offices that Congress authorized every 10 years closed when the work of each successive census was done. In 1902, Congress established the Bureau of the Census, under the Department of the Interior, as a permanent agency that, for the first time, would not disband between censuses. The Bureau was transferred to the newly created Department of Commerce and Labor in 1903. By 1913, the Census Bureau was under the authority of the Department of Commerce and had gained its role as the preeminent census, survey, and statistical agency of the United States, which it remains to this day. The Bureau conducts not only the decennial census, as it did in its early history, but also about 200 other censuses and surveys. While legislation passed in 1850 made a new authorization for each decennial census unnecessary, Congress continued to pass legislation every decade for implementation of upcoming censuses. In 1954, title 13 of the U.S.
Code was enacted to establish the basic rules for the taking of future decennial censuses, including the following:
• The census, as required for apportionment, must be completed and reported to the President within 9 months after the census date of April 1;
• the Secretary of Commerce must submit to the committees having legislative jurisdiction over the census, not later than 3 years before the next census, the subjects proposed to be included in the coming census and the types of information to be compiled; and
• the Secretary must submit to the committees having legislative jurisdiction, not later than 2 years before the next census, the planned questions to be included.
Although Congress delegated responsibility in title 13 to the Secretary of Commerce to undertake a decennial census “in such form and content as he may determine,” Congress has maintained authority and responsibility under the Constitution for directing the decennial census. Congress exercises a continuing role in overseeing the conduct of the census through a number of congressional committees, including for authorization, the Senate Governmental Affairs Committee and the Subcommittee on the Census of the House Government Reform and Oversight Committee, and for appropriations, the Commerce, Justice, State, and the Judiciary and Related Agencies subcommittees in the House and Senate. While these committees and subcommittees provide general oversight, Congress enacts legislation from time to time that contains specific additional direction to the Secretary. For example, in 1994, in order to facilitate development of accurate address lists, Congress enacted the Census Address List Improvement Act of 1994 that allowed the Bureau and the U.S. Postal Service to exchange address list information under certain conditions. The Bureau and its predecessor entities have always been responsive to congressional direction, but they have also been influenced by the many users of its statistics.
In the nineteenth century, state governments, scholars, business associations, and reformers were among those who influenced the questions contained in the schedules, and the censuses provided them data that helped them in their various endeavors. Professional statisticians have been and continue to be influential in the Secretary’s determination of the form and content of the questions, as well as in decisions concerning the presentation of data. In the late twentieth century, the influence of various interest groups has had an effect on the census. Advocates for the homeless spurred the Bureau’s efforts in the last several censuses to count people who live in shelters and on the streets. In the 1970s, the Bureau created several advisory committees of experts involved with minority issues. Recently, racial and ethnic groups urged the Office of Management and Budget (OMB) to reconsider the federal standards on race and ethnicity classifications; their efforts resulted in the 1997 changes to those standards, which will allow individuals to choose more than one racial category when completing their census questionnaires. The plans for the tabulating and reporting of these new racial categories by the Bureau continue to be a much debated issue. The Constitution identified who should be counted in the decennial census in Article 1, section 2, with the following language: The count “shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons.” Although the framers were specific about how to count (or not count) Native Americans and slaves, they were not specific about whom to count. Only one important criterion for eligibility was established: “persons” rather than “citizens” were to be counted, meaning citizenship was not to determine who should be counted.
There was little reason to be more specific since the population in the 1780s was relatively homogeneous, stationary, monolingual, and organized in stable household units. In the years since the framing of the Constitution, however, many of those conditions have changed, posing new philosophic and pragmatic issues. Among the changed conditions are the following:
1. Illegal aliens: No one in 1787 was an illegal resident because the first laws controlling immigration were not passed until 1875. The Immigration and Naturalization Service (INS) estimated there were approximately 5 million illegal aliens residing in the United States as of October 1996, with approximately 275,000 to 300,000 illegal aliens arriving yearly.
2. Temporary and seasonal workers: Nearly everyone who made the journey to America to work in the eighteenth century stayed for years. Many were bound to do so by bonds of indenture or slavery, while the broken ties with the distant homeland and the high cost of returning encouraged others to stay. In 1990, the INS reported about 140,000 foreign citizens working temporarily in the United States in a variety of occupations.
3. Homeless people: Paupers without family to assist them and depending therefore on the public benefice were few in number in the 1780s, and they were generally lodged at public expense in a household where they would be counted. In contrast, the Bureau counted over 230,000 homeless persons during the 1990 Census. The number of homeless with no fixed address, however, is a matter of conjecture, with other estimates ranging from 800,000 upward.
4. Foreign visitors and U.S. citizens living abroad: While short-term travel was not unheard of in the late eighteenth century, the number of Americans living abroad and the number of foreign citizens visiting in the United States were insignificant. In contrast, throughout 1990, approximately 16 million foreign citizens visited the United States for business and/or pleasure.
Furthermore, on Census Day in 1990, about 1 million federal civilian and military employees were living and working abroad. (The Bureau’s residency rules generally do not include in the population count either Americans living abroad who are not federal or military employees, or foreign visitors to this country.) The lack of specificity in the Constitution about who should be counted has raised questions over time about the eligibility of certain categories of people. When Congress framed the 14th amendment, which was ratified in 1868 and modified Article 1, section 2, to eliminate the language concerning slaves and indentured servants, it debated whether to change the definition of those to be counted from “persons” to “citizens” or “voters,” but decided to keep the original language. The effect of legislation and court decisions over the past centuries is that the language of Article 1, section 2, is read at its most inclusive. All persons who are resident in the United States on Census Day, whether here legally or illegally, are to be counted. The decennial census has never simply counted heads. Since the earliest days of the republic, Congress has directed the Bureau or its predecessors to gather additional information as it enumerated the population. In the nineteenth century, the trend to greater numbers of questions, which peaked with an encyclopedic number in 1890 on a large variety of issues, was inspired by the curiosity of a self-conscious young nation and by the need to form public policy. In the twentieth century, the census questions have been increasingly shaped by the need to fulfill the data requirements of programs legislated by Congress and to properly allocate the federal funds authorized by those programs. Even before the first census was taken in 1790, Congress considered asking a range of additional questions, including one which would determine individuals’ military eligibility.
After debate, however, Congress authorized enumerators to pose six questions: the name of the head of each family, the number of free white males over 16 and under 16, the number of free white females, the number of other free persons, and the number of slaves. The 1800 and 1810 Censuses made further distinctions among the ages of the free white respondents, and the 1820 Census added distinctions for age and sex of the slave and free black populations and also broke new ground in collecting basic information about people’s occupations. The 1830 Census added a count of the numbers of deaf, dumb, and blind household members, and the 1840 Census added questions on literacy, schooling, and revolutionary war pensioners. This first period of census-taking reflected the concerns of a new nation absorbed in its political experiment and identity. In 1850, the question of what to ask in the census became highly political as the nation debated how to handle the coming crisis between the northern and southern states. The focal point of the debate was what level of detailed information to gather about slaves, but the debate became a debate on the census itself and what was the proper reach of the federal government. At the same time, new questions were asked that gathered information about schools, crime, churches, and pauperism. A growing national awareness about the changing ethnic composition of the American population was reflected in the census questions. A question about unnaturalized foreigners had been posed in the 1820 Census. For the 1850 Census, a question was asked on the householder’s place of birth by the identification of the state, territory, or country where born and the birthplace of parents. 
Immigration, and particularly immigration from southern and eastern Europe, became a critical issue in American politics in the last decades of the nineteenth century and the first decades of the twentieth century, and the answers to census questions became a part of the debate. For the 1910 Census, respondents were asked to identify their mother tongue in a further effort to determine individuals’ ethnic backgrounds. In 1921, such information, gathered over the decades, was used when Congress enacted legislation that ended America’s historic policy of open immigration. The law limited immigration to 500,000 people per year and was to limit the percentage of immigrants from any country to their proportional representation in the 1910 Census. That law’s 1924 successor, the National Origins Act, further cut immigration levels and returned to the 1890 Census as the basis for immigration quotas. Until the 1930 Census, the details of the questions on the form were specified minutely by Congress. In the 1929 law authorizing the 1930 Census, Congress specified areas to be investigated but, for the first time, left the exact questions to the Bureau. Unemployment was one of the areas that Congress directed the Bureau to investigate in the 1930 Census. As the economic crisis of the 1930s wore on, the need for more information with regard to the population’s socioeconomic condition increased as legislators and government officials at the federal and state levels evaluated existing programs and planned new efforts to deal with the Depression. The 1940 Census of Population and Housing included questions on income, internal migration, and Social Security status, as well as more refined questions on unemployment. In addition, Congress authorized a new set of questions about the types of plumbing, heating, and appliances in people’s dwellings. 
It became apparent prior to the 1940 Census that the amount of information the Bureau was required to collect had come to exceed the Bureau’s ability to gather and tabulate it in an accurate and timely manner. As a result, the Bureau developed a new methodology for the 1940 Census and included supplementary questions that were asked of only a portion of the population. The Bureau’s statisticians used the data to extrapolate to the general population. For the 1950 Census, the Bureau moved toward limiting the decennial census’ primary focus to population, demographic, and housing questions. Many questions, such as those concerning unemployment, moved to separate surveys and censuses, often done at more frequent intervals. Today, the Bureau administers about 200 surveys related to various economic and demographic issues. In 1960, the Bureau began to move toward the mail-out/mail-back census questionnaires that we know today in order to eliminate enumerator bias. The nature of the population and housing questions remained relatively constant from 1960 to 1990, with many supplementary questions being asked of only a portion of the households. In 1960, for example, the Bureau asked 7 population and 14 housing questions on a short form questionnaire and posed an additional 28 population and 30 housing questions on the long form questionnaire sent to 25 percent of the households. For the 2000 Census, an effort has been made to reduce the number of questions and hence the burden on respondents. The short form questionnaire is currently designed to have 5 population and housing questions and the long form questionnaire, which the Bureau plans to send to 17 percent of the population, is currently designed with 45 additional questions. The census has collected data on race and ethnicity in a variety of forms for 200 years. 
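The extrapolation step introduced with the 1940 supplementary questions can be illustrated with a minimal sketch of design weighting: a sample total is scaled up by the inverse of the sampling fraction, so that each respondent "stands for" a fixed number of people. This is a simplified illustration, not the Bureau's actual 1940 estimation procedure, and all numbers below are hypothetical.

```python
def weighted_total(sample_total, population_size, sample_size):
    """Scale a count observed in a sample up to a population-level
    estimate using the inverse sampling fraction: each respondent
    represents population_size / sample_size people.
    (A simplified sketch, not the Bureau's actual 1940 procedure.)
    """
    return sample_total * (population_size / sample_size)

# Hypothetical example: in a town of 20,000, a 1,000-person sample
# (a 5-percent sample) reports 40 unemployed persons.
estimate = weighted_total(40, population_size=20_000, sample_size=1_000)
# estimate == 800.0 unemployed town-wide
```

The appeal of the approach, then as now, is cost: asking a question of a fraction of the population yields a usable national estimate at a fraction of the collection and tabulation burden of a full count.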
Since the 1960s, data on race and ethnicity have been used extensively in civil rights monitoring and enforcement, covering such areas as employment, voting rights, housing and mortgage lending, health care services, and educational opportunities. Over the last several years, however, the form of those questions has been a topic of considerable debate within American society. In the mid-1970s, OMB collaborated with other federal agencies to standardize racial and ethnic data collected and published by the federal government. The result was OMB's 1977 Statistical Policy Directive No. 15, which provided for classifications based on four races—American Indian or Alaskan Native, Asian or Pacific Islander, Black, and White—and one ethnicity—Hispanic Origin or Not of Hispanic Origin. These classifications applied to all federal government data collection efforts and were often used by state agencies. The Bureau used these standard classifications too, although in the 1980 and 1990 censuses the questionnaire also provided an "other response" category selection and a place where a respondent could write in another category. In addition, the Bureau's long form or sample form gathers more information on ancestry. During the 1990s, these standards came under increasing criticism from people who believed that the minimal categories set forth in Directive 15 did not reflect the increasing diversity of the American population resulting from growth in both immigration and interracial marriages and who, therefore, urged changes in the standards. Other people, however, feared that changing the categories would decrease the number of officially designated members of some racial and ethnic groups, which might decrease the distribution of federal dollars devoted to the programs designed to benefit those groups.
An interagency group was convened in March 1994 to consider proposed changes to the names of the groups, as well as several suggested additions to the categories for race and ethnicity. Another suggested change the group considered was the addition of a "multiracial" category. On October 30, 1997, OMB issued revisions to the standards of Directive 15. The revised standards, which are to be used by the Bureau for the 2000 Census, have a minimum of five categories: American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White. Two categories remain for ethnicity: Hispanic or Latino and Not Hispanic or Latino. OMB also directed that forms, including the 2000 Census questionnaire, must tell respondents to "select one or more" categories to identify themselves. By choosing multiple categories, respondents can indicate a multiracial identity. (For information on the percentage of population by race and ethnicity, see table II.2.) The United States has always based its censuses on an enumeration of persons residing in households, as reported by one of the household's members. This self-enumeration method reflects an American commitment to a minimally intrusive government and respect for individual privacy. In contrast to this method, China requires its people to report to local government offices to register their existence, and Norway and Denmark consolidate the records of various government agencies to determine a population count. In December 1997, the government of Turkey conducted its latest quinquennial census, in which the entire population was counted manually in one day over a 14-hour period. Citizens were required to stay home and be counted under threat of punishment if found in public without special permission. Starting in 2000, the Turkish government plans to change to more modern statistical procedures. For censuses prior to 1960, enumerators went door-to-door posing census questions and recording the information.
While there was no standardized recording process before 1830, thereafter enumerators used a variety of standardized census forms to record the respondents' answers. With advances in technology, the compilation of information for each new census became more sophisticated. After some testing in the 1960 Census, the Bureau began in earnest during the 1970 Census to move away from the door-to-door census. Instead, it began mailing census questionnaires to households to be filled out and returned. When households failed to mail back the census questionnaires, enumerators followed up with telephone calls and door-to-door visits. As a last resort, enumerators solicited census data from knowledgeable people, such as an addressee's building superintendent, letter carrier, or neighbors. These last resort data ranged in detail from just the number of people in the household to all the information requested on the census questionnaire. It should be noted that nonresponse follow-up activities have become more and more challenging as the public becomes less responsive to census questionnaires. It should also be noted that even though this census-taking methodology has existed for just the last three censuses, it is generally referred to as a traditional census. In addition to the nonresponse follow-up, three other particularly challenging activities in conducting the last three censuses were (1) identifying or obtaining correct addresses for households, (2) enumerating people in nontraditional housing, and (3) encouraging public participation. Finding all households and being able to geographically pinpoint their locations are important parts of the decennial census. It is the persons residing in those households who make up the population counts of the United States, and it is the locations of households that provide the population counts for smaller geographic areas, such as states, congressional districts, counties, cities, and towns.
In 1970, the Bureau changed its primary census-taking methodology from door-to-door enumeration to mail-out/mail-back. With this change, the association of mailing addresses with households' locations became more important. The Bureau's strategy of pinpointing the physical locations of households, or geocoding, continues to be important for 2000 for such procedures as nonresponse follow-up. For prior censuses, the Bureau constructed a new address list from scratch. For urban areas, the Bureau started with address lists purchased from commercial vendors; for suburban and rural areas, Bureau employees made a physical reconnaissance. Many aspects of American life make accurate identification of housing units difficult, including rapid suburban and rural housing construction, urban demolition and conversion, and the mobility of some housing units, such as mobile homes and recreational vehicles. Locating the housing units of such diverse types in a country with an anticipated 118 million households in 2000 will be a labor-intensive and expensive task. For the purpose of the census, every housing unit address is geocoded to a census block whose size varies but which generally contains 30 to 85 people. Through 1980, the Bureau relied on maps that had hand-plotted census block boundaries and had new streets and features drawn in by temporary employees or enumerators. The resulting maps could be rough and hard to read. By the 1990 Census, a new computer-generated mapping system, called the Topographically Integrated Geographic Encoding and Referencing System (TIGER), was in place. TIGER is designed to locate every housing unit on 1 of the 7 million TIGER maps representing each census block. The TIGER maps can be easy to update and can be printed off the database by many users other than the Bureau. In the past, the Bureau has not used U.S. Postal Service address lists to develop its own list for several reasons.
There was concern about protecting individuals’ privacy, and the Postal Service was prohibited under title 39 of the U.S. Code from sharing its lists. In addition, the Postal Service lists may not conform to the Bureau’s specialized needs. For example, the Postal Service’s addresses are for mail delivery points and may not differentiate between more than one household at an address, whereas the Bureau needs this household differentiation at all addresses. Furthermore, Postal Service post office boxes or RFD addresses (which may not indicate the actual location of a residence) cannot be used by the Bureau because questionnaires must be delivered to actual household addresses in the event follow-up becomes necessary. Nonetheless, Congress and the Bureau have recognized that cooperation with the Postal Service can alleviate some of the cost and labor burden in preparing for a census. In 1994, Congress passed the Census Address List Improvement Act, which allows the Postal Service to share information with the Bureau. The Postal Service now notifies the Census Bureau of new and newly-obsolete addresses. The Census Bureau also provides the local governments with a list of addresses in particular locales so that they can point out discrepancies with their own information. To protect privacy, however, the act specifies that only officials designated as census liaisons can handle the Bureau’s copy of their jurisdiction’s address list, which does not contain names of residents at the addresses, and that the liaisons are prohibited from disclosing address list information or using it for local purposes, such as identification of illegal housing. For the 2000 Census, the Bureau is planning to rely on a Master Address File (MAF), which is to be developed, in part, from the Bureau’s 1990 Census address list and the most recent Postal Service address list (referred to as the delivery sequence files). 
Under a reengineering plan approved in September 1997, the Bureau also plans to conduct a 100-percent canvass of all census blocks in early 1999 and will request the Postal Service to validate the city-style addresses prior to the delivery of 2000 Census questionnaires. For the 2000 Census, the Bureau is not going to rely on the Postal Service to deliver questionnaires to non-city style addresses as it did in 1990. Instead, the Bureau is planning for enumerators to deliver the questionnaires and ask that they be mailed back. Furthermore, the Bureau has no plans to purchase addresses from commercial vendors as it did in the prior three censuses. Vendors’ lists were found to be less accurate in low-income areas, which are not a high priority for companies selling goods and services. Because the Bureau’s basic data collection method revolves around households, counting people who do not live in traditional households can be especially difficult. Such people live in group quarters, such as shelters for battered women and the homeless, nursing homes, college dormitories, migrant worker camps, and military installations, as well as in remote areas. It takes special efforts to count these individuals. In the 1990 Census, the Bureau tried a Street and Shelter Night program to count the homeless wherever they could be found on a particular night. In 2000, the Bureau will focus its efforts to count the homeless on the places where many of them come for services, such as shelters and soup kitchens, as well as targeted outdoor locations. The emphasis will shift from finding the homeless on street corners to identifying them through the organizations that assist them. Other nontraditional procedures include cooperation between the Bureau and the Department of Defense and the U.S. Coast Guard to count individuals on military installations. 
Another special operation will count highly transient individuals living at recreational vehicle parks, commercial or public campgrounds, and marinas. Remote areas of Alaska will be enumerated in mid-February, a time when the difficult travel to these areas by dogsled and snowmobile is somewhat easier, rather than on April 1. Another way that the Bureau plans to count individuals in nontraditional households is by making census questionnaires available at public places, such as post offices and community centers. In this way, people who did not receive a mailed questionnaire will have a greater chance to be counted. This new approach does introduce a higher risk, which the Bureau continues to assess, of multiple responses for a given household or person. "Unduplication" formerly required a massive clerical operation, but now the Bureau expects that advances in computer storage, retrieval, and matching, along with image capture and recognition, will give the Bureau a much greater ability to eliminate duplicative responses. Several lawsuits alleging undercounts of the homeless were filed against the Bureau following the 1990 Census. Despite the efforts planned for the 2000 Census, the count of the homeless and other people living in nontraditional households is likely to be less accurate than for those living in housing units that can be plotted on a TIGER map. Issues of possible undercounts of people living in nontraditional households will likely surface again in the 2000 Census. In 1970, 78 percent of the households that received a mailed questionnaire filled it out and returned it; in 1990, that percentage dropped to 65 percent. Based on response rates for other surveys in the interim, the mail return rate for the 2000 Census could be even lower. This decline in the mail response rate poses not only an enumeration challenge to the Bureau, but also a major financial problem.
The cost of eliciting responses from the 34 million households that failed to return their questionnaires in 1990 was $730 million. Nonresponse follow-up was one of the most costly operations of the 1990 Census. Encouraging voluntary public participation, therefore, is a major objective of the Bureau. Lack of awareness of the census was not a major problem in the 1990 census. Apathy, concerns over loss of privacy, and fears that census information might be shared with other government agencies, however, were major impediments to achieving high rates of returned questionnaires. To encourage the public to mail back the questionnaires, the Bureau spent approximately $75 million on promotion and outreach in 1990, and received pro bono promotional services valued at $65 million from the Advertising Council—a nonprofit organization that administers public service advertising campaigns. The Bureau reached the public through the media and through coordinated efforts with local and state governments, national and community organizations, and business and religious entities. However, because the Bureau had little control over when or where the Advertising Council disseminated the Bureau’s message, it has decided to use a paid advertising campaign in 2000 to complement its continuing efforts with its organizational partners. The Bureau estimates the cost of all outreach and promotion activities will be about $230 million for the 2000 Census. In order to improve the mail response rate in 2000, the Bureau is planning to use a new, multiple mail contact strategy. The Bureau plans to increase the number of its mail contacts with households by sending out a letter notifying households of the coming questionnaire, an initial questionnaire, a thank you or reminder card, and possibly a replacement questionnaire. Both the initial and any possible replacement questionnaires will be barcoded to minimize counting duplicate submissions. 
In areas lacking city-style addresses, either the Bureau or the Postal Service will implement segments of this strategy. The multiple mail contact strategy was used in the 1995 Test Census and showed a potential for increasing the mail response rate. Multiple mail contact will also be tested during the 1998 dress rehearsal. Language barriers can be an obstacle to gathering a full count of the population. During the 1970 Census, despite the fact that 9.2 million U.S. residents spoke Spanish in their homes, the census questionnaire was not printed in Spanish. Since then, the Bureau has tried to remove that obstacle by printing the questionnaires in both English and Spanish, hiring enumerators with foreign language skills, and providing toll-free telephone assistance in languages other than English. In 1990, census questionnaire guides were available in 32 languages. For 2000, the Bureau is researching the use of questionnaires in additional languages. Certain racial and ethnic minorities have long been undercounted in the census. Language barriers, fears of deportation, and a greater tendency to live in nontraditional households are factors that have led to this undercount. In the 1970s and 1980s, the Bureau established advisory committees on the Hispanic, African-American, Native American, and Asian and Pacific Islander populations to help the Bureau find ways to improve its count of these groups. Those advisory committees will continue to function for the 2000 Census, but the obstacles to increasing minority participation in the census have not been eliminated. There are categories of people who have incentives to avoid participating in the census. Individuals who are in the U.S. without the proper documentation or who otherwise have reason to fear various law enforcement or regulatory government agencies are unlikely to be convinced to be counted. The Bureau plans to use two new sampling procedures in the 2000 Census. 
One is designed to reduce the time required for and expense of following up on the projected 40 million housing units that may not respond in 2000 to the questionnaires. The other, referred to as Integrated Coverage Measurement (ICM), is designed to adjust the population counts obtained from census questionnaires and nonresponse follow-up procedures to eliminate the endemic differential undercount. As in the previous three censuses, the Bureau plans to encourage households to mail back the questionnaires that have been mailed to them or left at their homes. Four weeks after Census Day, the Bureau plans to implement a procedure known as nonresponse follow-up (NRFU) to collect information from households that have not returned their forms. The 2000 procedure departs from previous censuses in that it incorporates sampling to select the housing units that the Bureau will contact for NRFU. A sample of housing units is to be selected in each census tract, sorted by geography and form type (long versus short form) to make sure that the sample is distributed evenly across nonresponding housing units in each tract. Each sample is to be selected immediately after the cutoff date for mail returns, and households are to be selected in sufficient numbers to ensure that the number of housing units in the sample, when added to the households that have voluntarily returned their forms, will total at least 90 percent of households in the tract. Data for households not in the sample are to be imputed by a systematic procedure that relies on data collected from geographically contiguous households. To illustrate, in a tract where 70 percent of the households responded to the census questionnaire, the Bureau would draw a two-thirds sample (to reach 90 percent) of the remaining households. The Bureau would then use the results of this follow-up enumeration to impute the characteristics of the households not selected for the sample. There are to be several exceptions to this procedure.
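The tract-level arithmetic in the illustration above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the simple proportional rule are inferred from the description of the 90-percent target, not taken from any actual Bureau software.

```python
def nrfu_sample_fraction(response_rate, target=0.90):
    """Fraction of nonresponding households to sample so that the
    voluntary responders plus the sampled nonresponders together
    reach the target share of households in a tract.

    Illustrative sketch of the tract-level rule described in the
    text, not the Bureau's actual procedure.
    """
    if response_rate >= target:
        return 0.0  # tract already meets the target; no sampling needed
    nonresponse_share = 1.0 - response_rate
    return (target - response_rate) / nonresponse_share

# In a tract where 70 percent of households responded, two-thirds of
# the remaining 30 percent must be followed up: 0.7 + (2/3) * 0.3 = 0.9.
fraction = nrfu_sample_fraction(0.70)
```

At a 70-percent response rate the function returns the two-thirds sampling fraction used in the report's example.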
First, all housing units in blocks that have been selected for the Integrated Coverage Measurement survey are to be contacted (100-percent nonresponse follow-up). Second, nonresponse follow-up is not to be conducted in rural households that are listed by enumerators. Data from these households are to be collected by enumerators listing the housing units. Third, late data submitted voluntarily by a household are not to be thrown away and are to be used in preference to data either collected by an enumerator or imputed by NRFU, if the questionnaire is received before the completion of NRFU data collection activities. The purpose of ICM is to adjust for errors that occur in census-taking. (Errors in past censuses are discussed below under Accuracy of Past Censuses.) In general, ICM is a statistical procedure that would be used in an effort to improve the accuracy of the original data collected by the census by reconciling that data with data obtained from an independent sample of 750,000 households. The reconciliation process, referred to as Dual System Estimation, applies probability theory to the ICM and the census figures to generate a third, better estimate of the true population. ICM would be conducted after basic data collection, including nonresponse follow-up, had ended and would estimate the extent to which people were correctly counted, missed, or included by error in the census. Since ICM would be the last step in the census process and its results would be an integral part of the final census numbers, the Bureau plans to release only one set of official census numbers. In order to accomplish this "one-number census," the Bureau is planning for both nonresponse follow-up and ICM to be completed quickly so that it can announce results by December 31, in time to meet the deadline for reporting census data for apportionment purposes.
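Dual System Estimation rests on capture-recapture logic: two independent attempts to count the same population, with the overlap between them indicating how complete each attempt was. A minimal sketch of the underlying Lincoln-Petersen estimator follows, with hypothetical counts; the Bureau's actual procedure applies the estimator within post strata and includes further corrections for erroneous enumerations.

```python
def dual_system_estimate(census_count, survey_count, matched):
    """Lincoln-Petersen capture-recapture estimate of the true
    population, given the number of people counted by the census,
    the number counted by an independent survey of the same area,
    and the number appearing in both lists.

    Simplified sketch of the idea behind Dual System Estimation;
    the Bureau's procedure is more elaborate.
    """
    if matched == 0:
        raise ValueError("estimator is undefined with no matched people")
    return census_count * survey_count / matched

# Hypothetical block: the census counted 950 people, the independent
# survey found 900, and 870 people appear in both lists, so the
# estimated true population is 950 * 900 / 870, about 983.
estimate = dual_system_estimate(950, 900, 870)
```

The intuition is that the share of survey people also found by the census (870 of 900) estimates the census's coverage rate, and dividing the census count by that rate yields the population estimate.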
The use of laptop computers during the post-enumeration interviews is planned to speed the process of reconciling differences between the census and ICM data. The cost of ICM, including the laptop computers, is projected to be $325 million. Article I, section 2, of the Constitution refers to an “actual Enumeration” of the population for the census. It also vests Congress with the authority to conduct censuses “in such a Manner as they shall by Law Direct.” Congress, in turn, has delegated this authority to the Department of Commerce through title 13 of the U.S. Code. The question now being debated is whether the latitude allowed the Secretary of Commerce includes the use of the statistical methods proposed for 2000, provided the Secretary determines that using them would improve the census accuracy, or whether the requirement for an “actual Enumeration” limits that discretion. The issue also has a similar statutory incarnation: Title 13 of the U.S. Code states that the Secretary of Commerce is to undertake a decennial census in such form and context as he may determine, including the use of sampling procedures, yet it excludes authority to use sampling in the determination of population for purposes of apportionment of representatives in Congress. The question of whether sampling is statutorily and constitutionally permissible in determining the decennial census count can only be definitively resolved by the Supreme Court. The Supreme Court has not yet considered the specific issue of whether the use of sampling violates the Constitution and, in the course of considering past challenges to the conduct of the census, has specifically stated that its rulings were not to be construed as either prohibiting or allowing the methods. Sampling through the use of the long form questionnaire to obtain demographic information has become an unremarkable part of late twentieth century census-taking in America. 
However, the possible use of sampling and statistical estimation to adjust the 1990 census population count raised fundamental constitutional and statutory issues that continue to be debated today. The resolution of these issues is now essential to the completion of planning for the 2000 Census. Ever since George Washington questioned the results of the first census in 1791, the accuracy of any given census has been in question. The questions have always been legitimate: The census has never counted 100 percent of those it should, in part because American sensibilities would probably not tolerate more foolproof census-taking methods, such as requiring residents to register with a central governmental authority. In addition, some percentage of the populace has always chosen to evade census-takers out of fear. Others have gone, and will continue to go, uncounted because there is an incongruence between the Bureau’s primary means of locating individuals and particular individuals’ circumstances. For example, in the nineteenth century, isolated homesteaders were difficult for enumerators to locate and count. Today, young urban males are especially likely to be missed in an enumeration process based on associating people at fixed household addresses. Until the 1940s, there were no means to answer the question of how inaccurate a particular census had been, or at least no means less prone to inaccuracy than the census itself. The Bureau began to evaluate census coverage in the 1940s, at first based on comparisons of birth and death certificates and other administrative data—a procedure known as demographic analysis. The Bureau also began to use statistical methods based on sampling, a method that involves using a representative part of a population to convey information about a whole population. Since 1940, the Bureau has quantified the amount by which any census undercounts the population. (See table II.3 for net undercount estimates.) 
Measures of the total undercount have been possible since 1940 with demographic analysis, but detailed measures of the differences among undercounts of particular ethnic, racial, or other groups have only become available since 1980. The statistics reveal that some subgroups of the population are counted less completely than others. The availability of the data, and the fact that not only representation but also allocation of federal resources is at stake, have made the composition of the undercount a sensitive and widespread concern. In the 1940 census, the Bureau instituted its first effort to gain accurate information through sampling. The Bureau, responding to pressure to add a multitude of questions on unemployment, housing, and income, among others, developed a set of supplementary questions that were asked of only 5 percent of the population. Bureau statisticians, applying newly developed statistical methods, used those answers to extrapolate to the general population. This statistical method continues in use today, although the percentage of households receiving what is now known as the long form questionnaire is to be 17 percent. The Bureau has evaluated the magnitude and characteristics of census errors and undercounts for decades, but it has never used the findings of these evaluations to actually correct coverage errors. In 1990, a survey called the Post Enumeration Survey (PES) was used to determine the error in the 1990 census. After the census was taken, PES enumerators interviewed a sample of 5,000 census block clusters containing 150,000 households, and matched by name the people counted in the PES with those counted in the census. The extent to which housing units and people were correctly enumerated, missed, or counted in error was used to estimate error for the entire census. Rates of error were then determined for 1,392 categories of people, or post strata, in the population and applied to every person counted in the census.
The post strata were based on such characteristics as age, sex, race, ethnicity, location, and status as renter versus owner of housing. Thus, for example, if Asian and Pacific Islander females between the ages of 18 and 29 were found to be undercounted by 1 percent, an adjusted census would have counted each person in that post stratum as 1.01 persons. Matching had been tried in the post-enumeration effort of 1980, but the computer technology was not sufficiently sophisticated to base an adjustment on the effort. The quality of the data improved in 1990, but the Secretary of Commerce determined that the evidence to support an adjustment was inconclusive and decided not to adjust the 1990 census results. The decision whether to adjust the census with the results of PES was complicated by the fact that the 1990 census figures had already been released when the PES figures became available in the spring of 1991. The Secretary of Commerce expressed concern that having two sets of numbers could create confusion and might allow political considerations to play a part in choosing between sets of numbers when the outcome of the choices, such as differences in apportionment of seats in Congress, can be known in advance of a decision. Title 13 of the U.S. Code prohibits the Bureau and its employees from releasing or allowing anyone other than Department of Commerce employees to examine individual census records. Penalties of up to $5,000 and 5 years in prison for violating the provisions of the title apply. Despite the Bureau’s strict policies, stringent penalties, and its modern record of conscientious defense of the confidentiality of its records against a number of agencies and groups that have sought to obtain certain records, some portion of the population fails to respond to the census, or responds reluctantly, out of fear that their personal information will find its way into the public domain. From 1790 through 1840, the censuses were entirely public. 
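The per-person weighting in the example above can be sketched as follows, using a hypothetical stratum size and following the report's convention that a 1-percent undercount yields a weight of 1.01 per person.

```python
def adjustment_factor(undercount_rate):
    """Weight applied to each census record in a post stratum,
    following the report's example: a 1-percent undercount means
    each person in the stratum counts as 1.01 persons.
    """
    return 1.0 + undercount_rate

def adjusted_count(census_count, undercount_rate):
    """Adjusted population for a post stratum (hypothetical sketch)."""
    return census_count * adjustment_factor(undercount_rate)

# A hypothetical post stratum of 200,000 people undercounted by
# 1 percent would be adjusted to about 202,000.
adjusted = adjusted_count(200_000, 0.01)
```

Summing the adjusted counts across all 1,392 post strata would yield the adjusted census total.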
In fact, during this period, the census results by household were posted “in two of the most important places” in the enumeration districts by Congress’ express direction. The purpose of the posting was to allow omissions and errors to be caught by districts’ residents. After the 1840 Census, census results were no longer publicly posted, but there was no law formally safeguarding the confidentiality of the information. Bureau policy, however, as enunciated by the Secretary of the Interior in 1850, was that the returns were to be “exclusively for the use of the government, and not to be used in any way to the gratification of curiosity, the exposure of any man’s business or pursuits, or for the private emolument of the marshals or assistants.” However, because the originals of the census were given to local officials, the security of the returns could not always be ensured. The 1880 Census Act included major changes with regard to privacy. Enumerators were required to swear an oath not to disclose any information to anyone except their supervisors, and census returns were no longer given to local officials but were filed instead with the Department of the Interior. Business information was protected, but information related to individuals was not. That information was available at the discretion of the Director of the Census for a fee. In the early 1900s, the Bureau focused on a different threat to confidentiality, which was the potential that businesses might, by analyzing aggregate pieces of information provided at the local level, deduce the identity of their competitors and information about them. The 1910 Census Act prohibited the Bureau from publishing data from which a business might be identified. 
The discretion given the Director of the Census to release information related to individuals allowed Civil War veterans to obtain information that helped them prove their age and status for pension purposes at a time when census records might have been the best or only source of official information. Men of the World War I era received information from the Bureau to prove that they were too young to be eligible for the draft. Exercising the same discretion, the Director agreed in 1917 to supply federal officials with the names and ages of individuals potentially eligible for the draft, and in 1921, the Director approved the provision of information to private institutions promoting literacy that wanted to use Bureau records to identify illiterate people in the nation. Later in the 1920s, the Bureau, following the guidance of the Justice Department, began to narrow the circumstances under which information could be released. In 1930, it denied access to a federal agency called the Women's Bureau, which wanted the names, addresses, and occupations of some women. In 1942, the Bureau turned down the War Department's request for the names and addresses of people of Japanese descent living in the West—although the Bureau did identify geographic concentrations of people of Japanese descent. The new practice of thoroughly restricting access by private or public entities to census records was codified in title 13 of the U.S. Code in 1954. The Supreme Court, citing title 13, ruled in 1982 that the Bureau could not even release its address list without names to the City of New York so that city officials could compare their lists with the Bureau's. Subsequently, in 1994, Congress passed the Census Address List Improvement Act, which allows the Bureau to share address list information with local governments as part of its decennial address list development procedures. Title 13 assures complete confidentiality for all records in the Bureau's custody.
Once the records are passed to the custody of the National Archives, the Archives can release them for public use when the records are 72 years old. For records not yet in the Archives, individuals can, for a fee, obtain a copy of their own record or their minor child’s record; to obtain anyone else’s record, they must have that person’s signed authorization. For a deceased person, a death certificate or similar evidence must be submitted, as well as proof that the applicant is either a direct blood-line descendant or an heir of the deceased. Title 13 provides for strict confidentiality and substantial penalties for the deliberate release of individuals’ information. The Privacy Act of 1974 does not apply to census records. The Bureau has long been concerned about the inadvertent release of information about individuals via published data that could be analyzed in a way that reveals a particular respondent’s data. The Bureau has procedures to prevent the possible identification of a particular household’s data, especially when it is cross-tabulated with other information. To prevent such an accidental release of economic information, in earlier days the Bureau would visually inspect the data before release and, when necessary, would collapse it into broad categories or delete information from certain cells in tables. Today, the Bureau uses computer programs to ensure that published information cannot be analyzed to reveal individuals’ information. Additional techniques, such as random rounding or exchanging household statistics among census blocks, are being studied to avoid potential problems. Nonetheless, in an age when many people feel anxious about the reach of marketers, poll-takers, and others who come armed with computer-based data about individuals, the concern over privacy and confidentiality will be hard to allay, and its effect on census-taking will not easily be mitigated.
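One of the disclosure-avoidance techniques mentioned here, random rounding, can be sketched concretely. The fragment below is an illustrative sketch only, assuming unbiased rounding to a base of 5; the base, area names, and cell counts are hypothetical, not the Bureau’s actual parameters:

```python
import random

def random_round(count, base=5):
    """Unbiased random rounding: replace a table cell count with a nearby
    multiple of `base`, rounding up with probability remainder/base so the
    expected value of the published figure equals the true count."""
    remainder = count % base
    if remainder == 0:
        return count
    lower = count - remainder
    return lower + base if random.random() < remainder / base else lower

# Hypothetical small-area table: counts this small could identify households.
table = {"block A": 3, "block B": 12, "block C": 40}
protected = {area: random_round(n) for area, n in table.items()}

# Every published value is a multiple of the rounding base ...
assert all(n % 5 == 0 for n in protected.values())
# ... and differs from the true count by less than the base.
assert all(abs(protected[a] - table[a]) < 5 for a in table)
```

Because the rounding is unbiased, aggregate totals remain approximately correct even though no individual small cell is published exactly.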
Some part of the declining response rate is a function of people’s anxiety about what will become of the data they provide to the Bureau. The cost of the census has steadily increased over the course of 200 years. The rate of increase has been dramatic in the twentieth century: the 1960 Census cost $523 million, and the 1990 Census cost $2.6 billion—an increase of 400 percent after adjusting for inflation. The rate of increase will continue to escalate with the 2000 Census, which is projected to cost $4 billion. Three major factors are involved in the soaring costs over the last 40 years: an increase in the number of housing units to be enumerated, an increasing use of expensive technology, and an increase in the number of staff needed to take a decennial census. Rapid population growth has been one of the hallmarks of the American national experience. The country’s population has grown over 10 percent on average per decade since the 1960 Census, and that fact has contributed to the ever-increasing price tag of the decennial census. In recent decades, however, an additional factor has been important: the rapid rise in the number of housing units. While the number of people in the country has been increasing, fewer people have been living in the average household. The number of households, therefore, has been rising at an even quicker pace than the population. In 1960, the Bureau counted people at nearly 60 million housing units; in 1990, it counted people at 102.3 million housing units, a 75-percent increase in the number of units to be either contacted by mail or visited, or both. For 2000, the Bureau estimates that 118.6 million housing units will need to be contacted. The Bureau has been a leader in the use of automation technology and electronic data processing methods for nearly a century.
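The growth figures cited in this passage follow from simple ratios. A quick check, using the dollar amounts as given (nominal, with no price index applied, since none is supplied here) and an assumed 1960 housing-unit base of 58.3 million, consistent with the text’s “nearly 60 million”:

```python
# Census cost growth, using the nominal figures cited in the text.
cost_1960 = 523e6   # 1960 Census cost, as cited
cost_1990 = 2.6e9   # 1990 Census cost, as cited
cost_growth = (cost_1990 / cost_1960 - 1) * 100
print(f"cost growth: {cost_growth:.0f}%")          # roughly 400 percent

# Housing-unit growth; the 1960 base is an assumption ("nearly 60 million").
units_1960 = 58.3e6
units_1990 = 102.3e6
unit_growth = (units_1990 / units_1960 - 1) * 100
print(f"housing-unit growth: {unit_growth:.0f}%")  # about 75 percent
```

This is only an arithmetic check of the ratios as cited; a proper inflation adjustment would require a price index that the passage does not provide.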
The Bureau needed to be inventive because, as the population grew and the census questions and the possible answers to them became more numerous, the decennial census required increasing numbers of clerks to tally and cross-tabulate the responses. To conduct the 1860 Census, the Bureau had 184 office staff and 4,417 field enumerators and produced 3,189 pages of census reports. In 1890, the census effort required 3,143 office workers and 46,804 field enumerators, and the published pages numbered 26,408. The census was also taking ever longer to complete as the amount of data collected increased. All the tallying of the 1880 Census was still done by hand, and the Bureau recognized then that the solution to what was becoming a data crisis was mechanization. Herman Hollerith, a former Bureau employee, developed an electrical tabulating machine that used punch cards for the 1890 Census. With 105 of Hollerith’s machines, the 1890 Census was completed in 3 years, as opposed to the 7 years it took to complete the 1880 Census. In 1946, the Bureau contracted with a private firm, the Eckert-Mauchly Computer Corporation, to design a machine for its statistical purposes that would use electrical impulses rather than mechanical punch-card holes to tabulate census responses. The machine, known as UNIVAC, had a processing unit containing 18,000 vacuum tubes and was delivered in 1951. Although it arrived too late for processing much of the 1950 Census data, it proved the concept and was a precursor to much greater use of computer processing in subsequent censuses. Since 1950, the Census Bureau has taken advantage of improvements and additional capabilities in electronic data processing developed during the previous decade. While the punch card automation systems of the late nineteenth and early twentieth centuries saved enormous clerical labor costs, they were also a practical necessity: the time needed for completing a census tabulation was approaching 7 years.
The benefits of technology in the second half of the twentieth century have produced some savings in labor costs, but the major benefits have been in faster census data processing and improved data analysis and accuracy. For example, the TIGER maps developed for the 1990 Census integrated maps, addresses, and other geographical information, thus solving most problems of inconsistency. The 2000 Census will rely on computer technology to a greater extent than ever before, but most improvements are not primarily aimed at cost reduction. For example:
• The census address list and the geographic file will be integrated to assist enumerators in finding housing units.
• By providing census data electronically directly on the Internet and through libraries, universities, and the Bureau’s Data Centers, the Bureau intends to make more data available faster to the public than ever before.
• The improved data recognition software to be introduced in 2000 will facilitate the processing of enumeration data.
• The use of laptop computers pre-loaded with census data for the enumerators to use during the post-enumeration interviews for ICM is planned to speed the process of reconciling differences between the census and ICM data.
While there is a cost for implementing these technologies, the benefit is wider, faster distribution, and therefore use, of public census data; greater accuracy and fuller coverage of the census; internal Bureau efficiencies; and, to a lesser degree, reduction of labor costs. Taking a decennial census is a very labor intensive and costly endeavor. Over the decades, as the population has grown, so have the different types and numbers of workers needed to complete and report a census on time. (Table II.3 provides information on the growth of the 21 completed decennial censuses by listing the population, enumerator staff, office and headquarters staff, and the actual cost of each census.)
Decennial census staffing can be generally divided into three different categories: field enumeration, local census and census field offices, and headquarters staff. The majority of the field staff are enumerators and supervisors whose primary jobs are verifying addresses prior to a census, doing nonresponse follow-up during the initial census-taking, and doing post-enumeration surveys. The majority of enumerators work from 6 to 10 weeks and are paid a few dollars over minimum wage. Enumerators do not receive regular federal employee benefits but have been eligible for unemployment compensation during past censuses. Partly because of the high turnover rate of enumerators during prior censuses, the Bureau plans to employ part-time workers who may have another job. The Bureau estimates it will need over 300,000 field staff working out of 520 local census offices and 402 census field offices for the 2000 Census. Local census and census field office staff generally reflect a variety of occupations, such as clerks, lower- to mid-level managers, data processors, and data scanning operators. Some of these local census office staff may be employed for up to 11 months. Temporary office staff are generally paid at rates similar to those of full-time federal employees. The Bureau has 4 processing offices and 12 regional offices whose primary mission every 10 years becomes taking the decennial census. The Bureau estimates it will need several thousand employees for its local census, census field, and processing offices in 2000. Headquarters staff consists primarily of managers and analytical staff, such as the Director of the Bureau, high-level managers, lawyers, computer programmers, statisticians, demographers, advertising experts, and writers. The types and amounts of pay received by headquarters staff are as divergent as the many different occupations needed to take and report a census.
Several thousand full-time employees from Bureau headquarters are expected to work on the 2000 Decennial Census. Having sufficient staff may allow the Bureau to meet its stated goals of producing an accurate and timely one-number census in 2000. But doing so also could be costly, since it requires the Bureau to undertake many labor-intensive procedures and special activities to ensure that all residents of the United States are counted and included in the 2000 Census.

Pursuant to a congressional request, GAO provided information on historical census issues and reviewed the Census Bureau's plans for the 2000 census. GAO did not evaluate the potential for success, or make recommendations, regarding the 2000 census. GAO noted that: (1) the framers of the Constitution established a requirement for the national government to undertake the census, and described, in general terms, how it should be accomplished; (2) while apportionment is the most widely known use of census data, the data are also used for congressional redistricting, managing federal agencies, and allocating federal funds, and are disseminated to state and local governments, academia, and the private sector as well; (3) for the 2000 Census, the Bureau is planning to rely on a Master Address File, which is to be developed, in part, from the Bureau's 1990 Census address list and the most recent Postal Service address list; (4) the Bureau plans to conduct a 100-percent canvass of all census blocks in early 1999 and will request the Postal Service to validate the city-style addresses prior to the delivery of 2000 Census questionnaires; (5) the Bureau is planning for enumerators to deliver the questionnaires and ask that they be mailed back; (6) the Bureau will focus its efforts to count the homeless in the places where many of them come for services, such as shelters and soup kitchens, as well as targeted outdoor locations; (7) the Bureau has decided to use a paid advertising campaign in 2000 to complement its continuing efforts with its organizational partners; (8) the Bureau is researching the use of questionnaires in additional languages; (9) in order to improve the mail response rate, the Bureau is planning to use a new, multiple mail contact strategy; (10) the Bureau plans to use two new sampling procedures in the 2000 census designed to: (a) reduce the time required for and expense of following up on the projected 40 million housing units that may not respond in 2000 to the questionnaires; and (b) adjust the population counts obtained from census questionnaires and nonresponse follow-up procedures to eliminate the endemic differential undercount; (11) the cost of the census has steadily increased over 200 years, and the rate of increase will continue to escalate with the 2000 Census; (12) the 2000 Census will rely on computer technology to a greater extent than ever before; and (13) several thousand full-time employees from Bureau headquarters are expected to work on the 2000 Decennial Census.
During Senate Finance Committee oversight hearings held in September 1997, several taxpayers testified about problems they had experienced when dealing with IRS. In response, the then Acting Commissioner of Internal Revenue announced that IRS would hold monthly PSDs in each of its 33 districts, beginning in November 1997. According to the Acting Commissioner, the objective of this initiative was to provide taxpayers with an opportunity to meet face to face with IRS staff to help resolve ongoing tax problems, such as misapplied tax payments, nonreceipt of refunds, and disputed tax bills, that they had been unable to resolve through regular IRS channels. Each IRS district office is responsible for planning and implementing the PSDs, under the overall coordination of the national Taxpayer Advocate. The Office of the Taxpayer Advocate (OTA) administers the Problem Resolution Program (PRP), which was established in 1976 and currently operates in all IRS district offices and service centers to assist taxpayers in resolving tax problems and to help those who are suffering financial hardship. Other responsibilities of OTA include conducting advocacy projects to identify and address systemic and procedural deficiencies that contribute to the problems experienced by taxpayers and representing taxpayers’ interests in the formulation of IRS policies and procedures. To identify how IRS implemented the PSD initiative, we met with the IRS national Taxpayer Advocate and his staff and obtained and reviewed national office guidance to the district offices concerning planning and implementing the PSDs. We also obtained and reviewed the PSD implementation plans from eight IRS districts and met with district office officials concerning these plans prior to the initial PSD held on November 15, 1997. We then attended the initial PSDs at these eight districts. We also attended the PSDs at two districts during December 1997 and at two districts during May 1998.
To determine taxpayers’ overall satisfaction with the initiative and the extent to which taxpayers’ problems were resolved, we obtained and reviewed available IRS statistics concerning the status of PSD cases in general and the specific results of closed PSD cases, as well as summary reports on the results of IRS’ monthly taxpayer surveys and a summary report on the results of IRS’ April and May 1998 taxpayer follow-up telephone survey. We also mailed a questionnaire to a stratified probability sample of the taxpayers who visited the 33 IRS sites on the initial PSD held on November 15, 1997. (App. I describes our sample, response rate, and procedures to assess sources of nonsampling error.) We then analyzed the responses to determine, among other things, the extent to which, at the time of our survey, (1) taxpayers’ problems had been resolved either during or since the November 15 PSD, (2) taxpayers considered the PSD to be a good idea, and (3) taxpayers were aware of the Problem Resolution Office located in each IRS district office. (See app. II for the results of our taxpayer survey.) To determine systemic problems identified, lessons learned, and subsequent actions taken by IRS, we met with the national Taxpayer Advocate and his staff, a district office official who led a study of the overall PSD initiative, and a regional office representative of IRS’ Taxpayer Equity Task Force. We obtained and reviewed pertinent documentation from these officials, including a compilation of lessons learned that was submitted to the National Office by the district offices, a copy of the report prepared at the conclusion of the PSD study, and minutes of meetings held by the Taxpayer Equity Task Force. We also discussed the objectives and status of ongoing task group studies of the four major areas that contributed to taxpayer problems identified during the PSD initiative with representatives from each of these task groups. (See app. III for definitions of these major problem areas.)
We did our work from November 1997 to August 1998 in accordance with generally accepted government auditing standards. The work was done at IRS’ National Office and at the following nine district offices: Upstate New York, Delaware/Maryland, Georgia, North Florida, Illinois, Kansas/Missouri, South Texas, Northern California, and Southern California. We selected the IRS offices that we visited on the basis of geographic dispersion and the availability and proximity of our staff to assist in the audit work. We requested comments on a draft of this report from the Commissioner of Internal Revenue. His written comments are discussed at the end of this letter and shown in appendix IV. IRS’ district offices are responsible for holding PSDs with guidance from OTA. IRS held its initial PSD in November 1997 and has held PSD events each month since then. Through the end of July 1998, the PSD initiative had enabled over 22,000 taxpayers to meet with IRS staff in an effort to resolve their ongoing tax problems. OTA is responsible for monitoring and coordinating the overall PSD initiative. OTA provided general guidance to the district offices concerning how PSD events were to be planned, advertised, and implemented, including the necessary staffing, security, and information systems. OTA also selected the specific dates (November 15, 1997, and May 16, 1998) on which national events were held, and district offices decided the dates and locations for additional local problem-solving events each month. Most districts chose to hold PSDs in various locations within the districts to provide taxpayers throughout the geographic area of the districts an opportunity to meet with IRS staff to discuss their problems without traveling to the main district offices. Many districts also elected to hold these events during the week rather than on a Saturday.
For example, in March 1998, 22 districts held a PSD on a weekday using extended business hours to accommodate taxpayers who could not visit during regular business hours. The other 11 districts held PSDs on Saturdays. According to IRS officials, the national office and district offices coordinated their efforts to ensure that PSD events were advertised both nationally and locally through newspapers, press releases, television, and radio. Local congressional offices and practitioner groups were also advised of dates and locations for upcoming problem-solving events. Taxpayers and practitioners were advised to call in advance to schedule appointments. Those who called in advance regarding tax problems were given appointments, and their tax accounts were researched to facilitate discussing and resolving their problems on the PSD. In addition, some taxpayers who called IRS regarding a PSD were able to get their problems resolved over the telephone without visiting an IRS office. However, due to the complexity of their tax problems, many taxpayers chose to visit an IRS office to discuss their problems face to face. “Walk-ins” who attended a PSD without an appointment were also generally provided an opportunity to meet with IRS staff to discuss their tax problems. During PSDs, the participating offices we visited were generally staffed with IRS employees from various operating groups, such as Customer Service, Examination, and Collection, who had a wide range of expertise in various tax matters and were available to assist taxpayers, thus making the initiative conducive to discussing and resolving their ongoing tax problems. This arrangement enabled IRS staff, who initially met with taxpayers to discuss their problems and who may not have had the required training or expertise necessary to resolve a particular type of problem, to call upon other IRS staff for assistance. 
For example, if a taxpayer wanted to discuss a technical issue and the initial IRS employee had not been trained in that area, the employee could ask a specialist to assist the taxpayer. According to the IRS official who led a study of the PSD initiative, this cross-functional approach was particularly helpful in dealing with many of the taxpayers who had multiple problems that had remained unresolved for long periods of time. The official said that IRS staff also considered this approach useful because it helped them to better understand taxpayers’ problems and to develop possible solutions. Each participating office we visited had also arranged for computer terminals, information systems, and technical support as well as office space and security to accommodate as many taxpayers as possible during these events. In some instances, space limitations made it necessary for these events to be held at locations other than an IRS office. IRS’ initial national PSD, which was held at each of the 33 district offices on Saturday, November 15, 1997, was attended by about 6,300 taxpayers and received generally favorable press coverage and reactions from taxpayers. IRS held a second nationwide PSD at each district office on Saturday, May 16, 1998, which was attended by about 2,500 taxpayers. IRS’ district offices have also held additional monthly PSD events between November 1997 and July 1998. More than 22,000 taxpayers had attended PSDs through the end of July 1998. According to OTA, the incremental costs for planning and holding these events, as well as for following up on the taxpayers’ cases that resulted from them, were about $11.5 million through the end of July 1998, primarily resulting from overtime salaries and related personnel compensation. Additional costs, such as rent, and moving and installation of computers, were incurred when PSD events were held at locations other than an IRS office. 
IRS estimated that the overall costs for holding PSDs during fiscal year 1998 likely would be about $15 million. These estimates do not include the costs to the taxpayers in both the money and time they spent in an effort to get their problems resolved through the PSD initiative, nor do the estimates reflect IRS’ and taxpayers’ costs from previous attempts to resolve their problems. For example, our survey of taxpayers who participated in the initial PSD indicated that all had made prior attempts to resolve their problems, including about 86 percent who had tried over the telephone, about 63 percent who had tried through the mail, and about 42 percent who had tried in person. As these percentages indicate, many taxpayers used more than one method in attempting to resolve their problems. Based on their previous attempts to resolve their problems, about 39 percent of taxpayers responded that they had participated in the PSD because they considered it to be their “last resort.” Surveys that we and IRS conducted of taxpayers who participated in a PSD have shown that taxpayers have had generally favorable reactions concerning the PSD initiative. In particular, the results of our taxpayer survey showed that the vast majority of taxpayers who participated in the first PSD felt that (1) it was easy to schedule an appointment for this event, (2) they were treated courteously by IRS employees, and (3) they appreciated the opportunity to meet face to face with IRS staff to discuss their problems. Overall, about 91 percent of taxpayers believed that the PSD was a good idea. This 91 percent included all taxpayers who felt that their problems had been fully resolved and about 86 percent of those who felt that their problems had not been fully resolved at the time of our survey. Since the beginning of the PSD initiative, IRS has conducted monthly surveys of taxpayers who attended a PSD. 
The monthly surveys have addressed issues such as promptness of service, convenience of office hours, employee courtesy, and IRS’ effort to resolve taxpayers’ problems. Overall, the results of these monthly surveys have been favorable. However, each month survey respondents indicated that IRS’ effort to resolve their problems could be improved. In addition to its monthly surveys, IRS conducted a follow-up telephone survey in April and May 1998 of taxpayers who had participated in the PSD initiative, either in person or by telephone. The results of the follow-up telephone survey led IRS to revise the format of its monthly surveys in an effort to obtain more detailed information, particularly about whether taxpayers’ problems had been resolved during the PSD and, if not, the reasons why. Although in general the surveys indicated PSDs have been well received by participating taxpayers, many taxpayers’ problems were not resolved through the initiative. Our survey indicated that about 25 percent of participating taxpayers initially felt that their problems had been fully resolved during the November 15th PSD. Some of these taxpayers—about 9 percent—responded that they believed their problems were not resolved after all. However, a greater number of taxpayers who initially felt that their problems had not been resolved during the November 15th PSD—about 18 percent—responded that they believed their problems had since been fully resolved. The net result was that an estimated 34 percent of these taxpayers felt that their problems had been fully resolved at the time of our survey. In addition, about 67 percent of the taxpayers responding to our survey said that they left the PSD knowing what further steps needed to be taken to get their problems resolved. 
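The net resolution figure above can be checked with simple arithmetic, reading each percentage (as the passage’s totals imply) as a share of all surveyed participants:

```python
initially_resolved = 25  # felt fully resolved during the Nov. 15 PSD
later_unresolved = 9     # of those, later believed not resolved after all
later_resolved = 18      # initially unresolved, later felt fully resolved

net_resolved = initially_resolved - later_unresolved + later_resolved
print(net_resolved)  # 34, matching the estimated 34 percent
```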
It is important to recognize that certain problems may take longer to resolve than others, and it is possible that some taxpayers who did not have their problems resolved at the time of our survey may have had them resolved since then. Further, some taxpayers may not consider their problems resolved unless IRS makes a change in their favor. According to IRS officials, PSD cases that resulted in no changes would usually be for one of two reasons: (1) IRS determined that there was no basis on which to make a change, such as instances in which taxpayers failed to furnish requested additional information or (2) the tax laws did not allow IRS the flexibility to make a change, such as instances in which the statute of limitations period for sending tax refunds to taxpayers had expired. IRS has conducted various studies related to the PSD initiative. These efforts include studies to identify the main causes, including systemic problems, of some major problem areas raised by taxpayers on PSDs, a Taxpayer Equity Task Force convened by the national Taxpayer Advocate, and an overall review of the PSD initiative to identify lessons learned and the need for continued problem-solving events. IRS’ field offices have also begun on their own to initiate some actions to better resolve taxpayers’ problems. IRS has analyzed the types of problems taxpayers have sought to resolve on PSDs since the beginning of the initiative and identified four main problem areas: penalties, audit reconsiderations, installment agreements, and offers in compromise. IRS currently has task groups reviewing each of these problem areas in an effort to identify possible actions that could reduce such problems in the future. With the exception of the review involving installment agreements, which began in May 1998 and is not scheduled to be completed until fiscal year 1999, each task group was expected to conclude its study with a report including recommendations by the end of September 1998. 
In addition to these reviews of the four main problem areas, the Taxpayer Advocate has convened a Taxpayer Equity Task Force to assist in identifying both administrative and legislative provisions that may have resulted in unintended consequences for taxpayers and thus may have had an impact on the resolution of their problems. The Taxpayer Equity Task Force has coordinated its efforts with the various task groups that are conducting the four reviews, to avoid duplicating efforts as well as to ensure that its findings are shared with and considered by the task groups. Each of these four main problem areas has been identified by IRS in the past and has been the focus of prior studies. In that regard, the Taxpayer Advocate’s Annual Reports to the Congress, for both fiscal years 1996 and 1997, mention each of these problem areas as a major source of PRP cases and the focus of taxpayer advocacy projects conducted by IRS’ field offices. For example, during fiscal year 1997, one IRS region studied taxpayer complaints concerning installment agreements and offered several recommendations to reduce taxpayer burden and improve taxpayer satisfaction pertaining to this area. At the time of our review, IRS had acted upon one of the 15 recommendations from this project. According to an OTA official, the findings from this project will be used as a starting point for the task group studying installment agreements. In addition to these reviews of specific PSD problem areas, IRS has also conducted an overall review of the PSD initiative to determine the lessons learned over the course of the initiative and the need for continued problem-solving events in the future. According to IRS officials involved in this review, among the lessons learned from PSDs were that many taxpayers who attended did so because they wanted to discuss their ongoing tax problems face to face with IRS staff in an effort to finally get them resolved. 
In addition, the officials said IRS staff appreciated the opportunity to deal directly with taxpayers concerning their problems. They also thought that the cross-functional, problem-solving approach used on PSDs provided the degree of technical expertise necessary to help many taxpayers with their problems. A report based on the lessons learned that were identified during this review concluded that IRS should focus attention on making the problem-solving process used during PSDs a part of its everyday operations. Recommendations in the report included (1) adopting a policy in each district office whereby taxpayers may make appointments in advance or simply walk in to get their problems resolved, (2) providing access to cross-functional technical resources on demand, (3) expanding and standardizing walk-in hours, (4) establishing a network in each district office to help employees with difficult cases, and (5) continuing monthly PSDs until a day-to-day problem-solving capability has been established. In response to this study, IRS’ Taxpayer Treatment and Service Improvements Executive Steering Committee indicated that monthly PSDs will continue through April 1999, at which time IRS will assess whether there is a continuing need for them. In addition, IRS is studying ways to incorporate lessons learned from the PSDs into its day-to-day operations to better assist taxpayers in resolving their tax problems, by establishing procedures for providing taxpayers with appointments and for providing the necessary technical support. We agree that assisting taxpayers in resolving their tax problems and making such assistance an integral part of IRS’ day-to-day operations could be beneficial to both taxpayers and IRS. In addition to the various studies undertaken, IRS’ field offices have begun taking actions on their own to better assist taxpayers in getting problems resolved. 
For example, according to IRS regional officials, each of the district offices in one region has established cross-functional teams that are available to assist other employees in resolving cases involving difficult tax problems. This approach has been recommended by the national office for nationwide implementation. In addition, according to IRS regional officials, some districts have begun to provide taxpayers with appointments to discuss their tax problems, some have provided for walk-in service during normal business hours, and some have established evening hours for conducting audits. Congress recently passed legislation that should also aid taxpayers in getting tax problems resolved. The Internal Revenue Service Restructuring and Reform Act of 1998 (P.L. 105-206) (1) strengthens the role of the national Taxpayer Advocate by expanding the authority to assist taxpayers; (2) replaces the current problem resolution program with local taxpayer advocates reporting directly to the national Taxpayer Advocate; (3) requires the national Taxpayer Advocate to report annually to Congress at least 20 of the most serious problems encountered by taxpayers and the actions taken by IRS concerning these problems; (4) requires IRS to publish the telephone numbers for each local office of the Taxpayer Advocate; and (5) requires IRS to publish a taxpayer’s right to contact the local Taxpayer Advocate on the statutory notice of deficiency, including the location and telephone number of the appropriate office. These changes, if effectively implemented, should be helpful to taxpayers. Based on our survey, only about 31 percent of taxpayers participating in the November 15th PSD reported that they had prior contact with IRS’ Problem Resolution Office, which, before the PSD initiative, was the main avenue for taxpayers to get assistance in resolving ongoing tax problems. About 63 percent of taxpayers reported that they were unaware that this particular office existed.
IRS’ PSD initiative has proven to be beneficial to both taxpayers and IRS from several standpoints. For instance, it has given some taxpayers an opportunity to discuss their ongoing tax problems face to face with IRS employees, and it has resulted in some taxpayers reporting that their problems were fully resolved. Although most of the surveyed taxpayers’ problems were not immediately resolved through the initiative, a majority of them reported that they were informed of the steps they needed to take to get their problems resolved. Most of the surveyed taxpayers also reported that they were treated courteously by the IRS employees they dealt with during the initiative. For their part, IRS officials said that IRS employees welcomed the opportunity to meet directly with taxpayers in an effort to assist them and felt that the cross-functional approach used during the initiative was very beneficial for resolving taxpayers’ problems. These benefits, however, came at a cost to both participating taxpayers and IRS. The major problem areas that IRS identified as leading to PSD cases were similar to problems that IRS had previously identified and studied as part of the Problem Resolution Program. IRS’ ongoing studies to identify possible systemic deficiencies causing these problems could result in recommended actions to reduce or eliminate the incidence of such problems in the future. In addition, the lessons learned from the PSD initiative in general should help IRS carry out the Taxpayer Advocate’s responsibilities mandated by the Internal Revenue Service Restructuring and Reform Act of 1998 and improve its day-to-day capability to resolve taxpayers’ ongoing tax problems. Improving this capability could lead to less dependence on monthly PSDs and the added costs associated with these events to both taxpayers and IRS. We requested comments on a draft of this report from the Commissioner of Internal Revenue or his designee. 
In a September 15, 1998, meeting, the national Taxpayer Advocate and members of his staff provided oral comments in which they agreed with the report’s findings. The Commissioner of Internal Revenue provided us with written comments on October 1, 1998, in which he expressed IRS’ commitment to improve the PSD program to meet the needs of taxpayers. He also said that IRS needs to work to resolve taxpayer problems at the original point of contact with IRS. (See app. IV.) As agreed with your office, unless you announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member, Senate Committee on Finance; the Chairman and Ranking Minority Member, House Committee on Ways and Means; various other congressional committees; the Director of the Office of Management and Budget; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. The major contributors to this report are listed in appendix V. If you or your staff have any questions concerning this report, please contact me or Joseph Jozefczyk, Assistant Director, at (202) 512-9110. To obtain participating taxpayers’ views on IRS’ Problem Solving Days (PSD), we mailed questionnaires to a sample of PSD participants in December 1997. The results presented in this report are based on 201 responses to our questionnaire and are presented in detail in appendix II. We drew our sample to represent the population of all taxpayers that visited an IRS office during the November 15, 1997, PSD and were recorded as participants with a full address in an IRS database. 
To obtain a probability sample of participants, we first drew 600 taxpayer names from a list of all 8,099 taxpayers that IRS had identified in its Problem Resolution Office Management Information System database by December 3, 1997, as having had any type of contact with IRS concerning the November 15, 1997, PSD. We excluded 427 taxpayers with incomplete addresses before forming this list. The sample was randomly drawn from three strata that we defined by the date and closure status of the case. Of the 365 taxpayers that we were able to contact, we found that 243 had actually visited a November 15, 1997, PSD site and thus were eligible for our study. The remainder (122) had not visited IRS during the PSD. Completed questionnaires were obtained from 201 respondents. All results presented in this report have been weighted to estimate the views and experiences of the on-site participants after adjusting for nonresponse rates within the three sample strata. Because we surveyed a sample of on-site participants, our results are estimates of all participants’ characteristics and thus are subject to sampling errors that are associated with samples of this size and type. Our confidence in the precision of the results from this sample is expressed in 95-percent confidence intervals. The 95-percent confidence intervals are expected to include the actual results for 95 percent of the samples of this type. We calculated confidence intervals for our study results using methods that are appropriate for a stratified, probability sample. For the percentages presented in this report, we are 95 percent confident that the results we would have obtained if we had studied the entire study population are within ±8 or fewer percentage points of our results. For example, our estimate that about 91 percent of the participants feel that the PSD was a good idea is surrounded by a 95-percent confidence interval of ±4 percentage points and thus stretches from 87 to 95 percent. 
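The stratified weighting and confidence-interval arithmetic described above can be illustrated with a simplified sketch. The per-stratum counts below are invented for illustration (only the totals of 8,099 population members and 201 respondents come from this report), and the interval uses a simple random-sample approximation rather than the full stratified variance formula used in the actual study.

```python
import math

# Hypothetical strata: (population N_h, respondents n_h, "good idea" answers).
# The per-stratum splits are assumptions; only the totals match the report.
strata = [(4000, 90, 83), (2500, 70, 63), (1599, 41, 37)]

N = sum(N_h for N_h, _, _ in strata)
# Weighted proportion: each stratum's sample rate weighted by its share of
# the population, which is how stratified nonresponse weighting adjusts results.
p = sum((N_h / N) * (yes / n_h) for N_h, n_h, yes in strata)

n = sum(n_h for _, n_h, _ in strata)
# 95-percent half-width under a simple random-sample approximation.
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"weighted p = {p:.2f}, 95% CI = ({p - half_width:.2f}, {p + half_width:.2f})")
# → weighted p = 0.91, 95% CI = (0.87, 0.95)
```

With roughly 200 respondents and a proportion near 0.91, the half-width works out to about 4 percentage points, matching the ±4-point interval reported above.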
In addition to these sampling errors, the practical difficulties in conducting surveys of this type may introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, or the respondents’ answers may differ from those of people who do not respond. We took several steps in an attempt to reduce such errors. The questionnaire was pretested with eligible taxpayers. All initial sample nonrespondents were sent a follow-up questionnaire mailing. All data were double keyed during entry. Computer analyses were performed to identify inconsistencies or other indications of errors, and all analyses were checked by a second independent analyst. The low response rate is of special concern for this study. Of the initial 600 sampled taxpayers, 54 percent either returned a complete, usable questionnaire (201) or responded that they were not eligible for the survey because they had not visited an IRS site during the November 15, 1997, PSD (122 sample selections). The difference in this response rate for the three sample strata was small (7 percentage points) and not statistically significant. To help evaluate the low response rate, we conducted a small-scale telephone follow-up survey of nonrespondents and did not find large or statistically significant differences between respondents and nonrespondents. For the telephone follow-up survey, we contacted a subsample of the sample that had not responded to the initial or follow-up mailings. We obtained telephone numbers from IRS or through local directory assistance services. Sixty-one of the 75 selected cases were reached after a minimum of 15 telephone calls had been attempted during morning, afternoon, and evening hours on both weekends and weekdays. 
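Comparisons like those described for this nonrespondent follow-up are commonly made with a two-proportion z-test. The sketch below is illustrative rather than a reconstruction of the actual analysis; it uses the eligibility rates reported for the follow-up (40 of the 61 telephone contacts eligible, about 66 percent) and the main survey (243 of 365 contacts, about 67 percent).

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Eligibility: 40 of 61 telephone follow-ups vs. 243 of 365 main-survey contacts.
z = two_prop_z(40, 61, 243, 365)
print(f"z = {z:.2f}")  # well inside ±1.96, so not significant at the 95% level
```

A |z| value far below the 1.96 critical value is what supports the report's conclusion that the small difference in eligibility rates is not statistically significant.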
The small difference between the eligibility rate of 66 percent for these 61 follow-up taxpayers (21 were not eligible for our sample because they had not visited an IRS office during the PSD) and the eligibility rate of 67 percent for the respondents to the main survey is not statistically significant and does not indicate that the procedures followed in the main survey are overestimating the participation in the PSD. Of the remaining 40 follow-up contacts, 6 refused to participate, and 34 answered the telephone survey follow-up questions, which were compared with the results of the mail survey. The answers provided by these 34 follow-up respondents were not statistically significantly different from those provided by the mail survey respondents for the major questions that were compared. About 88 percent of the telephone follow-up respondents and 91 percent of the mail survey respondents reported that they felt the PSD was a good idea. About 18 percent of the telephone follow-up respondents and 24 percent of the mail survey respondents reported that their problems had been fully resolved on the PSD. The low response rate for the entire survey means that the findings could differ from those that would have been obtained from the full sample. The results from this small-scale follow-up of nonrespondents provide some evidence that the differences are not likely to be large. Following is a summary of responses to the survey we sent to a random sample of participants soon after the initial problem-solving day on November 15, 1997. The results in this appendix have been weighted to account for the initial selection rates and subsequent response rates in each of the three sample strata. For questions for which the respondent was to “check all that apply,” the percent of total survey respondents checking each response is provided and generally exceeds 100 percent in total. 
For questions for which the respondent was to “check one,” responses are expressed as a percent of the total responses to that question and should equal 100 percent (exceptions may occur through rounding).
1. How did you learn that IRS was planning to hold a problem-solving day on November 15, 1997? (Check all that apply.)
2. What was the primary reason you decided to participate in the IRS problem-solving day? (Check one.)
3. Did you make an appointment with IRS for problem-solving day or did you walk in without an appointment? (Check one.)
4. If you made an appointment, was it easy or difficult to schedule it? (Check one.)
5. What type of tax returns were you discussing with IRS on problem-solving day? (Check all that apply.)
6. What was the nature of the ongoing problems you tried to resolve with IRS on problem-solving day? (Check all that apply.)
7. Prior to participating in problem-solving day, what methods had you used to resolve your problems with IRS? (Check all that apply.)
8. Was your problem with IRS resolved on problem-solving day? (Check one.)
9. Did you leave problem-solving day knowing what further steps needed to be taken to get your problem solved? (Check one.)
10. Do you now have a contact person at IRS to follow up with concerning your problem? (Check one.)
11. If your problem was not resolved or was partially resolved on problem-solving day, has it been fully resolved since then? (Check one.)
12. If you left problem-solving day thinking that your problem was resolved or knowing what steps needed to be taken, has IRS said or done anything since then that leads you to believe that your problem may not be resolved after all? (Check one.)
13. Were you treated courteously by IRS employees during and since problem-solving day? (Check one.)
Note 1: Six taxpayers did not respond to this question concerning their treatment by IRS during the problem-solving day. 
Note 2: Fifteen taxpayers did not respond to this question concerning their treatment by IRS since the problem-solving day.
14. Based on your experience, do you think that IRS’ problem-solving day was a good idea? (Check one.)
15. Each IRS district office has an office called the “Problem Resolution Office,” which was established to help taxpayers resolve their tax problems. This office is headed by a “Taxpayer Advocate.” Have you ever contacted a Problem Resolution Office in order to resolve tax problems? (Check one.)
IRS has analyzed the types of problems that taxpayers have sought to resolve on problem-solving days since the beginning of the initiative and identified four main problem areas: (1) penalties, (2) audit reconsiderations, (3) installment agreements, and (4) offers in compromise. Following are definitions for each area. The Internal Revenue Code contains various provisions authorizing IRS to impose financial penalties on a taxpayer for violation of provisions in the code. For example, section 6651 of the code authorizes IRS to assess a penalty if a taxpayer fails to file a required tax return or fails to pay a tax liability on time. IRS assesses the penalty in addition to the taxes and interest owed by the taxpayer. Treasury Regulation 301.6404-1 authorizes IRS to reconsider an audit assessment. For example, if a taxpayer disputes an assessment and provides additional information to support his or her position, IRS may reconsider and abate the assessment. Section 6159 of the Internal Revenue Code authorizes IRS to allow taxpayers to pay their taxes in installments, with interest, in order to facilitate payment of the tax liability. Section 7122 of the Internal Revenue Code authorizes IRS to compromise tax debts. Offers in compromise are taxpayer proposals to settle tax debts for less than the amount owed. Susan Malone, Senior Evaluator 
Pursuant to a congressional request, GAO reviewed the effectiveness of Internal Revenue Service's (IRS) problem-solving days (PSD), focusing on: (1) how the PSDs were organized and advertised and what IRS did to make them conducive to discussing and resolving taxpayers' ongoing tax problems; (2) taxpayers' overall satisfaction with the initiative and the extent to which taxpayers' problems were resolved; and (3) whether IRS identified any systemic problems or lessons learned and took subsequent actions on them. 
GAO noted that: (1) IRS began monthly PSDs in November 1997 to assist taxpayers in getting their tax problems resolved; (2) to advertise the initiative, IRS used various means, including national and local newspapers, television, and radio; (3) taxpayers and practitioners were advised to call in advance to schedule appointments to discuss their tax problems with IRS staff; (4) some taxpayers who called in advance were able to get their problems resolved over the telephone; (5) for taxpayers who scheduled an appointment in advance, IRS was generally able to have information about the taxpayers' case available at the time of the appointment; (6) taxpayers who walked in without an appointment were generally afforded an opportunity to meet with IRS staff to discuss their tax problems; (7) during PSDs each participating IRS office was staffed with employees from various functional groups to provide a range of expertise and thus make the initiative conducive to discussing and resolving taxpayers' tax problems; (8) IRS' initial national PSD was held at each of its 33 district offices on November 15, 1997, and about 6,300 taxpayers attended; (9) a subsequent national PSD, held on May 16, 1998, was attended by about 2,500 taxpayers; (10) between November 1997 and July 1998, these events attracted more than 22,000 taxpayers; (11) IRS estimated that it incurred incremental costs of about $11.5 million through the end of July 1998; (12) GAO's survey of taxpayers attending the first PSD indicated that about 91 percent believed it was a good idea, even though only about 34 percent of taxpayers reported that their problems had been fully resolved by the time they responded to the questionnaire; (13) IRS surveys of taxpayers attending PSDs each month and a follow-up telephone survey conducted by IRS in April and May 1998 indicated a generally positive response to the initiative, although some taxpayers indicated that IRS' effort to resolve problems could be improved; (14) IRS has 
identified four types of problems that taxpayers have sought to resolve on PSDs and has assembled task groups to review each: penalties, audit reconsiderations, installment agreements, and offers in compromise; (15) according to IRS officials who have studied the PSD initiative, an important lesson learned was that taxpayers with ongoing tax problems wanted to discuss them face to face with IRS staff to finally get their problems resolved; and (16) IRS is also studying ways to incorporate problem-solving lessons learned from the PSD initiative into its day-to-day operations.
Driving is a complex task that depends on visual, cognitive, and physical functions that enable a person to see traffic and road conditions; recognize what is seen, process the information, and decide how to act; and physically control the vehicle. Although the aging process affects people at different rates and in different ways, functional declines associated with aging can affect driving ability. For example, vision declines may reduce the ability to see other vehicles, traffic signals, signs, lane markings, and pedestrians; cognitive declines may reduce the ability to recognize traffic conditions, remember destinations, and make appropriate decisions in operating the vehicle; and physical declines may reduce the ability to perform movements required to control the vehicle. A particular concern is older drivers with dementia, often as a result of illnesses such as Alzheimer’s disease. Dementia impairs cognitive and sensory functions, causing disorientation and potentially leading to dangerous driving practices. Age is the most significant risk factor for developing dementia—approximately 12 percent of those aged 65 to 84 are likely to develop the condition while over 47 percent of those aged 85 and older are likely to be afflicted. For drivers with the condition, the risk of being involved in a crash is two to eight times greater than for those with no cognitive impairment. However, some drivers with dementia, particularly in the early stages, may still be capable of driving safely. Older drivers experience fewer fatal crashes per licensed driver compared with drivers in younger age groups; however, on the basis of miles driven, older drivers have a comparatively higher involvement in fatal crashes. Over the past decade, the rate of older driver involvement in fatal crashes, measured on the basis of licensed drivers, has decreased and, overall, older drivers have a lower rate of fatal crashes than drivers in younger age groups (see fig. 1). 
Older drivers’ fatal crash rate per licensed driver is lower than corresponding rates for drivers in younger age groups, in part, because older drivers drive fewer miles per year than younger drivers, may hold licenses even though they no longer drive, and may avoid driving during times and under conditions when crashes tend to occur, such as during rush hour or at night. However, on the basis of miles traveled, older drivers who are involved in a crash are more likely to suffer fatal injuries than are drivers in younger age groups who are involved in crashes. As shown in figure 2, drivers aged 65 to 74 are more likely to be involved in a fatal crash than all but the youngest drivers (aged 16 to 24), and drivers aged 75 and older are more likely than drivers in all other age groups to be involved in a fatal crash. Older drivers will be increasingly exposed to crash risks because older adults are the fastest-growing segment of the U.S. population, and future generations of older drivers are expected to drive more miles per year and at older ages compared with the current older-driver cohort. The U.S. Census Bureau projects that the population of adults aged 65 and older will more than double, from 35.1 million people (12.4 percent of total population) in 2000 to 86.7 million people (20.7 percent of total population) in 2050 (see fig. 3). Intersections pose a particular safety problem for older drivers. Navigating through intersections requires the ability to make rapid decisions, react quickly, and accurately judge speed and distance. As these abilities can diminish through aging, older drivers have more difficulties at intersections and are more likely to be involved in a fatal crash at these locations. Research shows that 37 percent of traffic-related fatalities involving drivers aged 65 and older occur at intersections compared with 18 percent for drivers aged 26 to 64. 
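The contrast drawn above between crash rates per licensed driver and crash rates per mile traveled is a denominator effect, which the following sketch illustrates. All of the counts below are invented for illustration; none come from this report.

```python
# Hypothetical illustration: older drivers can have a LOWER fatal-crash rate
# per licensed driver yet a HIGHER rate per mile traveled, simply because
# they drive far fewer miles per year. All numbers below are invented.
groups = {
    # group: (fatal crashes, licensed drivers, annual vehicle-miles traveled)
    "aged 25-64": (300, 1_000_000, 14_000_000_000),
    "aged 75+":   (120,   500_000,  2_000_000_000),
}

for name, (crashes, drivers, miles) in groups.items():
    per_100k_drivers = crashes / drivers * 100_000
    per_100m_miles = crashes / miles * 100_000_000
    print(f"{name}: {per_100k_drivers:.0f} fatal crashes per 100,000 drivers, "
          f"{per_100m_miles:.1f} per 100 million miles")
# The older group ranks lower on the per-driver measure (24 vs. 30) but
# higher on the per-mile measure (6.0 vs. 2.1), mirroring the pattern
# shown in figures 1 and 2.
```

Because the older group in this sketch drives about one-seventh as many miles, dividing the same crash counts by miles rather than by drivers reverses the ranking of the two groups.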
Figure 4 illustrates how fatalities at intersections represent an increasing proportion of all traffic fatalities as drivers age. DOT—through FHWA and NHTSA—has a role in promoting older driver safety, although states are directly responsible for operating their roadways and establishing driver licensing requirements. FHWA focuses on roadway engineering and has established guidelines for designers to use in developing engineering enhancements to roadways to accommodate the declining functional capabilities of older drivers. NHTSA focuses on reducing traffic-related injuries and fatalities among older people by promoting, in conjunction with nongovernmental organizations, research, education, and programs aimed at identifying older drivers with functional limitations that impair driving performance. NHTSA has developed several guides, brochures, and booklets for use by the medical community, law enforcement officials, older drivers’ family members, and older drivers themselves that provide guidance on what actions can be taken to improve older drivers’ capabilities or to compensate for lost capabilities. Additionally, NIA supports research related to older driver safety through administering grants designed to examine, among other issues, how impairments in sensory and cognitive functions impact driving ability. These federal initiatives support state efforts to make roads safer for older drivers and establish assessment practices to evaluate the fitness of older drivers. The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), signed into law in August 2005, establishes a framework for federal investment in transportation and has specific provisions for older driver safety. SAFETEA-LU authorizes $193.1 billion in Federal-Aid Highway Program funds to be distributed through FHWA for states to implement road preservation, improvement, and construction projects, some of which may include improvements for older drivers. 
SAFETEA-LU also directs DOT to carry out a program to improve traffic signs and pavement markings to accommodate older drivers. To fulfill these requirements, FHWA has updated or plans to update its guidebooks on highway design for older drivers, plans to conduct workshops on designing roads for older drivers that will be available to state practitioners, and has added a senior mobility series to its bimonthly magazine that highlights advances and innovations in highway/traffic research and technology. Additionally, SAFETEA-LU authorizes NHTSA to spend $1.7 million per year (during fiscal years 2006 through 2009) in establishing a comprehensive research and demonstration program to improve traffic safety for older drivers. FHWA has recommended practices for designing and operating roadways to make them safer for older drivers and administers SAFETEA-LU funds that states—which own and operate most roadways under state or local government authority—may use for road maintenance or construction projects to improve roads for older drivers. To varying degrees, states are implementing FHWA’s older driver practices and developing plans and programs that consider older drivers’ needs. However, responses to our survey indicated that other safety issues—such as railway and highway intersections and roadside hazard elimination—are of greater concern to states, and states generally place a higher priority on projects that address these issues rather than projects targeted only towards older drivers. FHWA has issued guidelines and recommendations to states on practices that are intended to make roads safer for older drivers, such as the Highway Design Handbook for Older Drivers and Pedestrians. The practices emphasize cost-effective construction and maintenance measures involving both the physical layout of the roadway and use of traffic control devices such as signs, pavement markings, and traffic signals. 
The practices are specifically designed to improve conditions at sites—intersections, interchanges, curved roads, construction work zones, and railroad crossings—known to be unsafe for older drivers. While these practices are designed to address older drivers’ needs, implementation of these practices can make roads safer for all drivers. Intersections—Recognizing that intersections are particularly problematic for older drivers, FHWA’s top priority in its Highway Design Handbook for Older Drivers and Pedestrians is intersection improvements. Practices to improve older drivers’ ability to navigate intersections include using bigger signs with larger lettering to identify street names, consistent placement of lane use signs and arrow pavement markings, aligning lanes to improve drivers’ ability to see oncoming traffic, and using reflective markers on medians and island curbs at intersections to make them easier to see at night. See figures 5 through 8 for these and additional intersection improvement practices. Interchanges—Practices to aid older drivers at interchanges include using signs and pavement markings to better identify right and wrong directions of travel and configuring on-ramps to provide a longer distance for accelerating and merging into traffic. See figure 9 for these and additional interchange improvement practices. Road curves—Practices to assist older drivers on curves include using signs and reflective markers—especially on tight curves—to clearly delineate the path of the road. See figure 10 for these and additional curve improvement practices. 
Construction work zones—Practices to improve older driver safety in construction work zones include increasing the length of time messages are visible on changeable message signs; providing easily discernable barriers between opposing traffic lanes in crossovers; using properly sized devices (cones and drums) to delineate temporary lanes; and installing temporary reflective pavement markers to make lanes easier to navigate at night. Railroad crossings—Practices to help older drivers are aimed at making the railroad crossing more conspicuous by using reflective materials on the front and back of railroad crossing signs and delineating the approach to the crossing with reflective posts. See figure 11 for these and additional railroad crossing improvement practices. FHWA is continuing to research and develop practices to make roads safer for older drivers. FHWA also promotes the implementation of these practices by sponsoring studies and demonstration projects, updating its Highway Design Handbook for Older Drivers and Pedestrians, and training state and local transportation officials. For example, FHWA is supporting a research study—to be conducted over the next 3 to 5 years— on the effectiveness of selected low-cost road improvements in reducing the number and severity of crashes for all drivers. With the findings of this and other studies, FHWA plans to update its guidelines to refine existing or recommend new practices in improving older driver safety. In addition, FHWA is considering changes to its MUTCD—to be published in 2009—that will enhance older driver safety by updating standards related to sign legibility and traffic signal visibility. Under SAFETEA-LU, FHWA provides funding that states may use to implement highway maintenance or construction projects that can enhance older driver safety. 
However, because projects to enhance older driver safety can be developed under several different SAFETEA-LU programs, it is difficult to determine the amount of federal funding dedicated to highway improvements for older drivers. While older driver safety is generally not the primary focus of projects funded through SAFETEA-LU programs, improvements made to roads may incorporate elements of FHWA’s older driver safety practices. For example, under SAFETEA-LU’s Highway Safety Improvement Program (HSIP), states submit a Strategic Highway Safety Plan (SHSP) after reviewing crash and other data and determining what areas need to be emphasized when making safety improvements. If older driver safety is found to be an area of emphasis, a state may develop projects to be funded under the HSIP that provide, for example, improved traffic signs, pavement markings, and road layouts consistent with practices listed in FHWA’s Highway Design Handbook for Older Drivers and Pedestrians. State DOTs have, to varying degrees, incorporated FHWA’s older driver safety practices into their design standards; implemented the practices in construction, operations, and maintenance activities; trained technical staff in applying the practices; and coordinated with local agencies to promote the use of the practices. The states’ responses to our survey indicate the range in states’ efforts. Design standards. Nearly half of the states have incorporated about half or more of FHWA’s practices into their design standards, as follows:
24 state DOTs reported including about half, most, almost all, or all of the recommendations.
20 reported including some of the recommendations.
6 reported including few or none of the recommendations.
Construction, operations, and maintenance activities. 
Even though most state DOTs have not incorporated all FHWA practices into their design standards, the majority of states have implemented some FHWA practices in construction, operations, and maintenance activities, particularly in the areas of intersections and work zones (see table 1). Training. Nearly one-fourth of state DOTs have provided training on FHWA practices to half or more of their technical staff, as follows:
12 state DOTs reported having trained about half, most, almost all, or all of their technical staff.
32 have trained some of their technical staff.
7 have trained few or none of their technical staff.
Coordination with local agencies. Because state transportation agencies do not own local roads—which may account for the majority of roads in a state—coordination with local governments is important in promoting older driver safety in the design, operation, and maintenance of local roads. The states reported using a variety of methods in their work with local governments to improve older driver safety (see table 2). States also varied in their efforts to consult stakeholders on older driver issues in developing highway safety plans (defined in the state SHSP) and lists of projects in their Statewide Transportation Improvement Programs (STIP). According to our survey, 27 of the 51 state DOTs have established older driver safety as a component of their SHSPs, and our survey indicated that, in developing their SHSPs, these states were more likely to consult with stakeholders concerned about older driver safety than were states that did not include an older driver component in their plans. Obtaining input from stakeholders concerned about older driver safety—from both governmental and nongovernmental organizations—is important because they can contribute additional information, and can sometimes provide resources, to address older driver safety issues. 
For example, the Michigan State Safety Commission identified elderly mobility as an emerging issue and, in February 1998, funded the Southeast Michigan Council of Governments (SEMCOG) to convene a statewide, interdisciplinary Elderly Mobility and Safety Task Force. SEMCOG coordinated with various stakeholder groups—Michigan DOT, Michigan Department of State, Michigan Office of Highway Safety Planning, Michigan Department of Community Health, Office of Services to the Aging, University of Michigan Transportation Research Institute, agencies on aging, and AAA Michigan, among others—in developing a statewide plan to address older driver safety and mobility issues. This plan—which outlines recommendations in the areas of traffic engineering, alternative transportation, housing and land use, health and medicine, licensing, and education and awareness—forms the basis for the strategy defined in Michigan’s SHSP to address older drivers’ mobility and safety.

Even though 27 state DOTs have reported establishing older driver safety as a component of their SHSPs, only 4 state DOTs reported including older driver safety improvement projects in their fiscal year 2007 STIPs. However, state STIPs may contain projects that will benefit older drivers. For example, 49 state DOTs reported including funding for intersection improvements in their STIPs. Because drivers become increasingly likely to be involved in an intersection crash as they age, older drivers, in particular, should benefit from states’ investments in intersection safety projects, which generally provide improved signage, traffic signals, turning lanes, and other features consistent with FHWA’s older driver safety practices. Although older driver safety could become a more pressing need in the future as the population of older drivers increases, states are applying their resources to areas that pose greater safety concerns.
In response to a question in our survey about the extent to which resources—defined to include staff hours and funds spent on research, professional services, and construction contracts—were invested in different types of safety projects, many state DOTs indicated that they apply resources to a great or very great extent to safety projects other than those concerning older driver safety (see table 3). Survey responses indicated that resource constraints are a significant factor limiting states’ implementation of FHWA’s older driver safety practices and development of strategic plans and programs that consider older driver concerns.

More than half of state licensing agencies have implemented assessment practices to support licensing requirements for older drivers that are more stringent than requirements for younger drivers. These requirements—established under state licensing procedures—generally involve more frequent renewals (16 states), mandatory vision screening (10 states), in-person renewals (5 states), and mandatory road tests (2 states). However, no state’s assessment of driver fitness is comprehensive, because cognitive and physical functions are generally not evaluated to the same extent as visual function. Furthermore, the effectiveness of assessment practices used by states is largely unknown. Recognizing the need for better assessment tools, NHTSA is developing more comprehensive practices to assess driver fitness and intends to provide technical assistance to states in implementing these practices.

Over half of the states have procedures that establish licensing requirements for older drivers that are more stringent than requirements for younger drivers. These requirements generally include more frequent license renewal, mandatory vision screening, in-person renewals, and mandatory road tests.
In addition, states may consider input from medical advisory boards, physician reports, and third-party referrals in assessing driver fitness and making licensing decisions. (See fig. 12 and app. II for additional details.)

Accelerated renewal—Sixteen states have accelerated renewal cycles for older drivers that require drivers older than a specific age to renew their licenses more frequently. Colorado, for example, normally requires drivers to renew their licenses every 10 years, but drivers aged 61 and older must renew their licenses every 5 years.

Vision screening—Ten states require older drivers to undergo vision assessments, conducted by either the Department of Motor Vehicles or their doctor, as part of the license renewal process. These assessments generally test for visual acuity or sharpness of vision. The average age at which mandatory vision screening begins is 62, with some states beginning this screening as early as age 40 (Maine and Maryland) and other states beginning as late as age 80 (Florida and Virginia).

In-person renewal—Five states—Alaska, Arizona, California, Colorado, and Louisiana—that otherwise allow license renewal by mail require older drivers to renew their licenses in person. Arizona, California, and Louisiana do not permit mail renewal for drivers aged 70 and older. Alaska does not allow mail renewal for drivers aged 69 and older, while Colorado requires in-person renewal for those over age 61.

Road test—Two states, New Hampshire and Illinois, require older drivers to pass road examinations upon reaching age 75 and at all subsequent renewals.

In addition, states have adopted other practices to assist licensing agencies in assessing driver fitness and identifying older drivers whose driving fitness may need to be reevaluated.
Medical Advisory Boards—Thirty-five states and the District of Columbia rely on Medical Advisory Boards (MAB) to assist licensing agencies in evaluating people with medical conditions or functional limitations that may affect their ability to drive. A MAB may be organizationally placed within a state’s transportation, public safety, or motor vehicle department. Board members—practicing physicians or health care professionals—are typically nominated or appointed by the state medical association, motor vehicle administrator, or governor’s office. Some MABs review individual cases, typically compiled by case workers who collect and review medical and other evidence, such as accident reports, used to make a determination about a person’s fitness to drive. The volume of cases reviewed by MABs varies greatly across states. For example, seven state MABs review more than 1,000 cases annually, while another seven MABs review fewer than 10 cases annually.

Physician reports—While all states accept reports of potentially unsafe drivers from physicians, nine states require physicians to report physical conditions that might impair driving skills. For example, California specifically requires doctors to report a diagnosis of Alzheimer’s disease or related disorders, including dementia, while Delaware, New Jersey, and Nevada require physicians to report cases of epilepsy and those involving a person’s loss of consciousness. However, not all states assure physicians that such reports will be kept confidential, so physicians may choose not to report patients if they fear retribution in the form of a lawsuit or loss of the patient’s business.

Third-party referrals—In addition to reports from physicians, all states accept third-party referrals of concerns about drivers of any age. Upon receipt of the referral, the licensing agency may choose to contact the driver in question to assess the person’s fitness to drive.
A recent survey of state licensing agencies found that nearly three-fourths of all referrals came from law enforcement officials (37 percent) and physicians or other medical professionals (35 percent). About 13 percent of all referrals came from drivers’ families or friends, and 15 percent came from crash and violation record checks, courts, self-reports, and other sources. However, the assessment practices that state licensing agencies use to evaluate driver fitness are not comprehensive. For example, our review of state assessment practices indicates that all states screen for vision, but we did not find a state with screening tools to evaluate physical and cognitive functions. Furthermore, the validity of assessment practices used by states is largely unknown. While research indicates that in-person license renewal is associated with lower crash rates—particularly for those aged 85 and older—other assessment practices, such as vision screening, road tests, and more frequent license renewal cycles, are not always associated with lower older driver fatality rates. According to NHTSA, there is insufficient evidence on the validity and reliability of any driving assessment or screening tool. Thus, states may have difficulty discerning which tools to implement. NHTSA, supported by the NIA and by partner nongovernmental organizations, has promoted research and development of mechanisms to assist licensing agencies and other stakeholders—medical providers, law enforcement officers, social service providers, family members—in better identifying medically at-risk individuals; assessing their driving fitness through a comprehensive evaluation of visual, physical, and cognitive functions; and enabling their driving for as long as safely possible. 
In the case of older drivers, NHTSA recognizes that only a fraction of older drivers are at increased risk of being involved in an accident and focuses its efforts on providing appropriate research-based materials and information to the broad range of stakeholders who can identify and influence the behavior of at-risk drivers. Initiatives undertaken by NHTSA and its partner organizations include:

Model Driver Screening and Evaluation Program. Initially developed by NHTSA in partnership with AAMVA, and supported by researchers funded by NIA, the program provides a framework for driver referral, screening assessment, counseling, and licensing actions. The guidance is based on research that relates an individual’s functional abilities to driving performance and reflects the results of a comprehensive research project carried out in cooperation with the Maryland Motor Vehicle Administration. Recent research supported under this program and with NIA grants evaluated a range of screenings related to visual, physical, and cognitive functions that could be completed at a licensing agency and may effectively identify drivers at an increased risk of being involved in a crash.

Physician’s Guide to Assessing and Counseling Older Drivers. Developed by the American Medical Association to raise awareness among physicians, the guide cites relevant literature and expert views (as of May 2003) to assist physicians in judging patients’ fitness to drive. The guide is based on NHTSA’s earlier work with the Association for the Advancement of Automotive Medicine. This work—a detailed literature review—summarized knowledge about various categories of medical conditions, their prevalence, and their potential impact on driving ability.

Countermeasures That Work: A Highway Safety Countermeasure Guide for State Highway Safety Offices.
Developed with the Governors Highway Safety Association, this publication describes current initiatives in the areas of communications and outreach, licensing, and law enforcement—and the associated effectiveness, use, cost, and time required for implementation—that state agencies might consider for improving older driver safety.

NHTSA Web site. NHTSA maintains an older driver Web site with content for drivers, caregivers, licensing administrators, and other stakeholders to help older drivers remain safe.

NIA research. NIA is supporting research on several fronts in studying risk factors for older drivers and in developing new tools for driver training and driver fitness assessment.

A computer-based training tool is being developed to help older drivers improve the speed with which they process visual information. This tool is a self-administered interactive variation of validated training techniques that have been shown to improve visual processing speed. The tool is being designed as a cost-effective mechanism that can be broadly implemented, at social service organizations, for example, and made accessible to older drivers.

Driving simulators are being studied as a means of testing driving ability and retraining drivers in a manner that is more reliable and consistent than on-road testing. Virtual reality driving simulation is a potentially viable means of testing that could more accurately identify cognitive and motor impairments than could on-road tests, which are comparatively less safe and more subjective.

Research is ongoing to evaluate the impacts of hearing loss on cognitive functions in situations, such as driving, that require multitasking. Results of the research may provide insights into what level of auditory processing is needed for safe driving and may lead to development of future auditory screening tools.
Studies that combine a battery of cognitive function and road/driving simulator tests are being conducted to learn how age-related changes lead to hazardous driving. Results of these studies may prove useful in developing screening tests to identify functionally impaired drivers—particularly those with dementia—who are at risk of being involved in a crash and may be unfit to drive.

NHTSA is also developing guidelines to assist states in implementing assessment practices. To date, NHTSA’s research and model programs have had limited impact on state licensing practices. For example, according to NHTSA, no state has implemented the guidelines outlined in its Model Driver Screening and Evaluation Program. Furthermore, there is insufficient evidence on the validity and reliability of driving assessments, so states may have difficulty discerning which assessments to implement. To assist states in implementing assessment practices, NHTSA, as authorized under SAFETEA-LU section 2017, developed a plan to, among other things, (1) provide information and guidelines to people (medical providers, licensing personnel, law enforcement officers) who can influence older drivers and (2) improve the scientific basis for licensing decisions. In its plan, NHTSA notes that the most important work on older driver safety that needs to occur in the next 5 years is refining screening and assessment tools and getting them into the hands of the users who need them.

As an element of its plan, NHTSA is cooperating with AAMVA to create a Medical Review Task Force that will identify areas where standards of practice to assess the driving of at-risk individuals are possible and develop strategies for implementing guidelines that states can use in choosing which practices to adopt. The task force will—in areas such as vision and cognition—define existing practices used by states and identify gaps in research to encourage consensus on standards.
NHTSA officials said that work is currently under way to develop neurological guidelines—which will cover issues related to cognitive assessments—and anticipate that the task force will report its findings in 2008.

Of the six states we visited, five—California, Florida, Iowa, Maryland, and Michigan—have active multidisciplinary coordination groups that may include government, medical, academic, and social service representatives, among others, to develop strategies and implement efforts to improve older driver safety. Each of these states identified its coordination group as a key initiative in improving older driver safety. As shown in table 4, the coordinating groups originated in different ways and vary in size and structure. For example, Florida’s At-Risk Driver Council was formally established under state legislation, while Maryland’s group functions on an ad hoc basis with no statutory authority. The approaches taken by these groups in addressing older driver safety issues vary as well. For example, California’s large task force broadly reaches several state agencies and partner organizations, and the task force leaders oversee the activity of eight work groups in implementing multiple action items to improve older driver safety. In contrast, Iowa’s Older Driver Target Area Team is a smaller group that operates through informal partnerships among member agencies and is currently providing consulting services to the Iowa Department of Transportation on the implementation of older driver strategies identified in Iowa’s Comprehensive Highway Safety Plan. Members of the coordination groups we spoke with said that their state could benefit from information about other states’ practices.
For example, coordinating group members told us that sharing information about leading road design and licensing practices, legislative initiatives, research efforts, and model training programs that affect older drivers could support decisions about whether to implement new practices. Furthermore, group members said that identifying the research basis for practices could help them assess the benefits to be derived from implementing a particular practice. While some mechanisms exist to facilitate information exchanges on some topics, such as driver fitness assessment and licensing through AAMVA’s Web site, there is no mechanism for states to share information on the broad range of efforts related to older driver safety.

In addition to coordinating groups, the six states have ongoing efforts to improve older driver safety in the areas of strategic planning, education and awareness, licensing and driver fitness assessment, engineering, and data analysis. The following examples highlight specific initiatives and leading practices in each of these categories.

Strategic planning—Planning documents establish recommended actions and provide guidance to stakeholders on ways to improve older driver safety.

The Michigan Senior Mobility Action Plan, issued in November 2006, builds upon the state’s 1999 plan (Elderly Mobility & Safety—The Michigan Approach) and outlines additional strategies, discusses accomplishments, and sets action plans in the areas of planning, research, education and awareness, engineering countermeasures, alternative transportation, housing and land use, and licensing designed to (1) reduce the number and severity of crashes involving older drivers and pedestrians, (2) increase the scope and effectiveness of alternative transportation options available to older people, (3) assist older people in maintaining mobility safely for as long as possible, and (4) plan for a day when driving may no longer be possible.
In implementing this plan, officials are exploring the development of a community-based resource center that seniors can use to find information on mobility at a local level.

Traffic Safety among Older Adults: Recommendations for California—developed through a grant from California’s Office of Traffic Safety and published in August 2002—offers a comprehensive set of recommendations and provides guidance to help agencies and communities reduce traffic-related injuries and fatalities to older adults. The Older Californian Traffic Safety Task Force was subsequently established to coordinate the implementation of the report’s recommendations.

Education/awareness—Education and public awareness initiatives enable outreach to stakeholders interested in promoting older driver safety.

Florida GrandDriver®—based on a program developed by AAMVA—takes a multifaceted approach to public outreach through actions such as providing Web-based information related to driver safety courses and alternative transportation; training medical, social service, and transportation professionals; offering safety talks at senior centers; and sponsoring CarFit events. According to the Florida Department of Highway Safety and Motor Vehicles, a total of 75 training programs and outreach events were conducted under the GrandDriver program between 2000 and 2006.

California—through its Older Californian Traffic Safety Task Force—annually holds a “Senior Safe Mobility Summit” that brings subject-matter experts and recognized leaders together to discuss issues and heighten public understanding of long-term commitments needed to help older adults drive safely longer.

Assessment/licensing—Assessment and licensing initiatives are concerned with developing better means for stakeholders—license administrators, medical professionals, law enforcement officers, family members—to determine driver fitness and provide remedial assistance to help older people remain safe while driving.
California’s Department of Motor Vehicles is continuing to develop a progressive “three-tier” system for determining drivers’ wellness—through nondriving assessments in the first two tiers—and estimating driving fitness in a third-tier road test designed to assess the driver’s ability to compensate for driving-relevant functional limitations identified in the first two tiers. The system, currently being tested at limited locations, is being developed to keep people driving safely for as long as possible by providing a basis for a conditional licensing program that can aid drivers in improving their driving-relevant functioning and in adequately compensating for their limitations.

Oregon requires physicians and other designated medical providers to report drivers with severe and uncontrollable cognitive or functional impairments that affect the person’s ability to drive safely. Oregon Driver and Motor Vehicle Services (ODMVS) evaluates each report and determines if immediate suspension of driving privileges is necessary. A person whose driving privileges have been suspended needs to obtain medical clearance and pass ODMVS vision, knowledge, and road tests in order to have his or her driving privileges reinstated. In cases where driving privileges are not immediately suspended, people will normally be given between 30 and 60 days to pass ODMVS tests or provide medical evidence indicating that the reported condition does not present a risk to their safe driving.

Maryland was the first state to establish a Medical Advisory Board (MAB)—created by state legislation in 1947—which is currently one of the most active boards in the United States. Maryland’s MAB manages approximately 6,000 cases per year—most involving older drivers.
Drivers are referred from a number of sources—including physicians, law enforcement officers, friends, and relatives—and the MAB reviews screening results, physician reports, and driving records, among other information, to determine driving fitness. The MAB’s opinion is then considered by Maryland’s Motor Vehicle Administration in making licensing decisions.

The Iowa Department of Motor Vehicles can issue older drivers restricted licenses that limit driving to daylight hours, specific geographic areas, or low-speed roads. Restricted licensing, also referred to as “graduated de-licensing,” seeks to preserve the driver’s mobility while protecting the health of the driver, passengers, and others on the road by limiting driving to low-risk situations. About 9,000 older drivers in Iowa have restricted licenses. Iowa license examiners may travel to test older drivers in their home towns, where they feel most comfortable driving.

Engineering—Road design elements such as those recommended by FHWA are implemented to provide a driving environment that accommodates older drivers’ needs.

A demonstration program in Michigan, funded through state, county, and local government agencies, along with AAA Michigan, made low-cost improvements at over 300 high-risk, urban, signalized intersections in the Detroit area. An evaluation of 30 of these intersections indicated that the reduction in the injury rate for older drivers was more than twice that for drivers aged 25 to 64. The next phase of the program is development of a municipal tool kit for intersection safety, for use by municipal leaders and planners, to provide a template for implementing needed changes within their jurisdictions.

The Iowa Department of Transportation (IDOT) has undertaken several initiatives in road operations, maintenance, and new construction to enhance the driving environment for older drivers.
Among its several initiatives, IDOT is: using more durable pavement markings on selected roads and servicing all pavement markings on a performance-based schedule to maintain their brightness; adding paved shoulders with the edge line painted in a shoulder rumble strip to increase visibility and alert drivers when their vehicles stray from the travel lane; converting 4-lane undivided roads to 3-lane roads with a dedicated left-turn lane to simplify turning movements; encouraging the use of more dedicated left-turn indications (arrows) on traffic signals on high-speed roads; installing larger street name signs; replacing warning signs with ones that have a fluorescent yellow background to increase visibility; converting to Clearview fonts on Interstate signs for increased sign legibility; demonstrating older driver and pedestrian-friendly enhancements on a roadway corridor in Des Moines; and promoting local implementation of roadway improvements to benefit older drivers by providing training to city and county engineers and planners.

The Transportation Safety Work Group of the Older Californian Traffic Safety Task Force provided engineering support in updating California’s highway design and traffic control manuals to incorporate FHWA’s recommended practices for making travel safer and easier for older drivers. Technical experts from the work group coordinated with the Caltrans design office in reviewing the Caltrans Highway Design Manual and updating elements related to older driver safety. Additionally, the work group managed an expedited process to have the California Traffic Control Devices Committee consider and approve modifications to signing and pavement marking standards in the California Manual on Uniform Traffic Control Devices that benefit older drivers.

Data analysis—Developing tools to accurately capture accident data enables trends to be identified and resources to be directed to remediating problems.
Iowa has a comprehensive data system that connects information from multiple sources, including law enforcement records (crash reports, traffic citations, truck inspection records) and driver license and registration databases, and can be easily accessed. For example, the system allows law enforcement officers to electronically access a person’s driving record and license information at a crash scene and enter their crash reports into the data system on-scene. Data captured through this process—including the location of all crashes—is less prone to error and can be geographically referenced to identify safety issues.

In the case of older driver safety, several universities are utilizing Iowa crash data in research efforts. For example, University of Northern Iowa researchers utilized crash data and geospatial analysis to demonstrate how older driver crash locations could be identified and how roadway elements could be subsequently modified to improve safety for older drivers. University of Iowa researchers have used the data in behavioral research to study actions of older drivers and learn where changes in roadway geometrics, signing, or other roadway elements could assist older drivers with their driving tasks. Also, Iowa State University’s Center for Transportation Research and Education (CTRE) has used the data to study a number of older driver crash characteristics and supports other older driver data analysis research projects with the Iowa Traffic Safety Data Service.

Florida is developing a Mature Driver Database (MDDB) that will collect several types of data—vision renewal data, crash data, medical review data—to be accessible through the Department of Highway Safety and Motor Vehicles (DHSMV) Web site. According to DHSMV officials, this database is intended to be used across agencies to facilitate strategic planning. DHSMV may use the database, for example, to track driver performance on screenings and analyze the effectiveness of screening methods.
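To illustrate the kind of analysis that linked crash and license data make possible, the sketch below aggregates crashes involving drivers aged 65 and older by location to flag candidate sites for engineering review. The records, field names, and thresholds are assumptions for illustration only, not the schema of Iowa’s actual system.

```python
from collections import Counter

# Illustrative crash records as they might look after a crash report is
# linked to the driver-license database (fields are assumptions, not
# any state's actual schema).
crashes = [
    {"location": "1st & Main", "driver_age": 72},
    {"location": "1st & Main", "driver_age": 81},
    {"location": "1st & Main", "driver_age": 34},
    {"location": "Elm & 5th", "driver_age": 68},
    {"location": "Elm & 5th", "driver_age": 45},
]

def older_driver_hotspots(records, min_age=65, min_crashes=2):
    """Count crashes involving drivers at or above min_age by location
    and return locations meeting a review threshold, busiest first."""
    counts = Counter(
        r["location"] for r in records if r["driver_age"] >= min_age
    )
    return [(loc, n) for loc, n in counts.most_common() if n >= min_crashes]

print(older_driver_hotspots(crashes))  # [('1st & Main', 2)]
```

In practice the locations would be geographic coordinates rather than street names, allowing the flagged sites to be mapped against roadway inventories, but the aggregation logic is the same.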
Planned MDDB enhancements include providing links to additional data sources such as census and insurance databases.

Older driver safety is not a high-priority issue in most states and, therefore, receives fewer resources than other safety concerns. However, the aging of the American population suggests that older driver safety issues will become more prominent in the future. Some states—with federal support—have adopted practices to improve the driving environment for older road users and have implemented assessment practices to support licensing requirements for older drivers that are more stringent than requirements for younger drivers. However, information on the effectiveness of these practices is limited, and states have been reluctant to commit resources to initiatives whose effectiveness has not been clearly demonstrated. Some states have also implemented additional initiatives to improve older driver safety, such as establishing coordination groups involving a broad range of stakeholders and developing initiatives in the areas of strategic planning, education and outreach, assessment and licensing practices, engineering, and data analysis. NHTSA and FHWA also have important roles to play in promoting older driver safety, including conducting and supporting research on standards for the driving environment and on driver fitness assessment. While states hold differing views on the importance of older driver safety and have adopted varying practices to address older driver safety issues, it is clear that there are steps that states can take to prepare for the anticipated increase in the older driver population and simultaneously improve safety for all drivers. However, state resources are limited, so information on other states’ initiatives and on federal efforts to develop standards for the driving environment and driver fitness assessment practices could assist states in implementing improvements for older driver safety.
To help states prepare for the substantial increase in the number of older drivers in the coming years, we recommend that the Secretary of Transportation direct the FHWA and NHTSA Administrators to implement a mechanism that would allow states to share information on leading practices for enhancing the safety of older drivers. This mechanism could also include information on other initiatives and guidance, such as FHWA’s research on the effectiveness of road design practices and NHTSA’s research on the effectiveness of driver fitness assessment practices.

We provided a draft of this report to the Department of Health and Human Services and to the Department of Transportation for review and comment. The Department of Health and Human Services agreed with the report and offered technical suggestions, which we have incorporated as appropriate. (See app. III for the Department of Health and Human Services’ written comments.) The Department of Transportation did not offer overall comments on the report or its recommendation. The department did offer several technical comments, which we incorporated where appropriate.

We are sending copies of this report to interested congressional committees. We are also sending copies of this report to the Secretary of Transportation and the Secretary of Health and Human Services. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-2834 or siggerudk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
This report addresses (1) what the federal government has done to promote practices to make roads safer for older drivers and the extent to which states have implemented those practices, (2) the extent to which states assess the fitness of older drivers and what support the federal government has provided, and (3) what initiatives selected states have implemented to improve the safety of older drivers.

To determine what the federal government has done to promote practices to make roads safer for older drivers, we interviewed officials from the Federal Highway Administration (FHWA) within the U.S. Department of Transportation (DOT) and the American Association of State Highway and Transportation Officials (AASHTO) and reviewed manuals and other documentation to determine what road design standards and guidelines have been established, the basis for their establishment, and how they have been promoted. We also reviewed research and interviewed a representative of the National Cooperative Highway Research Program (NCHRP) to gain perspective on federal initiatives to improve the driving environment for older drivers. Finally, to determine trends in accidents involving older drivers, we reviewed and analyzed crash data from the U.S. DOT’s Fatality Analysis Reporting System database and General Estimates System database.

To obtain information on the extent to which states are implementing these practices, we surveyed and received responses from DOTs in each of the 50 states and the District of Columbia. We consulted with NCHRP, FHWA, and AASHTO in developing the survey. The survey was conducted from the end of September 2006 through mid-January 2007. During this time period, we sent two waves of follow-up questionnaires to nonrespondents in addition to the initial mailing. We also made phone calls and sent e-mails to a few states to remind them to return the questionnaire.
We surveyed state DOTs to learn the extent to which they have incorporated federal government recommendations on road design elements into their own design guides and implemented selected recommendations in their construction, operations, and maintenance activities. We also identified reasons state DOTs rejected recommendations and determined the proportion of practitioners in each state trained to implement them. In addition, we asked state DOTs to evaluate the extent to which they have developed plans (defined in Strategic Highway Safety Plans) and programmed projects (listed in Statewide Transportation Improvement Programs) for older driver safety as provided for by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Before fielding the questionnaire, we reviewed SAFETEA-LU and prior highway legislation to identify the framework for states to develop and implement older driver safety programs. Additionally, we conducted separate in-person pretests with officials from three state DOTs and revised our instrument as a result of the information obtained during those pretests. We took steps in developing the questionnaire and in collecting and analyzing the data to minimize errors that could occur during those stages of the survey process. A copy of the questionnaire and detailed survey results are available at www.gao.gov/cgi-bin/getrpt?GAO-07-517SP. To determine the extent to which states assess the fitness of older drivers and what support the federal government has provided, we interviewed officials and reviewed relevant documents from the National Highway Traffic Safety Administration within the U.S. DOT, the National Institute on Aging and the Administration on Aging within the U.S. Department of Health and Human Services, and the American Association of Motor Vehicle Administrators—a nongovernmental organization that represents state driver licensing agencies. 
We determined the extent to which the guidelines and model programs of these agencies addressed the visual, physical, and cognitive deficits that may afflict older drivers. We also reviewed federal, state, and nongovernmental Web sites that contained information on states’ older driver licensing practices and analyzed their content so that we could compare practices across states. To obtain information on the activities of partner nongovernmental organizations in researching and promoting practices to assess older driver fitness, among other initiatives, we interviewed officials from AAA, AARP, the Insurance Institute for Highway Safety, and the Governors Highway Safety Association. To learn of states’ legislative initiatives concerning driver fitness assessment and licensing, we interviewed a representative of the National Conference of State Legislatures. We also interviewed officials from departments of motor vehicles in selected states to report on their efforts in developing, implementing, and evaluating older driver screening and licensing programs. To obtain information on initiatives that selected states have implemented, we conducted case studies in six states—California, Florida, Iowa, Maryland, Michigan, and Oregon—that transportation experts identified as progressive in their efforts to improve older driver safety. We chose our case study states based on input from an NCHRP report highlighting states with leading practices in the areas of education/awareness, assessment/licensing, engineering, agency coordination, strategic planning, and data analysis. We compared practices across the six states to identify common themes. We also identified, to the extent possible, key practices based on our analysis. The scope of our work focused on older driver safety. Prior GAO work addressed the associated issue of senior mobility for those who do not drive. 
We conducted our review from April 2006 through April 2007 in accordance with generally accepted government auditing standards. We requested official comments on this report from the U.S. Department of Transportation and the U.S. Department of Health and Human Services.

Tables 5 through 7 list older driver licensing requirements in effect in certain states.

In addition to the individual named above, Sara Vermillion, Assistant Director; Michael Armes; Sandra DePaulis; Elizabeth Eisenstadt; Joel Grossman; Bert Japikse; Leslie Locke; Megan Millenky; Joshua Ormond; and Beverly Ross made key contributions to this report.

As people age, their physical, visual, and cognitive abilities may decline, making it more difficult for them to drive safely. Older drivers are also more likely to suffer injuries or die in crashes than drivers in other age groups. These safety issues will increase in significance because older adults represent the fastest-growing U.S. population segment. GAO examined (1) what the federal government has done to promote practices to make roads safer for older drivers and the extent to which states have implemented those practices, (2) the extent to which states assess the fitness of older drivers and what support the federal government has provided, and (3) what initiatives selected states have implemented to improve the safety of older drivers. To conduct this study, GAO surveyed 51 state departments of transportation (DOT), visited six states, and interviewed federal transportation officials. The Federal Highway Administration (FHWA) has recommended practices--such as using larger letters on signs--targeted to making roadways easier for older drivers to navigate. FHWA also provides funding that states may use for projects that address older driver safety. States have, to varying degrees, adopted FHWA's recommended practices. 
For example, 24 states reported including about half or more of FHWA's practices in state design guides, while the majority of states reported implementing certain FHWA practices in roadway construction, operations, and maintenance activities. States generally do not place high priority on projects that specifically address older driver safety but try to include practices that benefit older drivers in all projects. More than half of the states have implemented licensing requirements for older drivers that are more stringent than requirements for younger drivers, but states' assessment practices are not comprehensive. For example, these practices primarily involve more frequent or in-person renewals and mandatory vision screening but do not generally include assessments of physical and cognitive functions. While requirements for in-person license renewals generally appear to correspond with lower crash rates for drivers over age 85, the validity of other assessment tools is less clear. The National Highway Traffic Safety Administration (NHTSA) is sponsoring research and other initiatives to develop and assist states in implementing more comprehensive driver fitness assessment practices. Five of the six states GAO visited have implemented coordination groups to assemble a broad range of stakeholders to develop strategies and foster efforts to improve older driver safety in areas of strategic planning, education and awareness, licensing and driver fitness assessment, roadway engineering, and data analysis. However, knowledge sharing among states on older driver safety initiatives is limited, and officials said states could benefit from knowledge of other states' initiatives. |
The Javelin is a man-portable, fire-and-forget antitank weapon system composed of two major components—a command launch unit and a round, which is a missile sealed in a disposable launcher container. (See fig. 1.) For operation of the system, the round is mated with the launch unit, but the launch unit may also be used in a stand-alone mode for battlefield surveillance and target detection. The Army expects Javelin to defeat armored targets out to distances of 2,000 meters, during the day or night and in adverse weather. The Army completed development of the Javelin system in December 1993. However, operational testing showed that the system’s design did not meet operational suitability requirements. As a result, the Army made numerous design changes to the launch unit and round before the contractor initiated low-rate production in June 1994. The Javelin system has experienced significant cost increases since it was first approved. In the early 1990s, the Army made budget decisions that stretched Javelin’s procurement phase from 6 to 14 years. In addition, the end of the Cold War caused the Army and Marine Corps to reduce Javelin’s procurement quantities. Combined, these actions increased the average cost of the launch unit to about 4.5 times its originally estimated cost and more than doubled the average cost of the round. To mitigate these cost increases, the Army is attempting to shorten the system’s procurement phase. Initially, the Army planned to shorten procurement from 14 to 11 years by using production, logistics, and multiyear savings to purchase Javelin systems earlier than planned. On February 13, 1996, the Army announced that Program Budget Decision 104 added $993 million of additional procurement funds for fiscal years 1999 through 2001 to reduce Javelin’s procurement phase to 9 years. As the program is currently planned, these funds allow the Army to complete fielding by fiscal year 2004. 
The Army also hopes to reduce Javelin’s cost by awarding two multiyear contracts—one in 1997 and another in 2000. Multiyear procurement is a method of acquiring up to 5 years’ requirements of a system with a single contract. These procurements help the government reduce costs and provide incentives to contractors to improve productivity by investing in capital facilities, equipment, and advanced technology. However, multiyear contracts decrease annual budget flexibility. The Congress and the Department of Defense (DOD) commit themselves to fund multiyear contracts through completion or pay any contract cancellation charges, which may be substantial. According to the President’s 1997 Budget, the Army and the Marine Corps plan to purchase 31,269 Javelin rounds and 3,264 command launch units. The Army’s share of the purchase is 26,600 rounds and 2,800 command launch units. The Marine Corps plans to acquire 4,669 rounds and 464 launch units. The Army has not demonstrated that Javelin’s design is sufficiently stable for a multiyear production contract. By awarding a multiyear production contract before the design has stabilized and the system has been thoroughly tested, the Army risks cost overruns and/or schedule delays that could more than offset the savings produced by the contract. Pursuant to 10 U.S.C. 2306b, a military service is authorized to award multiyear contracts for the purchase of weapon systems if certain criteria are met. These criteria include the requirement that the design of the system remain substantially unchanged during the period covered by the multiyear contract. If the government awards a multiyear contract for a weapon system with an unstable design, the government could lose its budget flexibility without corresponding cost savings because contract changes or termination costs may substantially increase the cost of the weapon system. 
Between the end of development in 1993 and the beginning of low-rate production in 1994, the Army made 39 design changes to correct reliability problems. Since 1994, the Army has made a number of changes to the system’s design to reduce production and logistics costs and expects to continue making changes through the beginning of full-rate production in 1997. Most of these changes are being incrementally incorporated into hardware produced under three low-rate production contracts. The contractor is continuing production while changes are developed and qualified. As changes are approved, the contractor incorporates them into units in the production process. The Army estimates it will spend approximately $49.4 million from fiscal year 1994 through fiscal year 1997 while Javelin is in low-rate production to redesign various Javelin components. These changes are expected to reduce production and logistics costs by $329 million. However, because redesigned components are added to the production line as they are developed and qualified, the contractor will produce at least one and sometimes two variations of the Javelin system during each of the three low-rate production runs. According to current schedules, the last planned changes will not be incorporated into the production line until after full-rate production begins in 1997 under the planned multiyear contract. Javelin tests conducted to date have identified the need for additional design changes. During the first 8 months of Javelin round assembly, the round contractor stopped final assembly twice so engineers could redesign components that failed during testing. In January 1996, warheads in missiles undergoing production verification tests failed to function properly. Engineers said the failures occurred after they made minor changes to the fuzing device’s electronics. However, the warhead failures stopped production for 4 weeks until a remedy could be identified and implemented. 
In April, the contractor stopped round assembly for 2 weeks when electrical problems in the restraint pin mechanisms of two missiles occurred during a limited user test. The problems prevented one missile from leaving the launch tube after the gunner pulled the trigger and caused another to dive into the ground shortly after launch. During this test, a third missile failed when a short occurred in a transistor. This missile also failed to leave the launch tube. Army officials said the restraint pin assembly has been modified to remedy the problems that occurred during the limited user test. The contractor is retrofitting already produced missiles with the new assembly. Other unscheduled design changes could also be necessary as the Army continues to test the Javelin system. Even though it is making over 50 separate changes to Javelin’s original design, the Army does not plan to conduct any operational tests of missiles with all of the design changes until after full-rate production begins under a multiyear contract. In the opinion of Army officials, technical tests and a limited user test provide adequate information on Javelin’s operational capability. However, technical tests are conducted under controlled conditions and the limited user test does not test hardware that incorporates all design changes. The military services are statutorily required to operationally test each major weapon system under realistic combat conditions to determine if the system is operationally effective and suitable for combat prior to entering full-rate production. The military services are also required by DOD regulation to retest equipment if the design changes materially after initial operational testing. Therefore, we believe the Army must ensure that the redesigned Javelin works as intended prior to any commitment to full-rate production. In our view, the best way to accomplish that would be to conduct additional operational tests using fully redesigned systems. 
The Javelin system that will enter full-rate production will be significantly different from the Javelin that the Army operationally tested in 1993. To correct reliability failures recognized during full-scale development, and to reduce the cost of producing and supporting Javelin, engineers are changing many major components of the system. Between the end of the early operational testing and the beginning of low-rate production, the Army made changes to the round’s guidance unit, fuzing mechanism, propulsion unit, control system, battery coolant unit, and launch tube assembly, as well as the launch unit’s detection device, optics, display screen, and software. The Army will make additional round and launch unit changes during low-rate production. According to project office estimates, about 35 percent of the command launch unit’s components and 23 percent of the round will be redesigned during low-rate production. While Javelin’s Chief Engineer agreed that the command launch unit the Army plans to produce during full-rate production will be significantly different from the original configuration, he said that the round changes will not be significant. However, tests of warheads and rounds from the first low-rate production line have already identified potentially serious problems. Before low-rate production began, engineers made changes to electronic components in the warhead fuzing device. When missiles incorporating the changes were fired, the warheads failed to function properly. Army officials considered this problem so serious that they stopped round assembly until engineers identified and implemented a solution. Another post-development change—buying a liner for the main charge warhead from a second source—also caused problems. The liner should collapse and form a jet capable of perforating armor. However, the new vendor’s liner formed a jet that was not compatible with other Javelin components. 
Project office engineers believe the jet would have degraded Javelin’s lethality. The engineers modified Javelin components to correct the problem. Army officials told us that technical tests will provide sufficient proof that Javelin is suitable for combat. However, these tests—which determine if redesigned hardware (1) performs its intended function, (2) is compatible with other components of the system, and (3) can withstand various environmental stresses—are conducted under controlled conditions. Some technical tests are planned by the contractor and conducted at its facility. Even if tests are controlled by the government, test officials try to control as many variables as possible. For example, an Army operational test official said that during technical tests, trained technicians handle the equipment and follow precise guidelines. According to one DOD systems analyst, hardware may be sufficiently reliable to pass required technical tests, but still lack the endurance needed for battlefield conditions. The Army and the Marine Corps are jointly conducting one limited user test of Javelin prior to full-rate production. However, this test will not provide data that the Army can use to assess the suitability of the full-rate production configuration of Javelin. Soldiers participating in the test are using command launch units and rounds coming off the first low-rate production line that do not include all planned cost reduction changes. The Army does not plan to operationally test the system with all changes until 1998, over a year after the Army makes its decision to begin Javelin full-rate production. DOD requires that before Javelin proceeds into full-rate production, flight tests must prove the round is 82 percent reliable. According to the Army, tests conducted through June 19, 1996, demonstrated the round should perform as designed 81.5 percent of the time. 
However, some of the tests used to predict reliability may have inflated the reliability score. By the end of May 1996, the Army had completed 22 planned test flights under controlled test conditions. The Army did not score five of the tests for reliability because the tests did not meet the Army’s criteria for a valid reliability test or the purpose of the flights was to assess round safety. Of the 17 scored tests, 2 were failures. In one test, the missile overflew its target; in another, the missile did not leave the launch tube because its launch motor did not fire. The Army planned to fire six more rounds as part of a limited user test. However, after three failures, Javelin’s Project Manager halted the tests to determine the cause of the failures and, if required, make design modifications. When flight tests were halted, 75 percent of all rounds tested had functioned as intended upon launch. Before resuming the limited user test, the Army modified a missile component and completed 12 unplanned controlled test flights to verify performance of the design change. Of the 12 flights, 10 were successful. With the design deficiency corrected, the Army resumed the limited user test and successfully fired six rounds. According to the Army, considering the results of all 38 scored tests, 81.5 percent of the rounds tested met established reliability criteria. However, the last 18 tests may not be useful for predicting reliability because the Army used a method of selecting the missiles for these tests that could have affected the test outcome and inflated the reliability score. Army officials carefully screened the production records of the missiles selected for the 12 controlled test flights and the 6 final limited user tests. Only missiles that the Army was highly confident would perform as designed were retained for testing. Test officials said about one-third of the missiles were eliminated from the sample. 
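The reliability figures cited above can be checked with a simple tally. The grouping below is our reconstruction from the counts reported in this section, not an official Army scoring sheet; in particular, the assumption that the three failures in the halted limited user test were all counted among the 38 scored tests is ours, made because the other reported counts (17, 12, and 6) only sum to 35. A minimal sketch:

```python
# Reconstructed tally of the 38 scored Javelin flight tests described in
# this report. The grouping is inferred from the reported counts and is
# an assumption, not an official scoring record.
scored_groups = [
    # (tests scored, successes)
    (17, 15),  # 22 planned flights; 5 unscored, 2 of the 17 scored failed
    (3, 0),    # limited user test halted after three failures
    (12, 10),  # unplanned controlled flights to verify the design change
    (6, 6),    # limited user test resumed; all six rounds succeeded
]

total = sum(n for n, _ in scored_groups)
successes = sum(s for _, s in scored_groups)

# Interim rate when flight tests were halted (first two groups only)
halt_rate = 100 * (15 + 0) / (17 + 3)   # 75.0 percent, as reported

overall_rate = 100 * successes / total  # about 81.6 percent

print(total, successes, halt_rate, round(overall_rate, 1))
```

The tally gives 38 scored tests with 31 successes, or about 81.6 percent, consistent with the Army's reported 81.5 percent once rounding or truncation is allowed for, and just under the 82 percent threshold DOD set for full-rate production; the 75 percent interim rate at the halt also matches the report.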
The Army does not agree that the 18 tests are not useful for assessing reliability. Project officials said the purpose of screening the missiles before testing them was to ensure that the latest configuration was being tested and that subsystem performance specifications were met, and to review the manufacturing and assembly process. They acknowledged, however, that these actions increased the likelihood that the tests would be successful. The officials said that they do not believe the screening process prejudiced test results. They said that since the completion of the limited user test, they have either tested or performed a second review of the production records for all eliminated rounds. As a result, the officials said they believe some missiles were needlessly eliminated from the sample. However, if a test or second production review indicated an eliminated missile was defective, all missiles at the contractor’s facility were screened for similar deficiencies. In addition, Javelin’s Project Manager said that rounds tested during the lot acceptance test scheduled for October will be randomly chosen and should further prove the round’s reliability. The Army plans to replace all 277 launch units manufactured under the 3 low-rate production contracts about 3 years after they are produced. The Army is redesigning the command launch unit to reduce production and logistics costs, and plans to replace all the original production units because it cannot afford to maintain two configurations of the launch unit. To minimize replacement costs, the Army could reduce quantities to be produced under its third low-rate production contract to a minimum level of production. During low-rate production, the Army is redesigning the launch units’ electronics and housing and adding built-in-test equipment that it estimates will reduce each unit’s procurement cost by an average of $14,590 and total logistics cost by $45.1 million. 
The contractor will not begin producing launch units with all the changes incorporated until 1997. Javelin’s Chief of Logistics said the Army cannot afford to maintain both the low-rate production and redesigned launch unit configurations. He said that if soldiers were given different launch units, the Army would have to maintain inventory and train personnel to repair both configurations. In addition, the Army would have to develop and produce test equipment for the low-rate production configuration because it will not have built-in-test equipment to diagnose system failures. Before the Army awarded the third low-rate production contract in February 1996, we expressed concern about the Army’s plan to produce launch units at a relatively high rate and then replace them only 3 years after the units are fielded. The Deputy Director of DOD’s Land Warfare Office, which is responsible for Javelin oversight, asked the Javelin Project Manager to delay contract award until his office and the project office could determine if actions could be taken to minimize replacement costs. Despite the request, the Project Manager awarded the contract. He later explained that reducing Javelin production would delay fielding to infantry battalions that urgently need an improved antiarmor system. However, officials in the Office of the Secretary of the Army for Research, Development, and Acquisition said Javelin is not needed to address an urgent threat as it was before the decline of the Warsaw Pact nations, but rather will be used to improve overall warfighting capability. The Army can still modify the third low-rate production contract to purchase as few as 36 launch units because the contractor has not begun assembly of the units and the level of production required to keep the manufacturing facility running is 3 units per month, or 36 units per year. The contract, when originally awarded on February 29, 1996, called for production of 125 units at a cost of about $29 million. 
According to project office cost officials, reducing the purchase to 36 launch units would decrease the contract cost by $18.5 million. But the officials said that purchasing fewer launch units will increase the per-unit cost of the remaining units because the contractor has already purchased materials and incurred costs in anticipation of production. However, they agreed that some of the materials could be used during future production contracts. In addition, the Army is already decreasing the number of command launch units being purchased under the contract. The Army has already decided to cancel production of 17 of these units and may cancel production of another 12 if an infantry battalion returns the 12 launch units it borrowed to participate in the Army’s Advanced Warfighting Experiment. According to Army estimates, the changes in the Javelin weapon system should result in a more effective, less expensive weapon. However, the Army risks these gains by accelerating production and committing to a multiyear contract before it has demonstrated that the system’s design is stable and operational tests prove the redesigned system is suitable for combat. The Army has already increased system cost by purchasing launch units in relatively large quantities before all design changes were incorporated. But replacement cost can be reduced somewhat by modifying the third low-rate production contract to purchase fewer launch units. Therefore, we recommend that the Secretary of Defense direct the Army to (1) award annual (vice multiyear) Javelin contracts for the minimum quantity needed to sustain production until the Army demonstrates that the system’s design is stable, (2) operationally test the redesigned Javelin before proceeding to full-rate production, and (3) modify the third low-rate production contract to reduce command launch unit production from 125 to the contractor’s minimum production level of 3 units per month or 36 total units. 
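The contract figures above imply the per-unit effect that project office cost officials describe. The sketch below uses only the dollar amounts cited in this report ($29 million for 125 units as awarded, and an estimated $18.5 million decrease if the buy were cut to 36 units); treating the residual contract cost as the price of the 36 remaining units is a simplifying assumption of ours, since the actual price would depend on the negotiated contract modification:

```python
# Per-unit arithmetic for the third low-rate production contract,
# using the dollar figures cited in this report.
original_cost = 29_000_000   # contract as awarded, for 125 launch units
original_units = 125
reduction = 18_500_000       # estimated decrease if the buy is cut to 36
reduced_units = 36

unit_cost_original = original_cost / original_units  # ~$232,000 per unit

# Assumption: remaining contract cost is spread over the 36 units.
residual_cost = original_cost - reduction            # $10.5 million
unit_cost_reduced = residual_cost / reduced_units    # ~$292,000 per unit

print(round(unit_cost_original), round(unit_cost_reduced))
```

Under this simplification, the original buy averages about $232,000 per launch unit, while the 36 remaining units would average roughly $292,000 apiece, which illustrates why officials said a smaller purchase raises the per-unit cost of the remaining units even as it lowers the total outlay.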
We obtained written comments on a draft of this report from DOD (see app. I). DOD disagreed with our recommendation that the Secretary of Defense direct the Army to award annual Javelin contracts for the minimum quantity needed until the Army demonstrates that the design of Javelin is stable. While DOD agreed that Javelin has undergone a large number of design changes, in its opinion the stability of the design has been verified through successful production verification testing and limited user testing. However, production verification testing for the Javelin configuration that the Army will produce during full-rate production is not complete, and full-rate production representative items have not been subjected to any type of operational test. Until the tests are successfully completed and the stability of Javelin’s design is demonstrated in production, the Army cannot be certain Javelin’s design is stable. DOD agreed that the redesigned Javelin should be operationally tested before proceeding to full-rate production. Before a decision is made in May 1997 to begin Javelin full-rate production, the Army will complete an operational test program with production representative hardware. DOD did not agree that the third low-rate production contract should be modified to reduce the command launch unit production from 125 units to 36 units. DOD commented that the (1) currently deployed Dragon antiarmor system cannot effectively engage or destroy modern armor; (2) savings of reducing the purchase to 36 units will be only $10 million—not the $18.5-million reduction in contract cost—if parts salvaged from low-rate production units can be used as repair parts; and (3) cost of replacing units produced during low-rate production is more than offset by the benefits of having Javelin in the contingency forces. 
Although we agree that Javelin should improve the Army and the Marine Corps’ warfighting capability, Army officials told us that there is no longer an urgent need for Javelin as there was before the decline of the Warsaw Pact nations. Without an urgent need, the Army should purchase only the quantity of command launch units required to keep the manufacturing facility running. We continue to believe that the Army should not pursue a multiyear production contract for Javelin at this time and should reduce the number of launch units procured under the third low-rate production contract. Therefore, we suggest that the Congress consider requiring that the Army (1) award annual (instead of multiyear) Javelin contracts for the minimum quantity needed to sustain production until the Army demonstrates that the system’s design is stable and (2) reduce the command launch unit production to the contractor’s minimum production level of three units per month. We reviewed the Army’s justification for a multiyear contract and discussed multiyear criteria with officials in the Army’s Javelin Project Office, Redstone Arsenal, Alabama, and the U.S. Marine Corps Ground Weapons System, Quantico, Virginia. We also obtained information on quantity requirements and Javelin’s design stability from the Army Office of the Deputy Chief of Staff for Operations and Plans, Washington, D.C., and the Army Materiel Systems Analysis Activity, Aberdeen, Maryland. To determine the adequacy of planned system testing, we obtained and reviewed test plans and reports from the Javelin Project Office. We discussed Javelin testing with project office officials and officials from the Army Operational Test and Evaluation Command, Alexandria, Virginia; the Office of the Director, Operational Test and Evaluation, Washington, D.C.; and the Army Materiel Systems Analysis Activity, Aberdeen, Maryland. 
To assess the Army’s decision to purchase launch units, we evaluated production and fielding plans and held discussions with officials in the Javelin Project Office; the Army Missile Command Acquisition Center, Redstone Arsenal, Alabama; the Office of the Secretary of the Army (Research, Development, and Acquisition), Washington, D.C.; and the Office of the Under Secretary of Defense for Acquisition and Technology, Washington, D.C. We conducted our review from December 1995 to June 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, and the Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have questions concerning this report, please contact me at (202) 512-4841. The major contributors to this report were Lee Edwards, Barbara Haynes, and John Randall. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated August 16, 1996. 1. DOD provided comments on the technical accuracy of the report. We have reviewed DOD’s suggestions and made changes as appropriate. 2. Based on new information provided by DOD as a result of its review of our report, we no longer question the stability of the Javelin quantities the Army and the Marine Corps will purchase during the multiyear contract. At the time of our audit, the Marine Corps had not formalized their plans to reduce their purchase of Javelin rounds and it appeared likely that quantities could be reduced during the period of the multiyear contract. With DOD’s assurance that the Marine Corps’ reductions will be known before the multiyear contract is awarded and that the Army anticipates no changes in their requirements, we have removed information regarding this issue from the report. 3. Javelin’s design has been in transition since it was operationally tested in 1993. 
Each production lot of Javelin through the first year of full-rate production will produce a different configuration of the system. The Army has not completed technical and operational tests of Javelin with all design changes incorporated. In addition, early tests have shown that some changes require additional redesign. By delaying the multiyear contract until the Army has successfully tested Javelin’s design and the design’s stability is demonstrated by production, the government can reduce the risk that additional redesign will reduce or eliminate multiyear cost savings. 4. We agree that the Javelin should be a significant improvement over the aging Dragon system. However, because there is no urgent threat, we believe that the Army should reduce its third low-rate production contract to purchase only the minimum quantity necessary to keep the manufacturing facility running. This will minimize the costs of replacing these launch units with redesigned units. 
GAO reviewed the Army's procurement of the Javelin missile system, focusing on whether the: (1) system meets established criteria for multiyear production contracts; (2) Army adequately tested the system to determine its suitability for full-rate production; and (3) Army's purchase of command launch units during limited production is appropriate. GAO found that the Army: (1) has not demonstrated that the Javelin's design is sufficiently stable for multiyear production; (2) does not plan to conduct operational testing of the missile until after full-rate production begins; (3) has extensively redesigned the system since it was operationally tested in 1993; (4) believes that its planned testing of the system will be adequate; (5) has conducted only limited testing, which may not be useful for predicting the system's reliability; and (6) could acquire fewer units under its low-rate initial production contract and still sustain the contractor's ability to produce the system.
The just compensation clause of the Fifth Amendment states “nor shall private property be taken for public use, without just compensation.” Initially, this clause applied to the government’s exercise of its power of eminent domain. In eminent domain cases, the government invokes its eminent domain power by filing a condemnation action in court against a property owner to establish that the taking is for a public use or purpose, such as the construction of a road or school, and to have the amount of just compensation due the property owner determined by the court. In such cases, the government takes title to the property, providing the owner just compensation based on the fair market value of the property at the time of taking. In later years, Supreme Court decisions established that regulatory takings are subject to the just compensation clause as well. In contrast to the direct taking associated with eminent domain, regulatory takings arise from the consequences of government regulatory actions that affect private property. In these cases, the government does not take action to condemn the property or offer compensation. Instead, the government effectively takes the property by denying or limiting the owner’s planned use of the property, referred to as an inverse taking. An owner claiming that a government action has effected a taking and that compensation is owed must initiate suit against the government to obtain any compensation due. The court awards just compensation to the owner upon concluding that a taking has occurred. In 1987, concerned with the number of pending regulatory takings lawsuits and with court decisions seen as increasing the exposure of the federal government to liability for such takings, the President’s Task Force on Regulatory Relief began drafting an executive order to direct executive branch agencies to more carefully consider the takings implications of their proposed regulations or other actions. 
According to a former Assistant Attorney General, this order was needed to protect public funds by minimizing government intrusion upon private property rights and to budget for the payment of just compensation when such intrusions were inevitable. The President issued this order, EO 12630, on March 15, 1988. According to the EO, actions subject to its provisions include regulations, proposed regulations, proposed legislation, comments on proposed legislation, or other policy statements that, if implemented or enacted, could cause a taking of private property. Such actions may include rules and regulations that propose or implement licensing, permitting, or other conditions, requirements or limitations on private property use. The EO also enumerates agency actions that are not subject to the order, including the exercise of the power of eminent domain and law enforcement actions involving seizure, for violations of law, of property for forfeiture, or as evidence in criminal proceedings. Among other things, the EO requires the U.S. Attorney General to issue guidelines to help agencies evaluate the takings implications of their proposed actions, and, as necessary, to update these guidelines to reflect fundamental changes in takings case law resulting from U.S. Supreme Court decisions. The Attorney General issued these guidelines on June 30, 1988, to establish a basic, uniform framework for federal agencies to use in their internal evaluations of the takings implications of administrative, regulatory, and legislative policies and actions. In addition, the guidelines discuss agency responsibilities for implementing the EO and the process for preparing agency-specific supplemental guidelines. 
The Attorney General’s guidelines provide that agencies should assess the takings implications of their proposed actions to determine their potential for a compensable taking and that decision makers should consider other viable alternatives, when available, to meet statutorily required objectives while minimizing the potential impact on the public treasury. In cases where alternatives are not available, the potential takings implications are to be noted, such as in a notice of proposed rulemaking. The guidelines also state that takings implication assessments are internal, predecisional management aids and that they are not subject to judicial review. In addition, the form and manner of these assessments are left up to each agency. The guidelines also include an appendix that provides detailed information regarding some of the case law surrounding consideration of whether a taking has occurred and the extent of any potential just compensation claim. For example, the appendix discusses the Penn Central Transportation Co. v. City of New York case, in which the Supreme Court set out a list of three “influential factors” for determining whether an alleged regulatory taking should be compensated: (1) the economic impact of the government action, (2) the extent to which the government action interfered with reasonable investment-backed expectations, and (3) the “character” of the government action. However, the appendix provides a caveat that it is not intended to be an exhaustive account of relevant case law, adding that the consideration of the potential takings of an action as well as the applicable case law will normally require close consultation between agency program personnel and agency counsel. In addition to requiring guidelines, the EO requires OMB to ensure that the policies of executive branch agencies are consistent with the EO’s principles, criteria, and requirements. 
For example, for proposed regulatory actions subject to OMB review, agencies are required to include a discussion summarizing the potential takings implications of these actions in their submissions to OMB. The EO also requires OMB to ensure that all takings awards levied against the agencies are properly accounted for in agencies’ budget submissions. Despite the existence of the EO, some Members of Congress hold the view that the enforcement of the just compensation clause with respect to regulatory takings is inadequate and that statutory measures are needed to reduce the infringement on private property rights resulting from government regulation and to ensure compensation in the event of such infringement. For example, a variety of legislation has been proposed in Congress over the past 10 years to achieve those goals. In general, according to a study prepared by the Congressional Budget Office, these bills included measures that would (1) increase the requirements for analysis and reporting that federal agencies must meet before making decisions that could restrict the uses of private property, (2) relax the procedural requirements that must be satisfied before a federal court will hear the merits of a takings claim, and (3) require that the budget of an agency whose action triggers a regulatory compensation claim be the source of any compensation awarded. Although property rights advocates have supported these legislative initiatives, others, including some environmental groups, have questioned the need for legislation and voiced the view that the consideration of the takings potential of an agency action should not impede the government’s ability to protect the environment or provide other societal benefits. Justice has not updated the general guidelines that it issued pursuant to the EO in June 1988 for evaluating the risk of and avoiding regulatory takings, but it has issued supplemental guidelines for three of the four agencies. 
Officials at Justice and two of the four agencies said that changes in takings case law related to Supreme Court decisions made since 1988 have not been significant enough to warrant a revision of the general guidelines. Justice officials also noted that because the guidelines provide a general framework for agencies to follow in implementing the EO, they do not require frequent revision. However, Interior and Agriculture officials said that it would be helpful to their staffs if Justice updated a summary of the key aspects of relevant case law contained in an appendix to the guidelines to reflect significant developments in the past 15 years. Similarly, some law professors and representatives of property rights groups noted that the body of relevant case law has evolved significantly over the past 15 years, requiring an update to the guidelines. Regarding supplemental guidelines, Justice has issued these guidelines for three of the four agencies, but has not done so for Agriculture. According to Justice and Agriculture officials, Agriculture’s supplemental guidelines went through several drafts in the early 1990s, but were never completed because the two agencies disagreed on issues such as how to assess the takings implications of changes in grazing and special use permits. However, Justice and Agriculture officials told us that Agriculture’s compliance with the EO has not been encumbered by the agency’s lack of supplemental guidelines. Agency officials and other experts differ on the need to update the Attorney General’s guidelines to reflect changes in regulatory takings case law since 1988. Justice officials said the guidelines have not been updated since 1988 because there have been no fundamental changes in regulatory takings case law, the EO’s criterion for an update. 
They said that the guidelines, as written, still cover the main issues in determining the risk of a regulatory taking and that subsequent Supreme Court decisions have not substantially changed this analysis. For example, these officials said the three-factor test outlined in the 1978 Penn Central case remains the most important guidance for analyzing the potential for a taking that is subject to just compensation. Justice officials also emphasized that the guidelines address only a general framework for agencies’ evaluations of the takings implications of their proposed actions and thus are not intended to be an up-to-date, comprehensive primer on all possible considerations. The guidelines state that the individual agencies must still conduct their own evaluations, including necessary legal research, when assessing the takings potential of a proposed regulation or action. Two of the four agencies supported Justice’s position that the guidelines do not need to be updated. Officials at the other two agencies expressed the view that an appendix to the guidelines that summarizes key regulatory takings case law should be updated. Regarding agencies that supported Justice’s position, Corps of Engineers staff indicated that based on their review of relevant Supreme Court decisions since 1988, there has been no fundamental change in the criteria for assessing potential takings and thus no update to the Attorney General’s guidelines is necessary. Similarly, EPA staff said that some of the takings cases decided since 1988 gave the appearance that the Court was changing the three-pronged test set out in the Penn Central decision. However, these officials noted that more recent cases have returned to the Penn Central test, thereby removing the need for updating the Attorney General’s guidelines. 
In contrast, officials at Interior and Agriculture said that it would be helpful if Justice updated the summary of key takings cases contained in an appendix to the guidelines to reflect significant developments in case law over the past 15 years. Other legal experts also said that the Attorney General’s guidelines should be updated, noting that regulatory takings case law has not remained static over the past 15 years. For example, a Congressional Research Service attorney who has written extensively on the issue of regulatory takings said that the guidelines should be updated to reflect more recent Supreme Court decisions. This attorney noted that while the EO does not define a “fundamental” change regarding the need for an update, a number of important cases have been decided since the guidelines were issued. For example, the attorney pointed to the Lucas v. South Carolina Coastal Council decision of 1992 concerning a state ban on the development of beachfront property. This attorney noted that this case laid out a categorical exception to the Penn Central test for regulations that deny a property owner all economically viable use of the owner’s lands. The attorney stated that Lucas made new law in clarifying when, notwithstanding a denial of all economically viable use, there is no taking. Similarly, other legal experts concerned with the protection of private property rights said that there have been significant developments in regulatory takings case law since 1988. These experts also cited Lucas and other cases and said that these cases further develop and/or limit the application of the three-pronged test outlined in the Penn Central case. These experts said that the mere passage of time and the sheer number of regulatory takings cases concluded since 1988 argue for updating the guidelines. 
In addition, one of these experts, a law professor who has written and lectured on the issue of regulatory takings, said that the level of specificity with which Justice prepared the original guidelines sets a precedent. This expert explained that there have been many important changes in regulatory takings case law since 1988 and that the guidelines should be updated to reflect these changes given the detailed manner in which the original guidelines were prepared. At the same time, another legal expert, an attorney from an environmental research group, indicated that the guidelines might not require updating. In general, this attorney said that regulatory takings cases concluded since 1988 reaffirm the three-pronged test in the Penn Central case. According to this attorney, the Lucas case was initially thought to be more significant, but more recently it has been read and interpreted more narrowly by the courts and therefore does not constitute a fundamental change in the law. Appendix II provides a summary of Supreme Court regulatory takings cases decided since 1988 that were cited as being important by officials we contacted or in the relevant literature and that may be appropriate for inclusion in the guidelines. The Attorney General has issued supplemental guidelines required by the EO for three of the four agencies—the Corps of Engineers, EPA, and Interior. Although several attempts were made to draft supplemental guidelines for Agriculture in the early 1990s, the Attorney General did not finalize and issue these guidelines because of unresolved issues. However, Justice and Agriculture officials indicated that the latter agency’s lack of supplemental guidelines has not hindered its compliance with the EO. The EO directed the Attorney General, in consultation with each executive branch agency, to issue supplemental guidelines for each agency as appropriate to the specific obligations of that agency. 
The Attorney General’s guidelines state that the supplement should prescribe implementing procedures that will aid the agency in administering its specific programs under the analytical and procedural framework presented in the EO and the Attorney General’s guidelines, including the preparation of takings implication assessments. In general, for certain agency actions, the three agencies’ supplemental guidelines include specific categorical exclusions from the EO’s provisions. For example, Interior’s guidelines exclude its nonlegislative actions to which the affected property owners have consented; regulations or permits authorizing the taking, possession, transportation, or use of migratory birds or wildlife; biological opinions issued pursuant to the Endangered Species Act under certain conditions; listings of certain species under the Endangered Species Act; and denial of permits to import species into or export species from the United States. Similarly, the Corps of Engineers’ guidelines exclude its denials “without prejudice” (i.e., the applicant can apply again) of Clean Water Act section 404 permits, because these denials are not considered substantive decisions. In addition, EPA’s guidelines exclude its actions related to the transportation, storage, disposal, registration, distribution, and use of pesticides; protection of public water systems and underground sources of drinking water; control of emissions of air pollutants; disposal of hazardous, solid, and medical waste; and control of actual or threatened releases of hazardous substances or pollutants or contaminants. The Attorney General has not issued supplemental guidelines for Agriculture because Justice and Agriculture could not reach agreement on how to assess the potential takings implications of the latter agency’s actions related to grazing and special use permits covering applicants’ use of public lands. 
In this regard, Agriculture officials said that because the agency issues, modifies, or denies literally thousands of grazing and special use permits every year, the agency was concerned about the resource implications of having to do a takings implication assessment in each case. In addition, in Agriculture’s view, the granting of a permit for the use of public lands does not convey “property rights” to the permit recipient, and thus agency actions to condition or deny such a permit do not constitute a potential taking. Accordingly, Agriculture argued that these permit actions should be excluded from the EO’s requirements or, if not, that the agency be allowed to do a generic takings implication assessment that would apply to multiple permits. Agriculture officials indicated that Justice officials did not agree with these suggestions, and the matter was never resolved. According to Agriculture officials, this lack of resolution resulted, in part, because of ongoing litigation against Agriculture alleging a taking related to the agency’s denial of a grazing permit and changing priorities related to the arrival of a new administration in 1993. Despite Agriculture’s lack of supplemental guidelines, agency officials said that their implementation of the EO and the Attorney General’s guidelines has not been encumbered. Justice officials agreed with this assessment. Although the EO’s requirements have not been amended or revoked since 1988, the four agencies’ implementation of some of its key provisions has changed over time because of subsequent guidance provided by OMB. For example, the agencies no longer prepare annual compilations of just compensation awards or account for these awards in their budget documents because OMB issued guidance in 1994 advising agencies that this information is no longer required. 
According to OMB, this information is not needed because the number and amount of these awards are small and the awards are paid from the Department of the Treasury’s Judgment Fund, rather than from the agencies’ appropriations. Each of the four agencies has designated an official—typically the chief counsel, general counsel, or solicitor—to be responsible for ensuring the agency’s compliance with the EO. Finally, the four agencies told us that they fully consider the potential takings implications of their planned regulatory actions, but provided us with limited documentary evidence to support this claim. The EO requires each executive branch agency to submit annually to OMB and Justice an itemized compilation report of all just compensation awards entered against the United States for regulatory takings related to the agencies’ activities. The EO also requires that agencies include information on these awards in their annual budget submissions. However, at present, the agencies are not complying with these provisions because of guidance provided by OMB. Regarding annual compilations of just compensation awards, OMB first provided guidance on the form and content of compilations in its Circular A-11, issued in June 1988. However, in a subsequent version of this circular issued in July 1994, OMB advised agencies that the submission of this information is no longer necessary. According to OMB officials, this information is not needed because just compensation awards or settlements related to regulatory takings cases do not affect agency budgets but are paid from the Department of the Treasury’s Judgment Fund. Furthermore, OMB and Justice officials said that because the number of just compensation awards and settlements paid by the federal government annually and the total dollar amount of these payments are relatively small, the overall budget implications for the government are small. 
Hence, these officials said the annual reporting of just compensation awards was unnecessary. OMB officials offered similar reasons for not requiring agencies to include information on just compensation awards in their annual budget documents. Although OMB no longer requires agencies to comply with these EO provisions, the provisions remain in the EO. However, OMB and Justice officials noted that because the provisions of executive orders are not the equivalent of statutory requirements, not complying with these provisions does not have the same implications as failing to comply with a statute. Instead, executive orders are policy tools for the executive branch and are subject to changing interpretation and emphasis with each new administration. Furthermore, these officials said that the relative lack of regulatory takings cases and associated just compensation awards each year is an indication that the EO has succeeded in raising agencies’ awareness of the need to carefully consider the potential takings implications of their actions, even if subsequent OMB guidance has excused the agencies from some of the EO’s provisions. Each of the four agencies has designated an official to be responsible for ensuring that the agency’s actions comply with the EO’s requirements. In general, the responsible official at each agency is the agency’s senior legal official. EPA’s and Interior’s supplemental guidelines specifically identify the designated official by title. Concerning Agriculture and the Corps of Engineers, we did not find written evidence of this designation, although agency officials assured us that their senior legal official fulfilled this role. Justice officials indicated that the designated official at each of the four agencies is effectively performing the compliance assurance and liaison functions required by the EO. However, as a practical matter, staff attorneys, in consultation with relevant program officials, determine the potential takings implications of an agency’s planned actions. 
The four agencies said that they fully consider the potential takings implications of their planned regulatory actions, but provided us with limited documentary evidence to support this claim. Officials at each of the four agencies indicated that the requirements of the EO and the provisions of the Attorney General’s guidelines primarily guide their consideration of the takings potential of agency actions. Officials at the Corps of Engineers, EPA, and Interior also cited the Attorney General’s supplemental guidelines for each agency as being important, particularly for identifying agency-specific exclusions to the EO’s provisions. For example, EPA officials indicated that their agency performs relatively few takings implication assessments because most of its actions are excluded from the provisions of the EO, as enumerated in its guidelines. These officials explained that EPA’s program responsibilities generally do not include land management, and in past lawsuits alleging regulatory takings that involved EPA, another federal agency usually took the action giving rise to the takings claim, and EPA typically served as an advisor or consultant to that agency. Officials at three of the agencies—Agriculture, the Corps of Engineers, and Interior—also said that their agency has provided relevant internal guidance. For example, an Agriculture internal regulation on rulemaking requires implementation of the EO, including the preparation of takings implication assessments, as appropriate. Similarly, the Corps’ Chief Counsel issued internal guidance in a memo that addresses legal analyses and takings implication assessments related to wetland and other permit decisions. For Interior, the agency’s departmental manual requires that it assess the potential takings implications of planned rulemakings before they are published in the Federal Register. Agencies provided us a few written examples of takings implication assessments. 
Agency officials said that these assessments are not always documented in writing, and, because of the passage of time, those assessments that were put in writing may no longer be on file. They also noted that these assessments are internal, predecisional documents that generally are not subject to the Freedom of Information Act or judicial review; thus they are not typically retained in a central file for a rulemaking or other decision, and therefore they are difficult to locate. For example, the Corps of Engineers’ internal guidance memo states that takings implication assessments should be removed from the related administrative file once the agency has concluded a decision on a permit. In addition, agency officials also noted that they do not maintain a master file of all takings implication assessments. For example, in many cases, attorneys assigned to field offices conduct these assessments. In these cases, agency officials said that headquarters staff may not have copies. Nevertheless, with the exception of EPA, each agency provided us with some examples of written takings implication assessments. These assessments varied in form and the level of detail included. We also had difficulty independently verifying the four agencies’ preparation of takings implication assessments from the information contained in Federal Register notices related to their proposed and final rulemakings. Specifically, 375 notices mentioned the EO in 1989, 1997, and 2002, but relatively few provided an indication as to whether a takings implication assessment was done. Most of these rules included only a simple statement that the EO was considered and, in general, that there were no significant takings implications. In contrast, 50 specified that an assessment of the rule’s potential for takings implications was prepared, and of these, 10 noted that the rule had the potential for “significant” takings implications. Table 1 summarizes this information. 
In addition, appendix III provides more detailed information on these rules. Given the limited amount of information available from the agencies or available from the Federal Register notices we reviewed, we could not fully assess the extent to which the agencies considered the EO’s requirements. According to Justice data, 44 regulatory takings cases brought against the four agencies were concluded during fiscal years 2000 through 2002. Of these cases, the courts decided in favor of the plaintiff in 2 cases, resulting in awards of just compensation totaling about $4.2 million. The Justice Department settled 12 other cases, providing total payments of about $32.3 million. Of these 14 cases with awards or settlement payments, 10 related to actions of Interior, 3 to actions of the Corps of Engineers, and 1 to an action of Agriculture. However, the EO’s requirements for assessing the takings implications of planned regulatory actions applied to only 3 of these 14 cases. For the other 11 cases, the associated regulatory action either predated the EO’s issuance or the matter at hand was otherwise excluded from the EO’s provisions. Based on available evidence, we found that the relevant agency assessed the takings potential of its action in only 1 of the 3 cases subject to the EO’s requirements. As of the end of fiscal year 2002, Justice reported that 54 additional regulatory takings cases involving the four agencies were pending resolution. Fourteen of the 44 regulatory takings cases involving the four agencies that were concluded during fiscal years 2000 through 2002 resulted in government payments, according to Justice data. The U.S. Court of Federal Claims awarded payment of just compensation in 2 cases for a sum totaling about $4.2 million. 
Justice settled the remaining 12 cases for a sum totaling about $32.3 million. In general, the cases settled were concluded with compromise agreements, including stipulated dismissals or settlement agreements, reached among the litigants and approved by the applicable court. In these cases, the agreement usually provides that the parties have agreed to end the case with a payment to the plaintiff, but no finding that a taking occurred. For example, in one case concluded in 2001 that alleged a taking of an oil and gas lease on federal land managed by Interior’s Bureau of Land Management, the litigants negotiated a stipulated dismissal that provided that a payment of $3 million be made to the plaintiffs. This payment was to cover all claims made by the plaintiffs in the case. However, the stipulated dismissal also provided that the final outcome should not be construed as an admission of liability by the United States government for a regulatory taking. In addition, the dismissal required that the plaintiffs surrender their interests in a portion of the lease. In the 2 cases with award payments, the court concluded that a taking had occurred and thus it awarded just compensation. Of these 14 cases with awards or settlement payments, the 10 Interior cases generally dealt with permits related to mining claims on federal lands managed by that agency or matters related to granting access on public lands. For example, one case involving mining claims resulted in the plaintiff receiving a settlement of almost $4 million. In another case, involving the denial of preferred access to a lake on land managed by the agency, the plaintiff received a settlement of $100,000. The three Corps of Engineers cases generally related to the agency’s denial of wetlands permits for private property or its issuance of such permits with conditions. 
One of these cases, concerning the filling of a wetland in Florida, resulted in a settlement payment of $21 million, accounting for more than half of the total compensation awards and settlement payments related to the 14 cases. The single Agriculture case concerned the title to mineral rights in a national forest managed by the agency. The plaintiff received an award of $353,000 in this case. Table 2 provides a breakdown by agency of the number of cases and the amount of each award or settlement. In addition, appendix IV provides detailed descriptions of each case. In addition to the cases concluded during fiscal years 2000 through 2002, Justice reported that an additional 54 regulatory takings cases involving the four agencies were still pending resolution at the end of fiscal year 2002. Based on information provided by the four agencies, only 3 of the 14 cases with payments were subject to the EO’s requirement to conduct a regulatory takings implication assessment. For the other 11 cases, the agency action involved either predated the EO’s issuance or was otherwise excluded from the EO’s requirements. Of the three cases subject to the EO’s requirements, we found evidence that a regulatory takings implication assessment had been done in only one instance. In that case, the Corps of Engineers denied a wetlands permit sought by the plaintiff to fill wetlands on the plaintiff’s property in order to develop a commercial medical center. The plaintiff brought suit alleging a compensable taking had occurred. In its takings implication assessment, the Corps had concluded that the permit denial did not constitute a taking because the applicant was still free to use the property for other purposes that did not involve filling the wetland. Therefore, the Corps concluded that the permit denial did not deprive the plaintiff of all viable economic use of the property. 
However, the case ended with a stipulated dismissal and a payment of $880,000 to the plaintiff. We provided a draft of this report to Agriculture, the Corps of Engineers, EPA, Interior, Justice, and OMB for review and comment. With the exception of OMB, the agencies provided us with technical corrections and editorial comments that we have incorporated as appropriate. OMB indicated that it did not have any comments on the draft. In addition, two of the agencies, Agriculture and EPA, provided an overall reaction to the report. Agriculture indicated that the report provides a thorough and reasonable review of the issues regarding the EO’s implementation and that the agency does not disagree with the information presented. Similarly, EPA indicated that it generally agreed with the information provided in the report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. We will send copies of this report to the Attorney General; the Secretary of Agriculture; the Secretary of the Army; the Administrator, Environmental Protection Agency; the Secretary of the Interior; the Director, Office of Management and Budget; and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, I can be reached at 202-512-3841 or mittala@gao.gov. Major contributors to this report are listed in appendix V. The Chairman of the House Subcommittee on the Constitution, Committee on the Judiciary, asked us to provide information on measures taken by the Department of Justice to implement certain provisions of Executive Order 12630 (EO) regarding regulatory takings of private property and the efforts of four agencies—the Department of Agriculture, U.S. 
Army Corps of Engineers, Environmental Protection Agency, and the Department of the Interior—to comply with the requirements of the EO. Specifically, the Chairman asked us to examine the extent to which (1) Justice has updated its guidelines to implement the EO to reflect changes in case law and issued supplemental guidelines for the four agencies, (2) the four agencies have complied with the specific provisions of the EO, and (3) awards of just compensation have been assessed against the four agencies by the courts for regulatory takings in recent years and, in these cases, whether the agencies assessed the potential takings implications of their actions before implementing them. To report on the extent to which Justice has updated its guidelines and issued supplemental guidance for the four agencies, we obtained copies of these documents and interviewed knowledgeable agency officials. At Justice, these officials included attorneys in the agency’s Environment and Natural Resources Division. At the four agencies, these officials included attorneys in each agency’s legal office (i.e., Office of the Chief Counsel, General Counsel, or Solicitor). We also discussed these matters with officials of the Office of Management and Budget’s Office of Information and Regulatory Affairs. In addition, we conducted legal research and sought the opinions and reviewed the publications of other relevant individuals at the Congressional Research Service; private property rights groups, including the Defenders of Property Rights; environmental groups, including the Georgetown Environmental Law and Policy Institute; and law schools, as to whether changes in takings case law since 1988 warrant revisions to the guidelines. In the course of this work, we identified and summarized key regulatory takings cases heard before the Supreme Court that have been concluded since 1988. Our work may not have identified all such cases. 
Furthermore, we do not take a position as to whether these cases, individually or collectively, constitute a fundamental change in the body of regulatory takings case law that would trigger the need to update Justice’s guidelines. To determine the extent of the four agencies’ compliance with specific provisions of the EO, we interviewed knowledgeable officials in the legal offices of these agencies and reviewed the documents they provided. These documents included written takings implication assessments, which evaluate the takings potential of proposed regulatory actions. At each agency we requested examples of these assessments, although we did not ask the agencies to conduct an exhaustive search of their records for these assessments because the agencies generally expressed concerns about the time and resources such a search could require. In addition, the agencies indicated that assessments are not always written or, if written, are not always retained in official files. During the course of our work, we also asked for copies of written assessments associated with specific regulatory takings cases that were concluded with either a settlement or just compensation payment. In addition, we obtained copies of some additional takings implication assessments from Federal Register notices. Furthermore, regarding the agencies’ compliance with specific provisions of the EO, we interviewed Justice and OMB officials, as appropriate. We also reviewed OMB’s Circular A-11, Preparation and Submission of Budget Estimates, and discussed with OMB officials how the guidance in that circular has changed over time and affected the four agencies’ compliance with the EO. In addition, we reviewed 375 Federal Register notices of proposed and final regulatory actions published in 1989, 1997, and 2002 relating to the four agencies and referencing the EO to determine if and how the agencies documented their compliance with the EO. 
These years were selected judgmentally: 1989 represents the first full year under the EO, 1997 represents an intermediate year, and 2002 represents the most recent full year. These years also provide 1 year’s experience under each of the past three presidential administrations. Finally, regarding awards of just compensation made against the agencies and, in these cases, whether the agencies had assessed the takings potential of their actions, we obtained from Justice a list of all takings cases related to the four agencies that were concluded during fiscal years 2000 through 2002. We initially sought this type of data for the full 15-year period since the EO’s issuance, but Justice officials indicated that the full set of data was not readily available and would be very labor intensive to provide. We then discussed these cases with relevant officials at the four agencies and analyzed documents they provided. In particular, we focused on cases in which just compensation awards or settlement payments were made, and, for these cases, whether the agencies had assessed the potential takings implications of their actions before implementing them. We also discussed the cases with the Clerk of the U.S. Court of Federal Claims and officials responsible for administering the Department of the Treasury’s Judgment Fund and reviewed documents they provided, in part, to verify the information on the cases with just compensation awards or settlement payments. We conducted our work between October 2002 and September 2003 in accordance with generally accepted government auditing standards. This appendix summarizes regulatory takings cases decided by the U.S. Supreme Court since 1988, the year the EO was issued and the Attorney General promulgated guidelines related to the EO. 
These cases were cited as being important to the body of relevant case law by legal experts in our interviews with them or in various written products they prepared, including books, law review articles, reports, papers, speeches, or testimonies. The cases discussed are not intended to be an exhaustive list of all such cases. In addition, the appendix discusses certain cases that were decided prior to 1988 because they are referenced in some of the more recent cases discussed below or are cited elsewhere in this report. Tahoe-Sierra Preservation Council, Inc. v. Tahoe Regional Planning Agency, 535 U.S. 302 (2002) Issue: Were two moratoria imposed by the Lake Tahoe Regional Planning Agency compensable takings? Background: The Tahoe Regional Planning Agency issued two ordinances prohibiting all development on vacant lots within residential subdivisions in the Lake Tahoe Basin for a period of 32 months. A group of about 400 individual owners brought suit contending that the ordinances constituted compensable takings. (Subsequent to the landowners bringing suit in 1984, development moratoria continued to prohibit use of many of the parcels; however, the Supreme Court was only asked to address the 32-month moratoria.) Decision: The Supreme Court held that the temporary moratorium on development was not a per se or categorical taking. Instead, the question of whether the Takings Clause of the Fifth Amendment requires compensation when the government enacts a temporary regulation denying a property owner any economic use of his property is to be decided by applying the factors of Penn Central rather than any categorical rule. The Court also stated that First English Evangelical Lutheran Church v. County of Los Angeles (discussed below) concerned the question of whether compensation is an appropriate remedy for a temporary taking, not whether or when such a taking has occurred. Palazzolo v. Rhode Island, 533 U.S. 
606 (2001) Issue: Did state denials rejecting a developer’s proposals to fill in or build on all or most of a lot, principally consisting of wetlands, cause a taking? Background: A landowner made several applications to the state for a permit to fill 11 acres of wetlands, build 74 houses, or construct a private beach club. The state denied these applications, but informed him that he would be allowed to build at least one house on the property. The landowner estimated that the limitations imposed by the state equated to a 94 percent diminution in value of the property and brought suit, arguing for an extension of the Lucas v. South Carolina Coastal Council (Lucas) test (discussed below) to his situation. Decision: The Supreme Court rejected extending Lucas to a situation where there had been less than a complete denial of the economically viable use of the property. The Court noted that the ability to build a house on the property was of significant worth. The Court remanded the case to state court for evaluation under the Penn Central test. The Court also ruled that the acquisition of title after the effective date of the regulation that was the basis for the regulatory takings claim did not bar the claim. City of Monterey v. Del Monte Dunes at Monterey, Ltd., 526 U.S. 687 (1999) Issues: Was it proper to submit the determination of a city’s liability for a regulatory taking to a jury and did the rough-proportionality standard of Dolan v. City of Tigard (Dolan) (discussed below) apply to challenges based on denial of development? Background: Del Monte Dunes and its predecessor landowner sought to develop an oceanfront parcel of land within the jurisdiction of the city of Monterey. The city, in a series of repeated rejections, denied proposals to develop the property, each time imposing more rigorous demands on the developers. 
The property owner brought a civil rights suit against the city alleging, among other things, that the rejections had effected a regulatory taking. The case was tried before a jury, which ruled in favor of Del Monte Dunes. Decision: The Supreme Court ruled that the issues of whether the city’s repeated rejections of the property owner’s development proposals deprived the owner of all economically viable use of the owner’s property and whether the city’s decision to reject Del Monte Dunes’ development plan was reasonably related to a legitimate public purpose were factual questions for a jury to resolve. The Court also stated that the “rough proportionality” standard of Dolan did not apply. Dolan dealt with situations in which land-use decisions condition approval of development on the dedication of property to public use. The Court held that Dolan did not apply to the present case in which the landowner’s challenge was based on denial of development. Suitum v. Tahoe Regional Planning Agency, 520 U.S. 725 (1997) Issue: Was a landowner’s regulatory taking claim ripe for adjudication? Background: A landowner claimed that the Tahoe Regional Planning Agency committed a regulatory taking when it determined that the landowner’s undeveloped residential lot near Lake Tahoe was ineligible for development. However, the planning agency had indicated that the landowner was entitled to receive certain “Transferable Development Rights” that she could sell to other landowners with the agency’s approval. The landowner did not seek those rights but instead brought an action for just compensation for the agency’s alleged taking of her property. In response, the planning agency claimed that the landowner’s takings claim was not ripe because she failed to apply to transfer her development rights, and thus, the amount of her takings claim could not be determined. 
Decision: The Supreme Court ruled that the planning agency had made a final decision in determining that the landowner’s property was ineligible for development, and thus, her claim was ripe for adjudication. The Court reasoned that the valuation of the landowner’s transfer rights is simply an issue of fact about possible market prices and went to the issue of how much just compensation was owed, not whether there had been a taking. The Court discussed Agins v. City of Tiburon (discussed below), in which it held that because the owners who were challenging ordinances restricting the number of houses they could build on their property had not submitted a plan for development of their property, there was no concrete controversy regarding the application of the specific zoning provisions. Dolan v. City of Tigard, 512 U.S. 374 (1994) Issue: The Court stated that it granted certiorari to resolve a question left open by its decision in Nollan v. California Coastal Commission (discussed below): What is the required degree of connection between the exactions imposed by the city and the projected impacts of the proposed development? Background: A landowner applied to the city of Tigard for a permit to redevelop her plumbing and electrical supply store site. As a condition of granting the landowner’s permit application, the city required the landowner to dedicate a portion of her property as a public greenway to minimize flooding and to dedicate an additional portion of her land as a pedestrian/bicycle pathway to reduce traffic congestion, in accordance with the city’s land use plan. The landowner challenged the dedication requirements on the grounds that they were not related to the proposed development and, therefore, constituted an uncompensated taking of her property under the Fifth Amendment. 
Decision: The Supreme Court found that preventing flooding and reducing traffic congestion were legitimate public purposes and that there was a nexus between the conditions imposed by the city and these purposes. The Supreme Court then applied a “rough proportionality” test, stating that the city has the burden of establishing the constitutionality of its conditions by making an “individualized determination” that the conditions in question were proportional to the stated purposes. The Court ruled that the city’s dedication requirements constituted an uncompensated taking of the landowner’s property because the city had failed to show either the need for a public, as opposed to a private, greenway or that the additional number of vehicle and bicycle trips generated by the proposed development was reasonably related to the city’s requirement for a dedicated pedestrian/bicycle path. Lucas v. South Carolina Coastal Council, 505 U.S. 1003 (1992) Issue: Is a government regulation of land that completely eliminates its economic use a compensable taking? Background: A landowner bought two residential lots on a South Carolina barrier island, intending to build single-family homes. Subsequently, the state enacted a statute that barred him from erecting permanent habitable structures on the land. The landowner filed suit in state court, claiming that the law caused a taking of his property without just compensation. The South Carolina trial court found that the statute rendered the landowner’s parcel valueless, and awarded compensation. The South Carolina Supreme Court reversed the award of compensation, holding that, under previous U.S. Supreme Court cases, when a regulation is designed to prevent “harmful or noxious uses” of property akin to public nuisances, no compensation was due the landowner, regardless of the regulation’s effect on the property’s value. 
Decision: The Court reversed the South Carolina Supreme Court’s decision, ruling that the state court erred in applying the “harmful or noxious” uses principle to decide this case. The Court stated that a regulation that denies the property owner all “economically viable uses of his land” constitutes a per se, or categorical, regulatory taking that requires compensation, without inquiring into the public interest advanced in support of the restraint. However, the Court also noted that no taking has occurred if the state law simply makes explicit the limitations on land ownership already existing as a result of the background principles of a state’s law of property and nuisance. The Supreme Court remanded the case for the South Carolina court to determine whether these principles would have prohibited the landowner from building on his property. Nollan v. California Coastal Commission, 483 U.S. 825 (1987) Issue: Was there a nexus between the condition on the requested permit and a legitimate state government purpose of protecting the public view of a beach? Background: The California Coastal Commission demanded a lateral public easement across the Nollans’ beachfront lot in exchange for a permit to demolish an existing bungalow and replace it with a three-bedroom house. The public easement was designed to connect two public beaches that were separated by the Nollan property. The Coastal Commission had asserted that the public easement condition was imposed to promote the legitimate state interest of diminishing the “blockage of the view of the ocean” caused by construction of the larger house. Decision: The Court found that there had been a taking, as it found no “essential nexus” between the government’s purpose and its condition on construction that required the property owners to grant an easement allowing the public access to their beachfront. 
The Court ruled that while the Coastal Commission could have required that the Nollans provide a viewing spot on their property for passersby, there was no nexus between visual access to the ocean and a permit condition requiring lateral public access along the Nollans’ beachfront lot. First English Evangelical Lutheran Church v. County of Los Angeles, 482 U.S. 304 (1987) Issue: Did an interim ordinance prohibiting construction of any structures in a flood zone cause a temporary taking of property requiring compensation? Background: A church purchased a 21-acre parcel of land located in a canyon along the banks of a river that is a natural drainage channel for a watershed area. The church operated a campground on the site. Flooding destroyed the campground and its buildings. In response to the flooding of the canyon, the County of Los Angeles adopted an interim ordinance that prohibited construction in an interim flood protection area, including the site on which the campground had stood. The church filed suit, seeking just compensation for loss of the use of the campground. Decision: The Court ruled that even if a regulation that has been found to result in a taking is repealed or invalidated, the government must pay just compensation for the interim period that the regulation was in effect. Agins v. City of Tiburon, 447 U.S. 255 (1980) Issue: Did a zoning ordinance limiting the number of houses that landowners could build on their property cause a taking? Background: The landowners acquired 5 acres of unimproved land for residential development in Tiburon, California. Subsequently, the city adopted two ordinances that modified existing zoning requirements. The density restrictions under the ordinances permitted the landowners to build between one and five single-family residences on their 5-acre tract. The landowners did not seek approval to develop their land, and instead brought suit for just compensation. 
The complaint alleged that their land had greater value than other suburban property in California due to the scenic views, and that the ordinances destroyed the value of their property. Decision: The Court held that the zoning ordinance on its face did not cause a taking. The Court stated that the ordinance was rationally related to the legitimate public goal of open-space preservation, that the ordinance benefits property owners as well as the public, and that the landowners may still be able to build up to five houses on a lot. The Court also found that because the landowners had not submitted a plan for development of their property, there was no concrete controversy regarding the application of the specific zoning provisions. Penn Central Transportation Co. v. City of New York, 438 U.S. 104 (1978) Issue: Did the city’s use of a historic preservation ordinance to block construction of an office tower atop a designated historic landmark cause a taking? Background: The Landmark Preservation Commission denied Penn Central permission to build a multistory office building above Grand Central Station in New York City. Penn Central alleged the regulation took its property. Decision: The Court ruled that there had been no taking of property. In evaluating the case, the Court set forth a three-pronged test for determining whether a government regulation has resulted in a taking: (1) the character of the governmental action; (2) the economic impact of the action on the property owner; and (3) the extent to which the regulation has interfered with the distinct, investment-backed expectations of the owner. Pennsylvania Coal Co. v. Mahon, 260 U.S. 393 (1922) Issue: Did a state law barring coal mining that might cause subsidence of overlying land result in a taking of private property in a case where the mineral estate owner is different from the surface estate owner? 
Background: A coal company conveyed the surface ownership of its property and retained the right to remove coal from the subsurface. Subsequently, a state law was enacted, forbidding the mining of coal in such a way as to cause the subsidence of housing in situations where the surface and subsurface ownership belong to different parties. As a result, the coal company was unable to exercise its right to remove the coal. Decision: The Court held that a taking occurred. The Court stated “while property may be regulated to a certain extent, if regulation goes too far it will be recognized as a taking.” The Court reasoned that the extent of the taking under the state law—abolishing the right to mine coal, which it deemed “a very valuable estate”—was great. Moreover, because the state law applied only where surface and subsurface land is in different ownership, it benefits a narrow private interest rather than a broad public one. In addition to the individuals named above, Doreen S. Feldman, James K. McDowell, Jonathan S. McMurray, John P. Scott, and Timothy W. Wexler made key contributions to this report. Kathleen A. Gilhooly and Lisa M. Wilson also made important contributions. 
They provided technical and editorial suggestions that we incorporated as appropriate. Justice has not updated the guidelines that it issued in 1988 pursuant to the executive order, but has issued supplemental guidelines for three of the four agencies. The executive order provides that Justice should update the guidelines, as necessary, to reflect fundamental changes in takings case law resulting from Supreme Court decisions. While Justice and some other agency officials said that the changes in the case law since 1988 have not been significant enough to warrant a revision, other agency officials and some legal experts said that fundamental changes have occurred and that the guidelines should be updated. Justice issued supplemental guidelines for three agencies, but not for Agriculture because of unresolved issues such as how to assess the takings implications of denying or limiting permits that allow ranchers to graze livestock on federal lands managed by Agriculture. Although the executive order's requirements have not been amended or revoked since 1988, the four agencies' implementation of some of these requirements has changed over time as a result of subsequent guidance provided by the Office of Management and Budget (OMB). For example, the agencies no longer prepare annual compilations of just compensation awards or account for these awards in their budget documents because OMB issued guidance in 1994 advising agencies that this information was no longer required. According to OMB, this information is not needed because the number and amount of these awards are small and the awards are paid from the Department of the Treasury's Judgment Fund, rather than from the agencies' appropriations. Regarding other requirements, agency officials said that they fully consider the potential takings implications of their regulatory actions, but provided us with limited documentary evidence to support this claim. 
For example, the agencies were able to provide us with only a few takings implication assessments because, agency officials said, these assessments are not always documented in writing or retained on file. In addition, our review of the agencies' rulemakings for selected years that made reference to the executive order revealed that relatively few specified that a takings implication assessment was done and few anticipated significant takings implications. According to Justice, 44 regulatory takings lawsuits brought against the four agencies by property owners were concluded during fiscal years 2000 through 2002, and of these, 14 cases resulted in just compensation awards or settlement payments totaling about $36.5 million. The executive order's requirement for assessing the takings implications of planned actions applied to only three of these cases. The actions associated with the other 11 cases either predated the order's issuance or were otherwise excluded from the order's provisions. The relevant agency assessed the takings potential of its action in only one of the three cases subject to the order's requirements. According to Justice, as of the end of fiscal year 2002, 54 additional regulatory takings lawsuits involving the four agencies were pending resolution. 
DOD has implemented a number of initiatives to generate savings from reductions in its civilian and contract workforces in recent years. For example, in August 2010, the Secretary of Defense directed DOD to undertake department-wide efficiency initiatives to reduce duplication, overhead, and excess across the department. Among other things, the efficiency initiatives specified that DOD should freeze (or cap) the civilian workforce at the fiscal year 2010 levels for fiscal years 2011 through 2013. In 2012 the Senate Committee on Armed Services cited the need to maintain the appropriate balance between the civilian and contract workforce and to achieve the expected savings from reductions in both workforces, and Congress enacted provisions to limit DOD’s service contracts. Section 808 of the NDAA for Fiscal Year 2012 limited DOD’s total obligations for contract services in 2012 and 2013 to the amount requested for these services in the fiscal year 2010 President’s Budget Request. The limit does not apply to contract services for military construction, research and development, and services funded for overseas contingency operations. Additionally, it provides for two adjustments to the spending limit above fiscal year 2010 budgeted levels. DOD may adjust contract services spending above 2010 levels to account for (1) funding increases associated with contract services that were transferred from overseas contingency operations to the base budget and (2) the cost of additional civilian personnel positions over fiscal year 2010 levels. As shown in table 1, DOD identified an aggregate spending limit of $56.47 billion for fiscal year 2012 and $57.46 billion for fiscal year 2013. The spending limit identified in the Act applied to the entire department; therefore, components could exceed their individual targets but DOD would still be in compliance with the law if total spending for contract services across the entire department was less than the aggregate spending limit. 
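The aggregate nature of the section 808 limit can be illustrated with a short arithmetic sketch: individual components may exceed their own spending targets while DOD as a whole remains compliant, so long as the department-wide total stays within the aggregate limit. Only the $56.47 billion fiscal year 2012 aggregate limit below comes from the report; the component names and dollar figures are hypothetical, chosen purely for illustration.

```python
# Sketch of the section 808 aggregate-limit compliance test.
# The FY 2012 aggregate limit ($56.47 billion) is from the report;
# all component obligations and targets below are hypothetical.

FY2012_AGGREGATE_LIMIT = 56.47  # billions of dollars

# Hypothetical component obligations vs. individual targets (billions).
components = {
    "Army":      {"obligated": 20.0, "target": 19.0},  # exceeds its own target
    "Navy":      {"obligated": 15.0, "target": 16.0},
    "Air Force": {"obligated": 12.0, "target": 13.0},
    "Other":     {"obligated":  8.0, "target":  8.0},
}

# Department-wide total across all components.
total = sum(c["obligated"] for c in components.values())

# Components that exceeded their individual targets.
over_target = [name for name, c in components.items()
               if c["obligated"] > c["target"]]

# DOD complies if the aggregate total is within the aggregate limit,
# even though one component exceeded its individual target.
compliant = total <= FY2012_AGGREGATE_LIMIT
print(f"total={total:.2f}B, over individual targets: {over_target}, "
      f"compliant: {compliant}")
```

In this hypothetical, the Army exceeds its individual target, but the $55.0 billion department-wide total remains under the $56.47 billion aggregate limit, so DOD would still be in compliance.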
Section 808 contract services spending limits ended after fiscal year 2013, but Congress extended the spending limit through fiscal year 2014 in section 802 of the NDAA for Fiscal Year 2014. While spending limit requirements currently expire at the end of fiscal year 2014, draft legislation contains a provision to extend the spending limit requirement through fiscal year 2015. Congress has also enacted legislation to improve the availability of information on DOD’s acquisition of services and to help the department make more strategic decisions about the right workforce mix of military, civilian, and contractor personnel. In fiscal year 2002, Congress enacted section 2330a of Title 10 of the U.S. Code, which required the Secretary of Defense to establish a data collection system to provide management information on each purchase of services by a military department or defense agency. In 2008, Congress amended section 2330a of Title 10 of the U.S. Code to require the Secretary of Defense to submit an annual inventory of contracted services performed for or on behalf of DOD during the preceding fiscal year. This annual inventory submission includes, among other things, the number of contractor full-time equivalents and the associated direct labor cost for these positions. Following the submission of the inventory, the secretaries of the military departments and heads of the defense agencies are to complete a review of the contracts identified in the inventory to ensure, among other things, that the activities do not include inherently governmental functions—which are those that require discretion in applying government authority—such as the determination of budget policy.
The review should also ensure that, to the maximum extent practicable, the activities do not include any closely associated with inherently governmental functions, which are those that may be at risk of becoming inherently governmental due to, among other things, the manner in which the contractor performs the work. Upon completion of this review, the secretaries of the military departments and heads of the defense agencies submit a certification letter to the Office of Personnel and Readiness that outlines the results and any corrective actions to be taken to ensure that contractors are not performing inherently governmental functions and to monitor the use of contractors for closely associated with inherently governmental functions. Section 808 of the NDAA for Fiscal Year 2012 further reinforced these requirements by instructing DOD to issue guidance requiring the components to reduce funding by 10 percent for fiscal years 2012 and 2013 for contracts identified with personnel performing closely associated with inherently governmental functions. Section 808 also instructed DOD to issue guidance requiring a 10 percent reduction in funding for fiscal years 2012 and 2013 for contracts identified with personnel performing on staff augmentation contracts, which it identifies, in relevant part, as contracts for personnel who are subject to the direction of a government official other than the contracting officer for the contract. Unlike the aggregate spending limit, the statutory requirement for guidance on reductions in funding for closely associated with inherently governmental functions and staff augmentation is directed to each component; therefore, the reductions are expected to take place at each component, rather than as an aggregate reduction across the department.
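The distinction drawn in the preceding sentence can be sketched in a few lines: the spending limit is checked against the department-wide total, while the 10 percent reductions must hold at each component individually. All figures below are hypothetical, chosen so that the aggregate test passes even though one component misses its reduction.

```python
# Aggregate test: sum of component spending vs. the department-wide limit
# (hypothetical obligations in billions of dollars).
aggregate_limit = 56.47
spending = {"Army": 21.0, "Navy": 14.0, "Air Force": 11.0, "Agencies": 9.0}
aggregate_ok = sum(spending.values()) <= aggregate_limit

# Per-component test: each component must reduce funding for the covered
# functions 10 percent from its own baseline (hypothetical baselines/actuals).
baseline = {"Army": 8.5, "Navy": 3.0, "Air Force": 2.5, "Agencies": 1.0}
actual = {"Army": 7.6, "Navy": 2.6, "Air Force": 2.4, "Agencies": 0.9}
per_component_ok = {c: actual[c] <= 0.9 * baseline[c] for c in baseline}
```

In this illustration the department as a whole is under the aggregate limit, yet the hypothetical Air Force figure fails its own 10 percent reduction: the two requirements are evaluated at different levels and one cannot be offset against the other.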
The section 808 requirement to reduce funding for closely associated with inherently governmental functions and staff augmentation expired in September 2013; however, Congress modified the requirements in section 802 of the NDAA for Fiscal Year 2014, extending the time period for DOD to implement the full 20 percent reduction for both the closely associated with inherently governmental and staff augmentation functions through fiscal year 2014. Fiscal year 2014 is also referred to as a carryover year: whatever required reductions DOD did not take in fiscal years 2012 and 2013 are required to be taken in 2014. While implementing both the civilian and contract services limitations, the department faced uncertainty about funding levels associated with the automatic, across-the-board cancellation of budgetary resources, known as sequestration. In fiscal year 2013, DOD experienced a sequestration of budgetary resources, resulting in a $37 billion reduction in DOD’s discretionary budget, which includes funding for contract services. As we reported in June 2014, the department implemented an administrative furlough of the civilian workforce to help achieve these reductions, but contract services were not subject to these furloughs and DOD continued to use contracted support under existing contracts. Sequestration was a result of the Budget Control Act of 2011 (Pub. L. No. 112-25 (2011), as amended), which, as implemented by the Office of Management and Budget, required spending cuts of $37 billion from DOD’s budget in fiscal year 2013 through across-the-board, proportional reductions in funding provided in the appropriations acts for most defense accounts, including accounts related to DOD’s civilian workforce and contracted services. DOD exceeded its spending limit by $1.72 billion in fiscal year 2012 and spent approximately $500 million less than its limit in fiscal year 2013.
However, DOD reported spending $1.34 billion more than the limit in fiscal year 2012 and $1.81 billion less than its limit in fiscal year 2013 because the DOD Comptroller’s office—responsible for calculating DOD spending limits and setting spending targets—inconsistently calculated exclusions from the contract services spending limits. Varied implementation of fiscal controls hampered military department efforts to adhere to the spending limits. In fiscal year 2012, DOD exceeded the spending limit because each of the military departments exceeded their respective spending targets. Military department budget officials explained that they took limited steps to adhere to spending targets in fiscal year 2012 due to late guidance from the Office of the Deputy Secretary of Defense. After exceeding the spending targets in fiscal year 2012, some components improved planning and implemented stronger fiscal controls over contract services, such as monitoring spending during the year and prioritizing mission needs to assist in funding decisions, helping DOD meet its spending limit for fiscal year 2013. DOD reported spending more than its identified limit on contract services in fiscal year 2012 by $1.34 billion and less than its limit in fiscal year 2013 by $1.81 billion. However, the DOD Comptroller’s office inconsistently calculated adjustments by excluding certain categories of expenditures from the spending limit. By doing so, DOD overstated its calculated spending limit of $56.47 billion by approximately $400 million in 2012 and its spending limit of $57.46 billion by $1.31 billion in 2013, as indicated in figure 1. In addition to the transfer of contract services funding from overseas contingency operations, DOD’s calculation of the spending limit consists of two primary elements: (1) the funding of contract services categories identified in the 2010 President’s budget request and (2) the cost of increases in the civilian workforce over 2010 levels. 
The NDAA for Fiscal Year 2012 permits DOD to exclude spending for military construction, research and development, and services funded for overseas contingency operations in determining its spending limit. DOD’s June 2012 guidance instructs the components to exclude these services, but also permits excluding other services from federal sources and medical care, which are not specifically identified for exclusion in the law. A Comptroller official said that DOD excluded other services from federal sources because this category includes services purchased on behalf of other federal agencies, such as through the use of interagency agreements, in addition to DOD purchases. The official indicated that DOD was unable to distinguish between services purchased for other federal agencies and those purchased for DOD and therefore excluded the entire category. Additionally, the Comptroller official explained that the exclusion of contracted medical care from the spending limit was done to ensure that medical care was not reduced for service members. Moreover, the DOD Comptroller’s office included approximately $248 million in research and development funds in the spending limit for both fiscal year 2012 and 2013, while excluding all actual research and development spending from its calculation of adherence to the limit for fiscal year 2013. As a result of this error, DOD overstated the limit by $248 million. The Comptroller’s office acknowledged this inclusion as a coding error and plans to appropriately exclude research and development expenditures from its spending limit in future years. In addition to excluding certain services from the spending limit, section 808 also permits DOD to increase its spending on contract services above 2010 levels to adjust for cost increases associated with its civilian workforce. However, our analysis found that the DOD Comptroller office’s calculation for the civilian workforce adjustment was not consistently applied. 
DOD excluded civilian personnel performing research and development, military construction, and a portion of its civilian personnel that are funded from other federal sources from its adjustment for increases in civilian personnel costs and it also excluded similar contract services when determining the spending limit. By contrast, DOD did not remove civilian personnel providing medical care from the adjustment for increases in civilian personnel costs; yet it excluded contract services for medical care from the determination of the spending limit. A DOD Comptroller official explained that a portion of contract services associated with medical care, such as management support, was included in the spending limit, because the Comptroller’s office could not separate out the corresponding civilian pay adjustments associated with these personnel. Therefore, the Comptroller’s office decided to include all civilian medical related personnel, which accounted for nearly half of the increase in the civilian workforce each year, in the calculation of increases in the civilian workforce. By consistently applying DOD’s exclusions for these civilian personnel, we found that DOD overstated the spending limit by approximately $600 million in fiscal year 2012 and $1.1 billion in fiscal year 2013. Inconsistencies in accounting for both research and development and the calculation of civilian workforce increases resulted in DOD’s aggregate spending limit being overstated by roughly $400 million in fiscal year 2012 and $1.31 billion in fiscal year 2013. As a result, DOD’s reported spending over the limit would increase from $1.34 billion to $1.72 billion in fiscal year 2012. Similarly, DOD’s reported adherence to the cap in fiscal year 2013 would be reduced from $1.81 billion to about $500 million. DOD reported exceeding its identified spending limit of $56.47 billion by $1.34 billion for fiscal year 2012. 
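The before-and-after figures above reconcile arithmetically. The sketch below uses the report's own numbers; the $0.38 billion overstatement for 2012 is the amount implied by the reported change from $1.34 billion to $1.72 billion (the report rounds it to "roughly $400 million"). Because correcting the limit does not change actual spending, the over/under amounts shift by exactly the overstatement.

```python
# Reported limits and reported over/under amounts (billions), per the report.
limit_2012, reported_over_2012 = 56.47, 1.34     # DOD said it exceeded by $1.34B
limit_2013, reported_under_2013 = 57.46, 1.81    # DOD said it was under by $1.81B

# Overstatements of the limit (inconsistent civilian adjustment plus the
# R&D coding error). 0.38 is implied by the reported 1.34 -> 1.72 change.
overstated_2012, overstated_2013 = 0.38, 1.31

# Lowering the limit raises the over-limit amount and lowers the
# under-limit amount by the same sum.
corrected_over_2012 = reported_over_2012 + overstated_2012    # ~$1.72B over
corrected_under_2013 = reported_under_2013 - overstated_2013  # ~$0.50B under
```

This reproduces the corrected figures in the text: $1.72 billion over the limit in fiscal year 2012 and about $500 million under it in fiscal year 2013.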
In implementing the limit for fiscal year 2012, DOD issued guidance that set contract services spending targets for each of the components below the aggregate spending limit to allow for unexpected costs that may occur during the year. DOD defense agencies spent under their overall target as a group in fiscal year 2012; however, some agencies, such as DLA, exceeded their individual spending targets. Additionally, all of the military departments exceeded their spending targets, as shown in figure 2. Military departments took limited steps to adhere to spending targets in fiscal year 2012, which some military department budget officials attributed to late guidance from the Deputy Secretary of Defense. The guidance provided each component with a contract services spending target in June 2012, approximately 4 months before the end of the fiscal year, which Army and Air Force budget officials said did not allow enough time to implement spending limits in fiscal year 2012. Despite issuing guidance late in the fiscal year, DOD officials believed the department was on track to meet the aggregate spending limit as of June 2012. However, military department budget officials said that they spent more on contract services in the last quarter of the fiscal year than budgeted due to additional funding for contract services made available through reprogramming, which allows for the shifting of funds to contract services requirements that were not planned for when the appropriation was made. As shown in figure 3, the military departments have historically increased contract services obligations during the last quarter of the fiscal year.
Further, an Army budget official explained that the Army exceeded its fiscal year 2012 target by more than $2 billion due in part to poor budget estimates, which were not informed by the Army’s inventory of contracted services data that indicated spending in excess of the target, and other costs that are not taken into account when budgeting, such as reprogramming. In fiscal year 2013, DOD reported spending $1.81 billion less than its identified spending limit of $57.46 billion. The Deputy Secretary’s June 2012 guidance also set contract services spending targets for each of the components for fiscal year 2013 below the aggregate spending limit to allow for some unexpected costs during the year. Overall, DOD succeeded in spending less than the limit, but adherence to targets varied across the components. Similar to fiscal year 2012, defense agencies as a group spent under their target; however, adherence to targets varied, with some agencies, such as DLA, continuing to exceed their individual spending targets. Adherence to targets by the military departments also varied, with the Army exceeding its fiscal year 2013 spending target by $2.69 billion, while the Air Force obligated $2.83 billion less than its target and the Navy obligated over $500 million less than its target, as shown in figure 4. We found that budget officials from the components that met their spending targets in fiscal year 2013 implemented improved planning and oversight of contract services spending. Improvements included soliciting contract services budget estimates from commands—an organizational sub-unit of a military department or defense agency—during the annual budget process, providing each command with individual contract services spending targets, and monitoring contract services spending during the year to ensure compliance with section 808 spending limits. As shown in table 2, the components we included in our review took varying approaches to manage contract services spending limits.
For example, the Air Force Financial Management and Comptroller Office provided each command with a ceiling on their contract services through their annual funding letter. According to Air Force officials, these ceiling amounts were based on planning documents, which included annual budget estimates for contract services, provided by each command prior to the start of the fiscal year. Throughout the year, Air Force Financial Management officials monitored monthly spending reports and communicated with commands to ensure that they adhered to their targets and made adjustments to the allocation of funds among commands when necessary. Additionally, these officials planned for potential reprogramming and reviewed reprogramming actions to ensure that they would not result in the Air Force exceeding its spending target, as it did in fiscal year 2012. Similarly, DTRA spent less than its target in fiscal year 2013, which DTRA Comptroller officials attributed to allocating contract services spending targets among its organizations based on annual budget estimates for contract services and monitoring periodic reports on the execution of spending against these targets. Further, DTRA Comptroller officials stated that they prioritize mission requirements to ensure that the highest priority missions receive contract services funds, while lower priority mission needs may not receive such funds. The Army Budget Office also provided spending targets to each command in fiscal year 2013; however, it did not solicit input from the commands on their spending plans to inform these targets. An Army manpower official said commands have generated contract services spending estimates through the Army’s inventory of contracted services that could have been used by the budget office to inform contract services targets. 
Instead, these targets were based on each command’s contract services spending in fiscal year 2012, and without incorporating input from the commands, the Army Budget Office did not prioritize requirements to assist commands’ planning efforts to meet their spending targets. For example, one Army command that we spoke with said it was difficult to meet the spending target without additional guidance to prioritize the types of services that should be reduced or eliminated to meet the target. In addition, according to an Army budget official, the Army Budget Office does not typically communicate with commands during the year to monitor spending, which limited the Army’s ability to ensure adherence to the spending target. Similarly, DLA exceeded its contract services spending target for fiscal year 2013. The DLA financial management official that we spoke with was not aware of the section 808 guidance that set contract services spending targets for each component, and therefore took no action to manage to the spending target identified in the guidance. Standards for Internal Control in the Federal Government call for government agencies to take actions to ensure accountability and stewardship of the government’s resources. In fiscal year 2013, the improved planning and stronger fiscal controls over contract services implemented by the Air Force helped it to spend $2.83 billion less than its target. By contrast, the Army did not take similar actions for contract services and subsequently exceeded its target by more than $2 billion in fiscal year 2013, as it did in fiscal year 2012. Improved planning and consistent implementation of fiscal controls across the department could better enable DOD to manage contract services spending and achieve future savings. Comparable and timely data are not available to determine if DOD implemented the mandated funding reductions for contractor performance of closely associated with inherently governmental functions.
DOD’s section 808 guidance instructs the components to rely on the pre-existing inventory process to identify and measure these reductions, but the fiscal year 2011 inventory guidance, issued prior to the enactment of section 808, did not require components to report the obligation data necessary to do so in their review certification letters—documentation of the results of the inventory review that identifies the performance of closely associated with inherently governmental functions. DOD subsequently updated its inventory guidance for fiscal year 2012 to collect obligation data and again for fiscal year 2013 to require components to report on how the section 808 required reductions were achieved in fiscal years 2012 and 2013. However, two years of obligation data will not be available until after the statutory requirement has expired in September 2014. Section 808 requires the Secretary of Defense to issue guidance to the components to implement reductions in funding for closely associated with inherently governmental functions by 10 percent in fiscal years 2012 and 2013. DOD issued guidance in June 2012, which instructed components to use the information reported in the fiscal year 2011 inventory as the baseline for the 10 percent funding reduction. However, the 2011 inventory guidance was issued prior to the passage of section 808 and therefore did not call for reporting the necessary obligation data to establish a baseline for these reductions. Two of the 29 components—Army and Air Force—that submitted inventory review certification letters reported obligations for closely associated with inherently governmental functions for fiscal year 2011. DOD updated its guidance for the fiscal year 2012 inventory review to require components to report more detailed information on closely associated with inherently governmental functions, and as a result 13 components identified such obligation data in 2012.
However, the Air Force did not complete an inventory review in 2012, and the Army was the only component that reported obligations for closely associated with inherently governmental functions for both the 2011 and 2012 fiscal years. Without obligation data for closely associated with inherently governmental functions from the other components in their 2011 inventory review letters, DOD does not have the data necessary to determine the funding amount to meet the 10 percent reductions for fiscal years 2012 and 2013. Although the Army is the only component to report obligation data for closely associated with inherently governmental functions in fiscal years 2011 and 2012, we found that these data are not comparable due to changes in selection methodology. The Army reported $8.5 billion in these obligations in its fiscal year 2011 inventory review and issued guidance instructing each command to reduce its obligations for these functions by 10 percent. In fiscal year 2012, the Army reported $4.5 billion in obligations for closely associated with inherently governmental functions, a reduction of nearly 50 percent when compared to the obligations reported in 2011. However, Army manpower officials were not able to identify how these reductions were achieved, but explained that their 2012 review certification letter did not include complete input from all commands. For example, the command that accounted for the largest reduction in these functions from 2011 to 2012 attributed it to the transfer of responsibility for these functions to another command. The command that assumed responsibility for these functions did not include them in its 2012 inventory review, and as a result these previously identified closely associated with inherently governmental functions were not accounted for in the Army’s 2012 inventory review certification letter.
Moreover, while components are improving their annual inventories each year to report more detailed information on closely associated with inherently governmental functions, Personnel and Readiness officials said that data collected through the inventory may not be comparable from year to year due to changes in methodology. For example, DOD’s guidance for the fiscal year 2011 inventory review instructed components to review at least 50 percent of the contract actions reported in the inventory to identify these functions, while guidance for fiscal year 2012 called for a review of 80 percent of contract functions. Further, officials from the components reported various interpretations of the 80 percent review guidance. For example, the Army and DLA reported reviewing 80 percent of the contract dollar amounts identified in their inventory, while DTRA reported reviewing 80 percent of the contract awards or modifications. In addition, the fiscal year 2013 guidance does not specify the percent of contract actions or percent of total dollar amounts that should be reviewed for 2013 and as a result continues to limit comparison of data collected across fiscal years. In November 2014, we recommended that DOD update its annual inventory review guidance to clarify this review requirement and DOD agreed to update its guidance for future years. The Office of Readiness and Force Management issued additional guidance in May 2014 requiring components to identify the steps taken to implement funding reductions in closely associated with inherently governmental functions in fiscal years 2012 and 2013. If components did not achieve the full 20 percent reduction for fiscal years 2012 and 2013, they were also instructed to identify any additional carryover reductions to be taken in fiscal year 2014 to achieve the full 20 percent reduction, as required by the NDAA for Fiscal Year 2014. 
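The carryover arithmetic described above can be sketched under one reading of the requirement: a cumulative 20 percent reduction from the fiscal year 2011 baseline, with any shortfall from 2012 and 2013 taken in fiscal year 2014. The figures are hypothetical, and DOD's guidance governs the actual mechanics.

```python
def carryover_reduction(baseline, cut_fy12, cut_fy13, required_share=0.20):
    """Return the additional FY 2014 reduction needed to reach the full
    required share of the FY 2011 baseline (all amounts in billions)."""
    required_total = required_share * baseline
    achieved = cut_fy12 + cut_fy13
    return max(0.0, required_total - achieved)

# Hypothetical component: $8.5B baseline, cuts of $0.5B and $0.6B taken,
# leaving $0.6B still to be taken as a carryover reduction in FY 2014.
remaining = carryover_reduction(8.5, 0.5, 0.6)
```

A component that already achieved the full 20 percent in 2012 and 2013 would have no carryover obligation (the function returns zero), which is the case the May 2014 guidance asks components to document either way.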
However, these carryover reduction amounts will not be identified until fiscal year 2015, after the statutory requirement to implement these additional reductions in 2014 has expired. In addition, it is unclear what data will be reported to demonstrate compliance with section 808 given the lack of data from 2011 to establish a baseline for reductions and the differing selection methodology used each year to identify these functions. We will assess the data reported in the fiscal year 2013 inventory review certification letters when they become available and report on results in fiscal year 2015. Given the lack of comparable inventory data, officials at some DOD components that we spoke with identified other data sources to measure these reductions during the fiscal year 2013 inventory review. For example, DTRA officials relied on actual expenditures reported for advisory and assistance services—a subset of contract services recorded in the department’s financial system—to show reductions in obligations for closely associated with inherently governmental functions. According to these officials, they relied on the advisory and assistance services category due to the similarities between closely associated with inherently governmental functions and the types of services captured by the category, such as analyses or evaluations that support budget and acquisition decisions. Further, advisory and assistance services have been recorded in the annual budget since 1994, allowing DTRA officials to budget a reduction and track spending in the category. Personnel and Readiness officials agreed that the advisory and assistance budget category provides an alternative to measure reductions, but noted that these data have their own limitations. For example, we found in 2008 that the identification of advisory and assistance services is subjective and agencies experienced challenges linking obligations reported for these categories to specific contracts to provide oversight. 
Based on the challenges presented by the currently available data sources, the Personnel and Readiness officials said that DOD does not currently have the tools in place to measure funding reductions for specific contract functions. Nevertheless, data collected through other available sources may help DOD corroborate data obtained from prior inventory reviews and assist in validating whether funding reductions for closely associated with inherently governmental functions have been achieved for fiscal years 2012 and 2013. DOD has not yet determined if funding reductions in staff augmentation— contractors under the direction of a government official other than a contracting officer—were implemented due to insufficient guidance and management attention. Section 808 instructs DOD to issue guidance to the components to implement a 10 percent reduction in funding for staff augmentation contracts and to identify responsible management officials to ensure that reductions are achieved. DOD’s section 808 guidance, issued in June 2012, instructs each component to identify responsible management officials to ensure that section 808 requirements, including staff augmentation reductions, are met, but officials from only one of the five components that we spoke with were able to clearly identify an official responsible for implementing staff augmentation funding reductions. The June 2012 guidance also identifies a number of officials, including Comptroller and Personnel and Readiness officials, as points of contact for questions on the implementation of the guidance. However, in speaking with these officials, none considered themselves responsible for oversight to ensure implementation of reductions in staff augmentation. In the absence of this oversight, officials at some components stated that they had not measured reductions in staff augmentation funding because they had not been directed on how to report the results. 
DOD’s section 808 guidance also lacked clarity in how reductions in staff augmentation funding should be implemented and measured. The guidance notes that these funding reductions were factored into budget requests for fiscal years 2012 and 2013, but does not specify the amounts of these budgeted reductions or the data source that should be used to determine if the reduction was achieved. In response to section 802 of the NDAA for Fiscal Year 2014, which requires DOD to implement reductions in 2014 if they were not achieved in 2012 and 2013, DOD issued supplemental guidance in May 2014 instructing components to report on actions taken to implement staff augmentation reductions in their fiscal year 2013 inventory review certification letters. However, this guidance did not provide any direction to the components on how to apply the statute’s definition of staff augmentation or the data that should be used to measure compliance with the requirement. As a result, components that we spoke with provided varying interpretations of how to report on the staff augmentation requirement and were still determining how to report on these reductions in their 2013 inventory review certification letters. For example, Army manpower officials planned to use a combination of inherently governmental and authorized and unauthorized personal services contractor data reported through the inventory review process. DTRA interpreted the definition of staff augmentation contained in the law as synonymous with closely associated with inherently governmental functions and measured the reduction using the advisory and assistance services category tracked through the department’s financial system. DLA officials planned to identify staff augmentation funding using select product service codes for professional, administrative, and management support services from the Federal Procurement Data System-Next Generation.
However, they noted that it would be challenging to manually verify if all contracts identified were in fact for staff augmentation services. The results of the 2013 inventory review will not be reported until fiscal year 2015, after the statutory requirement to implement reductions has expired. As a result, if DOD components identify any additional reductions needed to comply with section 808 they will have to be implemented outside of the timeframes specified in section 808. In addition, as shown above, the methods used to measure reductions will likely vary among the components. We will assess the fiscal year 2013 inventory review certification letters when they become available and report on the results in 2015. DOD has not fully implemented the steps necessary to effectively manage the section 808 limitations on contract services required by law. By inconsistently excluding categories of services and overestimating the allowable spending, DOD did not accurately measure compliance with contract services spending limits. Implementation of improved fiscal controls by the Air Force helped DOD to better manage contract services spending in fiscal year 2013, but wider use of effective fiscal controls by all defense components could help DOD realize intended efficiencies and effective management of contract services spending. The significant discrepancies among the military departments’ adherence to the contract services spending targets signal that more could be done to ensure that the department has the information necessary to budget and manage contract services spending. Moreover, in the absence of the data necessary to reliably measure reductions in funding associated with closely associated with inherently governmental functions and staff augmentation contracts, DOD is not in a position to know whether required reductions have been achieved. As a result, DOD may need additional time to determine whether those reductions have been implemented.
To ensure that DOD takes action to implement required funding reductions in closely associated with inherently governmental functions and staff augmentation contracts, Congress should consider extending the time period for DOD to achieve the reductions. To ensure the management of the required portfolio of contract services and that required reductions are achieved, we recommend that the Secretary of Defense take the following four actions: Ensure that the Comptroller updates the department’s methodology for determining compliance with the aggregate spending limit for 2014 to: Consistently calculate the civilian personnel adjustment to take into account any categories of services excluded from the spending limit. Adjust the spending limit to exclude research and development obligations from both the limit and actual expenditures as required. Evaluate fiscal controls used by the military departments to identify effective practices and ensure they are consistently implemented to improve the management of contract services spending. Given the limitations of the data available from the inventory of contracted services for fiscal years 2011 and 2012, direct the Office of Personnel and Readiness to identify additional data sources to corroborate the data reported in the fiscal year 2013 inventory to help ensure funding reductions called for in the law are implemented. We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with the four recommendations. The Department concurred with our first recommendation to consistently calculate the civilian personnel adjustment and stated that it plans to reevaluate the civilian personnel adjustment to account for categories of services excluded from the spending limit in the future. In response to the second recommendation, DOD agreed to adjust the spending limit to exclude research and development obligations from the limit and actual expenditures.
DOD concurred with the third recommendation to evaluate fiscal controls and the fourth recommendation to identify data sources that corroborate inventory data, but did not provide any further details on the implementation plans for these actions. We also received technical comments from DOD, which were incorporated as appropriate. We are sending copies of this report to the Secretary of Defense and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives for this review were to determine the extent to which the Department of Defense (DOD) implemented the requirements of section 808 of the National Defense Authorization Act (NDAA) for Fiscal Year 2012 in fiscal years 2012 and 2013 to (1) limit its service contract spending, (2) reduce funding for closely associated with inherently governmental functions by 10 percent each year, and (3) reduce funding for staff augmentation contracts by 10 percent each year. To determine the extent to which DOD implemented the service contract spending limit in fiscal years 2012 and 2013, we reviewed relevant laws and DOD guidance, analyzed Office of the Under Secretary of Defense (OUSD) Comptroller data, and interviewed DOD budget officials. Specifically, we reviewed DOD’s section 808 guidance, issued in June 2012, and compared this guidance to the law. Further, we reviewed the Comptroller’s methodology for calculating the spending limit by analyzing contract services budget and funding data—categorized as object class code 25 by the Office of Management and Budget Circular No. A-11.
To ensure that the total contract services spending data provided by the Comptroller included all contract services expenditures by the department, we compared the Comptroller data to contract services spending reported in the Federal Procurement Data System-Next Generation and found that the data were within a reasonable range and sufficiently reliable for our purposes. To determine the steps taken by individual DOD components to implement controls over contract services spending, we interviewed and collected information from budget officials at the OUSD Comptroller’s Office, the military departments, the Defense Threat Reduction Agency (DTRA) and the Defense Logistics Agency (DLA), which reported the highest obligations for closely associated with inherently governmental functions among the defense agencies in the fiscal year 2012 inventory. To assess the extent to which DOD components reduced funding for closely associated with inherently governmental functions by 10 percent in fiscal year 2012, we reviewed relevant laws, guidance, and data from the inventory of contracted services certification review letters for fiscal years 2011 and 2012, the most recent data available when our review was initiated. We reviewed DOD’s section 808 guidance, issued in June 2012, which identified fiscal year 2011 inventory of contracted services data as the basis to measure reductions in fiscal years 2012 and 2013. We also reviewed DOD’s annual inventory guidance for fiscal years 2011 through 2013 to determine if the information necessary to measure section 808 compliance was required by the guidance. To identify the data available to establish a baseline for the required funding reductions in closely associated with inherently governmental functions, we reviewed prior GAO work on DOD’s fiscal year 2011 inventory and reviewed the certification letters submitted by 29 components for the fiscal year 2011 review. 
In addition, we analyzed the review certification letters submitted by 32 DOD components for the fiscal year 2012 inventory review and compared these letters to those submitted for 2011 to determine if components reported relevant data on funding for closely associated with inherently governmental functions to measure reductions. In addition, we interviewed officials responsible for compiling and reviewing the inventory data at the departments of the Army, Navy, and Air Force, and two selected DOD agencies—DLA and DTRA—that reported the highest obligations for closely associated with inherently governmental functions among the defense agencies in the fiscal year 2012 inventory. As the Army was the only component to identify obligations for closely associated with inherently governmental functions in both years, we interviewed officials from selected Army Commands—the Army Installation Command, Army Materiel Command, and Acquisition Support Center—whose data showed the largest change in closely associated with inherently governmental functions from 2011 through 2012. To assess the extent to which the components implemented the required reduction in funding for staff augmentation contracts by 10 percent each year, we reviewed relevant laws and guidance and interviewed officials from OSD, the military departments, and selected defense agencies. Specifically, we reviewed DOD’s section 808 guidance, issued in June 2012, to determine the steps taken by DOD to implement the reduction in staff augmentation funding. Further, we interviewed OSD officials from the Office of Defense Procurement and Acquisition Policy, the Office of the Under Secretary for Personnel and Readiness, and the Office of Cost Assessment and Program Evaluation to identify available data to measure reductions in staff augmentation funding.
In addition, we interviewed officials from the military departments, DTRA, and DLA to identify the responsible management official at each component and the steps taken to implement the reduction in staff augmentation funding. We conducted this performance audit from March 2014 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, W. William Russell, Assistant Director; Beth Reed Fritts; Jonathan Munetz; and Suzanne Sterling made significant contributions to this review. In addition, Pete Anderson, Virginia Chanley, Julia Kennon, John Krump, and Ozzy Trevino made key contributions to this report.

In fiscal year 2013, DOD reported spending more than $170 billion on contract services—contractors performing functions such as information technology support or maintenance of military equipment—constituting more than half of DOD's total acquisition spending. The National Defense Authorization Act (NDAA) for Fiscal Year 2012, section 808, limited DOD's contract services spending for fiscal years 2012 and 2013 and required reductions in select contract services. Subsequent revisions to the NDAA extended the spending limits through fiscal year 2014. Congress requested and mandated that GAO review DOD's implementation of the required reductions. This report addresses the extent to which DOD implemented, in fiscal years 2012 and 2013: (1) contract services spending limits, (2) 10 percent funding reductions for closely associated with inherently governmental functions, and (3) 10 percent funding reductions for staff augmentation contracts.
GAO reviewed relevant guidance; analyzed DOD financial, inventory, and other contract services data; and interviewed relevant officials. The Department of Defense (DOD) exceeded its identified limit on contract services by $1.72 billion in 2012 and spent $500 million less than the limit in 2013. GAO found that all military departments exceeded their Comptroller-provided spending targets in fiscal year 2012 due to late guidance. In fiscal year 2013, some components improved planning and implemented stronger fiscal controls over contract services, such as monitoring spending during the year, helping DOD meet its limit for fiscal year 2013. However, the Army exceeded its spending target in 2013 due to inaccurate budget estimates and weaknesses in planning, as it did not solicit input on commands' contract services spending plans. Federal internal control standards call for effective control activities that enforce guidance to help ensure stewardship of government resources. Improved planning and consistent implementation of fiscal controls across the department could better position DOD to manage contract services spending. Comparable and timely data are not available to determine if DOD implemented the mandated funding reductions for contracts for functions closely associated with inherently governmental functions—those that put the government at risk of contractors inappropriately influencing government decisions. DOD's guidance calls for reliance on data from the annual inventory of contracted services—an identification of the number of contractors and associated costs for services provided to DOD—to measure required reductions; however, these data did not include the obligation data needed to measure funding reductions in closely associated with inherently governmental functions.
DOD updated its inventory guidance in 2013 to collect such information, but these data will not be comparable to previous years due to changes in methodology and will not be available until fiscal year 2015, after the statutory requirement has expired. Similarly, data are not available to determine if DOD met the required funding reductions for staff augmentation contracts—contractors under the direction of a government official. DOD's guidance did not establish a baseline for staff augmentation or identify the data that should be used to determine if the reductions were achieved. DOD issued supplemental guidance in May 2014 instructing components to report in October 2014 on steps taken to implement these reductions. However, the current statutory requirement expired in September 2014. Congress should consider extending the time period for DOD's implementation of funding reductions in select contract functions. Further, GAO recommends that DOD improve planning and consistently implement fiscal controls to better manage contract services, among other actions. DOD concurred with the recommendations. |
NTSB was established in 1966 as an independent government agency located within the newly formed DOT. In 1974, Congress made NTSB completely separate from DOT. NTSB’s principal responsibility is to promote transportation safety by investigating transportation accidents, determining the probable cause, and issuing recommendations to address safety issues identified during accident investigations. Unlike other transportation agencies, such as the Federal Aviation Administration (FAA), NTSB does not have the authority to promulgate regulations to promote safety, but makes recommendations in its accident reports and safety studies to other agencies that have such regulatory authority. The federal agencies that receive NTSB recommendations include DOT’s FAA, Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), Federal Railroad Administration (FRA), Federal Transit Administration (FTA), National Highway Traffic Safety Administration (NHTSA), Pipeline and Hazardous Materials Safety Administration (PHMSA), and the U.S. Coast Guard. NTSB also makes recommendations to others, such as state transportation authorities and industries. As figure 1 indicates, NTSB has varying degrees of flexibility in its statutory mandate, as it pertains to initiating an investigation. By statute, NTSB has limited discretion in deciding which aviation accidents to investigate and the greatest amount of discretion to investigate highway accidents. NTSB is composed of a five-member board—a chairman, vice chairman, and three members—appointed by the President with the advice and consent of the Senate. The chairman is NTSB’s chief executive and administrative officer. As of March 2006, the board was supported by a staff of 396, which includes 210 investigators assigned to four modal offices—aviation; highway; marine; and rail, pipeline, and hazardous materials. (See fig. 2.)
The agency is headquartered in Washington, D.C., and maintains 10 field offices nationwide and a training academy in Ashburn, Virginia, in suburban Washington, D.C. In recent years, the agency has shrunk in size due to budget constraints, which it has largely dealt with by using attrition to downsize the staff. In 2003, NTSB had 438 full-time employees compared with the current level of 396. During the same period, the number of full-time investigators decreased from 234 to 210. NTSB’s modal offices vary in size, with the aviation office having 125 employees; the rail, pipeline, and hazardous materials office having 38; the highway office having 30; and the marine office having 16 employees as of May 2006. An additional 42 employees work in the Office of Research and Engineering, which provides technical, laboratory, analytical, and engineering support for the modal investigation offices. For example, it is responsible for interpreting data recorders, creating accident computer simulations, and publishing general safety studies. NTSB’s budget increased from $62.9 million in fiscal year 2001 to $76.7 million in fiscal year 2006, or about 22 percent. After adjusting for inflation, this represents an increase of about 9 percent. The President has requested $79.6 million for NTSB in fiscal year 2007. Since 1966, NTSB has investigated over 124,000 aviation accidents and over 10,000 surface transportation accidents. Figure 3 shows the total number of aviation investigations that NTSB has undertaken over the past 6 years and the degree to which NTSB was involved in the investigations. NTSB lacks the resources to conduct on-scene investigations of all aviation accidents. As a result, for general aviation accidents, NTSB delegates the gathering of on-scene information to FAA investigators, as allowed by statute. In these limited investigations, FAA sends the accident information to NTSB, and NTSB then determines a probable cause for the accident.
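The nominal and inflation-adjusted budget growth figures cited above can be checked with simple arithmetic. In this sketch, the cumulative 2001–2006 inflation factor (about 1.12) is an assumption chosen so that the result matches the reported "about 9 percent" real increase; it is not a figure taken from NTSB or GAO.

```python
# Nominal and inflation-adjusted growth in NTSB's budget, fiscal year
# 2001 ($62.9 million) to fiscal year 2006 ($76.7 million).
fy2001, fy2006 = 62.9, 76.7

nominal_growth = (fy2006 / fy2001 - 1) * 100  # about 22 percent

# Assumed cumulative inflation over the period (hypothetical ~12 percent);
# this roughly reproduces the "about 9 percent" real increase cited above.
inflation_factor = 1.12
real_growth = (fy2006 / (fy2001 * inflation_factor) - 1) * 100

print(f"nominal: {nominal_growth:.0f}%, real: {real_growth:.0f}%")
```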
In addition, NTSB participates in the investigations of foreign aviation accidents in conformance with Annex 13 of the International Civil Aviation Organization Treaty. These investigations involve a U.S. carrier or U.S.-built aircraft, or occur at the request of a foreign government. NTSB aviation investigators told us that there is often significant value in participating in such investigations; the findings often have safety implications for U.S. carriers, since most foreign airlines use U.S.-made aircraft, engines, and other parts and multiple foreign air carriers operate within the United States. Through our work governmentwide, we have identified a number of key functional areas and leading practices that are important for managing an agency. This testimony focuses on NTSB’s performance in five key functional areas—strategic planning, performance management, human capital, financial management, and communications—and how NTSB’s practices compare to leading practices in those areas. As illustrated in figure 4, NTSB generally is following leading practices in financial management, only minimally following leading practices in strategic planning, and has mixed results for the other functions. Much of NTSB’s progress toward following leading practices is due to recent management initiatives. The report we will be issuing later this year will provide additional information on NTSB’s performance relative to these five management functions, as well as information technology, acquisition management (including the agency’s use of contracting), knowledge management, and capital decisionmaking. The Congress and the President have encouraged better management of federal agencies by means such as results-oriented strategic planning, but NTSB’s strategic plan generally does not follow performance-based practices.
Without effective short- and long-term planning, federal agencies risk delivering programs and services that may or may not meet the nation’s most critical needs. The Government Performance and Results Act of 1993 (GPRA) and guidance contained in the Office of Management and Budget’s (OMB) Circular A-11 provide performance-based strategic planning guidelines. GPRA was intended to achieve several broad purposes, including improving federal program effectiveness, accountability, and service delivery, and enhancing congressional decision making by providing more objective information on program performance. GPRA requires federal agencies to develop strategic plans in which they define their missions, establish results-oriented goals, and identify the strategies that will be needed to achieve those goals. For instance, GPRA requires strategic plan updates at least every 3 years, and requires that agencies set objectives and goals that are specific outcomes that the organization wishes to accomplish (called outcome-related objectives). To its credit, in December 2005, NTSB issued a strategic plan for the years 2006 through 2010, which was the first time the agency had a strategic plan in 6 years. In developing that plan, senior agency officials told us that they modeled their plan on examples from other federal agencies with similar structure and mission, such as the Federal Communications Commission. We compared NTSB’s strategic plan to selected elements required by GPRA. (See fig. 5.) While NTSB’s 5-year strategic plan has a mission statement, four general goals and related objectives, and mentions key factors, such as declining resources, that could affect the agency’s ability to achieve those goals, the plan lacks a number of key elements—including information about the operational processes; skills and technology; and the human, capital, and information resources—required to meet the goals and objectives.
In addition, the goals and objectives lack sufficient specificity to know whether they have been achieved. One goal states “NTSB will maintain its response capacity for investigation of accidents and increase its analysis of incidents.” An objective of that goal is to “continuously assess the most robust and efficient approaches to accident investigation.” Although such a goal is important for the safety of the transportation industry, this and the other three goals and related objectives are not measurable. As a result, it will be difficult for NTSB and others to determine if the goals have been achieved. In addition, the plan lacks specific strategies for achieving those goals. According to GPRA, the strategies should include a description of the operational processes, skills and technology, and the resources required to meet the goals and objectives. Since NTSB’s strategic plan lacks such a description, it does not align staffing, training, or other human resource management with strategic goals. That is, the plan does not explicitly explain how NTSB will use its resources to meet its mission and goals. While the plan explains that each program office has its own objectives linked to the agency’s goals and objectives, the plan contains no information to understand how each office contributes to those goals and objectives. In addition, NTSB’s strategic plan does not describe how the performance goals contained in the annual performance plan are related to the general goals and objectives in the strategic plan, as required by GPRA. GPRA also requires federal agencies to provide a description in their strategic plans of the program evaluations used in establishing or revising general goals and objectives and a schedule for future program evaluations. NTSB’s strategic plan lacks this information. Because NTSB has conducted no program evaluations, it is unclear how or whether the agency reviews its efforts to identify strengths it can maximize and weaknesses it should address.
In developing a strategic plan, GPRA requires agencies to consult with Congress and other stakeholders. We have previously reported that other stakeholders of federal agencies include state and local governments, other federal agencies, interest groups, and agency employees. NTSB’s strategic plan does not mention consultation with any stakeholders in its development. Furthermore, board members and agency staff told us that they had no involvement in the development of the strategic plan. Some current and past board members additionally stated that they believed that their involvement would be beneficial in providing a strategic vision for the agency. NTSB’s senior management told us they expect to revise the strategic plan in the near future and contacted us regarding assistance to develop a more comprehensive, results-oriented plan as part of this study. NTSB has begun to develop a performance management system that should eventually link each individual’s performance throughout the agency to the agency’s strategic goals and objectives. We have reported that performance management systems are crucial for agencies because, if developed properly, they allow employees to make meaningful contributions that directly contribute to agency goals. NTSB has developed a comprehensive performance management plan for Senior Executive Service (SES) employees that links individual performance to strategic goals. Furthermore, the plan states that NTSB will link performance management with the agency’s results-oriented goals and set and communicate individual and organizational goals and expectations. This plan establishes individual performance criteria and the appraisal process. The appraisal process defines performance standards and explains performance elements that determine individual ratings.
Because NTSB recognizes in this plan the importance of aligning organizational performance with individual performance and contributions to the agency’s mission, the performance management plan is a step in the right direction. Along with the SES plan, NTSB issued in August 2005 a performance plan for its overall workforce, which includes some elements of linking individual performance to organizational goals. However, without having results-oriented goals in the strategic plan itself, neither of the two performance management plans is fully functional. That is, until NTSB’s goals are more fully articulated in the strategic plan, it will be impossible for staff to know whether their performance contributes to meeting those goals. As with the strategic plan, NTSB staff was not involved in the development of the performance plan, and there was no mechanism for employee feedback after the plan was initially developed. Employee involvement provides greater assurance that policies are accepted and implemented because employees had a stake in their development. NTSB developed a draft agencywide staffing plan in December 2005 that follows several leading practices but lacks a workforce deployment strategy that considers the organizational structure and its balance of supervisory and nonsupervisory positions. Existing strategic workforce planning tools and models suggest that certain principles should be followed in strategic workforce planning, such as determining the agency’s skills and competencies needs; involving stakeholders (e.g., management and employees) in the planning process; and developing succession plans to anticipate upcoming employee retirement and workforce shifts. Further, in workforce deployment, it is important to have human capital strategies to avoid excess organizational layers and to properly balance supervisory and nonsupervisory positions.
NTSB’s draft staffing plan addresses the agency’s skills and competencies needs and includes strategies to deal with workforce shifts. For example, the staffing plan proposes to increase the number of investigative staff by 21, which will help with the agency’s resource needs. In addition, while some stakeholders (i.e., managers) were involved in the planning process, employees were not included. As we mentioned previously in this testimony, employee input provides greater assurance that policies are accepted and implemented because employees have a stake in their development. To develop the staffing plan, each modal office director submitted to NTSB’s Managing Director an ideal staff size for his office, including additional slots for investigators. The increase in investigative staff is consistent with requests by modal offices to enhance their ability to conduct their investigative mission. Managers told us that current staffing constraints inhibited their ability to conduct more accident investigations and indicated an increase in staff would be helpful. For example, directors of the highway and rail/pipeline offices told us they could not initiate investigations on more than two accidents at a time because they lacked sufficient investigative staff to do more. The modal office directors’ requests for staff resulted in a total agency allotment of 455 full-time equivalents (FTEs) plus 20 co-op positions. The Managing Director reduced this number to 404, which corresponds to NTSB’s current funding level of 395, allowing for attrition and turnover. The Managing Director’s allocation resulted in a proposed increase of 21 investigators agencywide and a proposed reduction of certain staff positions to accommodate the increase in investigators. This increase in investigative staff is consistent with a recommendation by the RAND Corporation, which evaluated NTSB’s accident investigation process and workload in 1999.
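The staffing arithmetic described above can be checked directly. The totals below come from the draft plan as reported; the variable names are our own, and the sketch covers only the net FTE totals, not position-level detail or the 20 co-op slots.

```python
# Net effect of the Managing Director's adjustment to NTSB's draft
# staffing plan (FTE totals as reported; co-op slots excluded).
requested_ftes = 455  # modal directors' combined ideal staff size
allocated_ftes = 404  # Managing Director's reduced allocation
funded_level = 395    # current funding level

reduction = requested_ftes - allocated_ftes  # FTEs trimmed from the request
cushion = allocated_ftes - funded_level      # slack absorbed by attrition/turnover

print(f"request trimmed by {reduction} FTEs; {cushion} FTEs above funded level")
```

The 9-FTE gap between the allocation and the funded level is what the plan expects attrition and turnover to absorb.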
To help implement the realignment, senior managers told us that they would like to transition some existing administrative and support staff with appropriate background and training into investigator roles where possible. The draft plan set a target date of May 2006 to begin creating developmental opportunities for staff to transition to investigative roles and to develop reduction strategies for staff that fall outside the staffing plan. Training is another key area of human capital management. It is important for agencies to develop a strategic approach to training their workforces, which involves establishing training priorities and leveraging investments in training to achieve agency results; identifying specific training initiatives that improve individual and agency performance; ensuring effective and efficient delivery of training opportunities in an environment that supports learning and change; and demonstrating how training efforts contribute to improved performance and results. NTSB has not developed a strategic training plan, nor has it identified the core competencies needed to support its mission and a curriculum to develop those competencies. Without a core curriculum linked to those competencies, NTSB lacks assurance that the courses that staff take provide the technical knowledge and skills necessary for them to be competent for the type of work they perform. Sound financial management is crucial for responsible stewardship of federal resources. In recent years, NTSB has made significant progress in improving its financial management. In March 2001, NTSB hired a Chief Financial Officer who has emphasized the importance of sound financial management based on best practices. Similar to private sector companies, government agencies are required to report their financial condition in publicly available financial statements.
As a result of actions taken by NTSB, the agency received an unqualified or “clean” opinion from independent auditors on its financial statements for the fiscal years ending September 30, 2003, 2004, and 2005. The audit report concluded that NTSB’s financial statements presented fairly, in all material respects, the financial position, net cost, changes in net position, budgetary resources, and financing in conformity with generally accepted accounting principles for the three years. NTSB has also improved its purchasing and contracting activities after identifying problems in those areas in 1999. In 2001, DOT’s Office of Inspector General (DOTIG) reviewed the agency’s contracting and procurement activities and recommended that NTSB institute accountability and controls in its purchase card program as well as other purchasing activities. As a result of this and another DOTIG audit, NTSB has taken a number of initiatives to improve its purchasing and contracting activities. For example, NTSB restructured its purchase card system and guidelines to address problems, such as unrestrained and unapproved purchases on government credit cards. NTSB hired a manager of the contracting function to manage the agency’s acquisition function and implement the DOTIG recommendations. In our full report, we will analyze some of these initiatives in more detail. In 2000, RAND recommended that NTSB develop systems that would allow the agency to better manage its resources by permitting full-cost accounting of all agency activities. To accomplish this, RAND recommended putting in place a timekeeping system, in which individual project numbers would be assigned to each investigation and to support activities such as training. With this information, project managers could better understand how staff resources were utilized and project workload could be actively monitored by the Managing Director.
NTSB has begun to implement this recommendation by upgrading a software system in November 2005 that tracks employee annual leave and sick leave. However, the system is not being fully utilized to track the number of hours staff spend on each investigation. Also, this system is not used to track time staff spend in training or at conferences. As a result, RAND’s previous conclusion that “NTSB managers have little information they can use to plan the utilization of staff resources or manage staff workloads properly” remains current. We have identified useful practices related to managing employees that include seeking and monitoring employee attitudes, encouraging two-way communication between employees and management, and incorporating employee feedback into new policies and procedures. In response to issues raised by NTSB employees in a governmentwide survey conducted by OPM in 2004, NTSB’s senior management made changes to improve the way it is communicating information to staff. For example, the Managing Director periodically sends “management advisory” e-mails to all staff that share information such as policy changes or new developments at the agency. However, we found no formal processes that encouraged two-way communication, such as town hall meetings, regular staff meetings, or anonymous employee surveys, or that incorporated employee feedback into policy-making. The 23 investigators and writer-editors with whom we spoke had mixed views on the effectiveness of communications within the agency. The four investigators from one modal office that we spoke with told us that they are pleased to now hear about policy changes at the agency, but said that there is too much reliance on the Internet for these communications. They also told us that although they believe the increased communications are positive, they found it difficult to find the time to read the material and still conduct their regular investigative duties.
The four investigators that we spoke with from another modal office agreed that staff meetings occur infrequently and that they do not receive information on new policies from their managers. Further, they said that new policies or agency issues are not discussed with staff prior to issuance, and there was no formal mechanism to provide feedback during the policies’ development. In the past, regular formal meetings occurred between union leadership and senior NTSB management, which allowed for such input, but that practice ceased. Although formal communication processes from the staff level to management are lacking, informal e-mail communications do take place occasionally between staff and senior management. Communication and collaboration across offices at all levels can improve an agency’s ability to carry out its mission by providing opportunities to share best practices and helping to ensure that any needed input is provided in a timely manner. We found that communication and collaboration between the Research and Engineering office and the modal offices appear to be regular. This is shown by the inclusion of Research and Engineering staff as core members of major investigative teams. Also, our review of workload in the Research and Engineering office shows a large number of projects that support all modes, and a Research and Engineering manager told us that his office frequently interacts with investigative staff. In contrast, NTSB lacks processes that would allow investigators and writer editors to communicate across the modal offices regarding the investigative process and other issues, according to staff we spoke with. The four investigators that we spoke with from one modal office told us that they are isolated from the rest of the agency and that lessons learned are not shared across offices. 
The investigators from another modal office told us that they are on permanent teams that share the same priorities in completing accident analysis, which enhances communication and teamwork in the office. In addition, in previous years, all writer editors were located in one group and reported directly to the Managing Director. Now, each modal office has its own staff of writer editors. While they have retained personal working relationships from when they were located in the same office, four of the eight writer editors we spoke with said that they no longer share information with each other regularly. As a result, efficiencies and lessons learned that investigators and writer editors in one office develop might not be shared with other offices. However, NTSB officials pointed out that every 6 months writer editors have the opportunity to meet with the publications specialist for training and to exchange information. While NTSB is accomplishing its accident investigation mission, it faces challenges that affect the efficiency of the report production and recommendation close-out processes. In terms of accomplishing its mission, since its inception, NTSB has investigated over 134,000 transportation accidents. Eighty-two percent of its recommendations have been “accepted,” a term NTSB uses to include recommendations that recipients have said they would implement as well as those that have already been implemented. Figure 6 shows that highway recommendations have the highest acceptance rate and marine recommendations have the lowest. Investigations have four phases—the “launch,” fact finding, analysis, and report production. After a report is issued and recommendations made, the progress of implementing the recommendations is tracked during a fifth close-out phase. Figure 7 describes these phases. Investigations are often lengthy and sometimes necessarily so. NTSB routinely takes longer than 2 years to complete major aviation investigations. 
For example, the total time to complete major aviation investigations has increased from an average of about 1.25 years in 1996 to an average of almost 3.5 years in 2006. (See fig. 8.) In 2004, NTSB contracted with Booz Allen Hamilton to examine and make recommendations to improve the report development process and the recommendation close-out process. Booz Allen Hamilton reported that the average time to complete major investigations across all the modes was either 1.8 or 1.9 years for 4 out of 5 years. Lengthy investigations, combined with lengthy processes for federal agencies to develop regulations based on NTSB’s recommendations, and for industries to implement them, can work against the goal of improving transportation safety. One factor that adds to the duration of investigations is that when new investigations are launched, investigators are pulled from working on previous accidents to work on new ones. For example, when a major commercial aviation accident occurs, an NTSB “go team” is dispatched from Washington, D.C., usually within hours of notification of the accident. In such cases, the team members must leave the investigations they had been working on to begin fact-finding on the new accident. In the cases of rail and highway accidents, NTSB investigators must also arrive quickly on scene to gather information because the accident scenes will be cleared quickly so that traffic can resume. The manager of one department told us that all of his ongoing reports would be delayed by 2 months if a sudden launch were to occur. The number of major investigations that are ongoing for each mode is shown in figure 9. Another reason for the lengthy time frame for accident investigations is that reports receive multiple revisions at different levels in the organization, including the office directors and the Managing Director’s office, prior to going to the board members for final voting and approval of the draft report. 
An investigation report typically goes through the following reviews: the modal office, the Office of Research and Engineering, the Executive Secretariat, the Office of Safety Recommendations, the Office of General Counsel, the deputy managing director, the Managing Director’s office, and each board member and the Chairman. For any review, there may be multiple iterations. Eleven investigators and six writer editors told us that the review process often results in improved clarity for report recommendations. However, investigators and writer editors also told us that they believe the levels of management review and approval for written products are excessive. All eight writer editors agreed that the reviews by the Executive Secretariat’s office, which serves a quality assurance function, were a bottleneck for getting products approved. They told us that it is common for correspondence and other products to be delayed in this office for 1 week or more, which they viewed as excessive. While it may be a reasonable expectation for short products, such as correspondence, to be reviewed in less than a week, that expectation may not be reasonable for reports. Booz Allen Hamilton confirmed multiple iterations of review as the draft was routed through numerous offices. On average, Booz Allen Hamilton found 7 levels of review within a given modal office, resulting in an average of 28 separate reviews. A senior NTSB official stated that the many levels of review were needed to get the appropriate perspectives from relevant offices that had been involved in report development, such as the Research and Engineering Office and Safety Recommendation Office. The official also noted that the process can be streamlined on a case-by-case basis in which the usual process of sequential reviews is replaced with concurrent reviews. The NTSB official told us that there are no explicit criteria for determining when the streamlined process could be used. 
NTSB staff with whom we spoke reported that resource issues contributed to other bottlenecks. For example, four writer editors pointed out that NTSB has only one final layout and typesetting person. As of May 2006, the final layout process had a backlog of approximately 10 reports that had been approved for issuance at board meetings but had not yet been published. NTSB adopts about 2 reports a month and issues on average 4 reports a month. In addition, some investigators perceive the workload of writer editors as another bottleneck. For example, one investigator told us that he submitted draft reports to the senior writer editor in September 2005, and as of April 2006, no additional writing had been done on his project. Writer editors from each modal office told us they typically worked on five or more products at one time. NTSB has recently taken several actions that, along with potentially better practices in one modal office, may help shorten report development time. First, in response to a recommendation by Booz Allen Hamilton to gain management’s buy-in to the report message before writing the report and thereby reduce the number of review iterations, NTSB management has reemphasized its policy for report development meetings. NTSB has a long-standing order that calls for holding message development meetings with internal stakeholders who will be reviewing the report prior to report writing. According to a senior NTSB official, however, the agency had stopped following that policy before Booz Allen Hamilton conducted its study in 2004. The official further stated that subsequent to that recommendation, NTSB’s Managing Director sent a memorandum reminding staff to follow the policy. While NTSB has no data on whether the message development meetings are actually taking place, officials told us that the Managing Director’s recent emphasis on these meetings was resulting in more of them occurring than in previous years. 
Second, since the spring of 2005, NTSB has initiated production meetings with senior management with the goal of reducing the duration of investigations. These meetings occur every 2 weeks and focus on report development and production. NTSB modal directors are held accountable for a specific issuance date within a 6-month planning window prior to issuing a report. During the biweekly meetings, the directors discuss with NTSB’s Managing Director and senior executives their progress and commitments to complete the investigations. The meetings result in a production schedule that is available for subsequent review. The modal directors stated that they believe the new system is effective in reducing the duration of investigations; however, because these meetings began so recently, it is too early to evaluate their effectiveness. Third, the highway office—which has the swiftest rate of accident investigation completion—uses a concept called a “project manager,” who serves as a supervisory writer editor and interface between the investigative staff and the writer editor staff. As a result, the project manager assumes some of the report development roles typically supported by the investigators-in-charge. In comparison, investigators-in-charge in the marine and rail, pipeline, and hazardous materials offices submit a draft report to the writer editor, who then edits and sometimes substantially rewrites the report. In aviation, investigators-in-charge do not write reports, but rather writer editors develop the final report from interim technical reports drafted by specialists on the team. Booz Allen Hamilton recommended that all modes use a project manager or deputy investigator-in-charge so that the expertise of staff can be used more fully. In addition, such a practice might alleviate some of the workload issues that writer editors face as they complete multiple reports. 
NTSB managers told us that they agree with this recommendation, but they have not implemented it or developed any milestones for implementation. Fourth, the highway safety office uses an incentive system for performance on developing reports. Booz Allen Hamilton reported that the highway safety office rewards staff with a cash bonus for meeting key deadlines for producing accident reports. Again, the study recommended that the highway program be used as a model for the other modal offices. The study further recommended that the incentive program be slightly modified so that the incentives are based on delivering reports before deadlines, rather than meeting deadlines. In that way, the average time standard would be tightened and the overall report development time would be shortened. According to NTSB officials, they are currently examining how to implement improved awards and incentive programs that will result in improved quality and timeliness of report products. The processes for federal transportation agencies to implement NTSB’s safety recommendations, and for NTSB to change the status of recommendations it has made, are also lengthy because they are complex and involve many players. As of May 2006, 305 of NTSB’s 852 open recommendations had been open for 5 years or more. Lengthy processes for federal agencies to develop regulations to implement NTSB’s safety recommendations and for industries to comply can work against the goal of quickly improving transportation safety. In addition, the lengthy, paper-based process for changing the status of recommendations ties up NTSB’s scarce resources. The length of time that NTSB recommendations remain open is due, in part, to challenges faced by federal transportation agencies in implementing those recommendations, particularly those that require changes to federal regulations, which take many years to complete. 
DOT modal officials with whom we spoke cited a lengthy rule-making process, which includes budgeting and allocating resources to develop the proposed regulation, drafting and receiving comments on proposed rules, and waiting for the industry’s subsequent response to implement the final rule. For example, TWA flight 800 crashed off Long Island in July 1996; NTSB issued safety recommendations pertaining to explosive fuel tanks in December 1996. NTSB adopted the accident report with further recommendations to FAA to reduce flammable vapors in aircraft fuel tanks in 2000; FAA issued a notice of proposed rulemaking to address this recommendation in November 2005; the comment period for the notice ended on March 23, 2006. Thus, 10 years after the crash, the final rule has not been issued. Federal transportation officials also said the failure to satisfy a cost-benefit analysis might impede the implementation of NTSB recommendations. Although NTSB is required to consider only the safety implications of its recommendations, not their costs, OMB cannot approve a proposed regulation that is not cost-beneficial. Federal officials with whom we spoke at DOT, which receives the bulk of NTSB recommendations, indicated that they have been working with NTSB to find acceptable means of implementing recommendations. The process—recently called Safety With a Team—is designed for NTSB and federal agencies to work in cooperation to address open recommendations and implement needed safety improvements. NTSB and DOT officials told us that this process contributed to the closing of many recommendations. However, the process is not used with the Coast Guard, which has the lowest rate—74 percent—for accepting NTSB recommendations among the modes, as mentioned previously. 
According to a Coast Guard official we spoke with, the Coast Guard believes that it has an acceptable rate for closing NTSB recommendations and that it does not intend to act on recommendations it deems unnecessary. NTSB recognizes that open recommendations can have serious safety implications for the transportation industry. To spur implementation, the agency also publishes a “most wanted” list of what it considers the most serious safety concerns. For example, in 2000 NTSB added to its most wanted list the need to improve the safety of motor carrier operations. NTSB recommended that FMCSA prevent motor carriers from operating if they put vehicles with mechanical problems on the road or unqualified drivers behind the wheel. As recently as May 2006, NTSB issued an additional recommendation that FMCSA “establish a program to verify that motor carriers have ceased operations after the effective date of revocation of operating authority.” The process that NTSB uses to change the status of or close out safety recommendations is paper-based and labor-intensive and relies on a series of sequential reviews; this process can take between 6 and 12 weeks. As a result, NTSB is delayed in communicating to agencies whether it considers the actions taken to address a recommendation sufficient to close it. Consequently, agencies remain unaware whether their responses have been accepted. And in the case of DOT, this lack of information affects its ability to accurately report annually to Congress on the status of implementing NTSB’s recommendations in all its modal administrations. The process of closing recommendations is managed by NTSB’s Safety Recommendation Office, which has responsibility for maintaining a recommendations database and administering the paper flow to change the status of recommendations. 
Adding complexity to the process—which NTSB calls the “mail control process”—is the fact that there are 12 separate categories of recommendation status. The 12 categories are listed in figure 10, which also shows the percentage of recommendations in each category as of May 1, 2006. The process begins when NTSB receives documentation from the recommendation recipient that would change the recommendation’s status. The Safety Recommendation Office generates paper folders and supervises a process that is summarized in figure 11. This process involves multiple, sequential approvals starting from the Safety Recommendation Office, to the modal offices and Research and Engineering Office, to the Managing Director’s office, to the board members for final approval. Since none of these reviews happen concurrently, some 150 folders are in process at any given time, according to the director of the Safety Recommendations Office. There are no electronic communications or approvals throughout the process. In its study of NTSB, Booz Allen Hamilton identified this as an inefficient process. Officials at NTSB agree that efficiencies could be gained in this process and are considering eventually computerizing a number of processes such as this one. The agency expects to develop such plans after hiring a chief information officer later this year. Although there is no statutory requirement that NTSB’s academy generate sufficient revenues to cover its costs, in July 2005, NTSB was encouraged in the Senate report accompanying the Fiscal Year 2006 DOT Appropriations Act to be more aggressive in imposing and collecting fees to cover the costs. The academy generates revenues through tuition fees, space rental to other agencies for events such as conferences, and contracts with federal agencies that would allow them to use academy space for “continuity of operations” in emergency situations. 
To the extent that NTSB maximizes the use of the academy, it can produce additional revenues that may help cover costs. For the first 2 full years of operation, fiscal years 2004 and 2005, NTSB’s academy did not generate sufficient revenues to cover the costs of providing training, as shown in table 1. As a result, those portions of the academy’s costs that were not covered by the revenues from tuition and other sources—approximately $6.3 million in fiscal year 2004 and $3.9 million in fiscal year 2005—were offset by general appropriations to the agency. The salaries and other personnel-related expenses associated with NTSB investigators and managers teaching at the academy, which would be appropriate to include in academy costs, are not included in table 1 because NTSB told us it has chosen not to account for expenses in that manner. In addition, NTSB lacks a full cost-accounting system that would facilitate doing so. The table shows expenses directly associated with the academy and does not include an allocation of agencywide supporting services, such as the Managing Director’s office, information technology, human resources, and legal support. Some of the expenses during these 2 years were one-time expenses—such as over $125,000 for furniture and equipment (included in table 1 as office supplies for fiscal year 2005) and $499,000 to move the wreckage of the TWA flight 800 airplane from storage near the crash site in New York to the academy (included in the table as miscellaneous government contract services in fiscal year 2004). Space rental is a fixed annual expense of about $2.5 million. When that fixed expense is excluded from academy expenses, the remaining operating expenses exceeded revenues by about $3.7 million in fiscal year 2004 and about $1.4 million the subsequent year. 
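The arithmetic behind these figures can be sketched briefly; the function below simply nets the roughly $2.5 million fixed rent out of each year's total shortfall, using the dollar amounts cited above (in millions). For fiscal year 2004, this back-of-the-envelope subtraction yields about $3.8 million rather than the $3.7 million stated, a difference attributable to rounding of the underlying unrounded figures.

```python
# Rough check of the academy cost-recovery figures (dollar amounts in millions).
# The shortfall and rent values come from the testimony; the function only
# backs out the operating shortfall once the fixed lease expense is excluded.
FIXED_RENT = 2.5  # annual space rental, per the testimony


def operating_shortfall(total_shortfall: float, fixed_rent: float = FIXED_RENT) -> float:
    """Shortfall attributable to operations, net of the fixed rent."""
    return round(total_shortfall - fixed_rent, 1)


# Fiscal year 2005: $3.9M total shortfall less $2.5M rent leaves about $1.4M.
print(operating_shortfall(3.9))  # → 1.4
# Fiscal year 2004: $6.3M less $2.5M is about $3.8M (testimony rounds to $3.7M).
print(operating_shortfall(6.3))  # → 3.8
```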
In addition, while some courses presented during the first 2 years of academy operation did not recover the costs that NTSB attributes to them, revenues from other courses exceeded the cost. Of the 49 class sessions provided at the academy in fiscal years 2004 and 2005, revenues from 14 sessions, all of which occurred in fiscal year 2005, did not recover their cost, while revenues from the remaining sessions exceeded the cost. According to the academy’s deputy manager, courses are only expected to generate enough revenues to offset the costs specifically attributed to the course, with some additional allocation for research and development of other programs and, if possible, other academy costs. Accordingly, tuition prices are determined by estimating those costs (such as course materials, contracted instructors, and their travel expenses) and dividing that cost by the projected class size. Costs such as the building lease, maintenance, building security, and academy personnel are not allocated to the costs of individual courses. In addition, consideration is given to setting tuition at a level that is competitive with similar courses by other institutions and that is not prohibitively high for prospective students from government agencies, according to the academy official. Other sources of revenue are needed for NTSB to be able to recover the full costs of the academy. For fiscal year 2004, over $12,000 in revenue (about 5 percent of total revenues) was collected from sources other than course fees to cover some of those costs. For fiscal year 2005, the revenue from other sources increased to over $91,000 (about 14 percent of total revenues). Other sources of income during these 2 years included renting space to other organizations, such as the Society of Automotive Engineers, George Washington University, and the National Association of State Boating Law Administrators for meetings, conferences, and boat storage. 
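The tuition-setting approach described above can be sketched as follows; the cost categories mirror those named in the text, but all dollar figures and the class size are hypothetical, for illustration only.

```python
# Sketch of the academy's described pricing approach: sum the costs directly
# attributed to a course, then divide by the projected class size. Fixed costs
# such as the building lease are deliberately excluded, as the text notes.


def tuition_per_student(direct_costs: dict, projected_class_size: int) -> float:
    """Per-student tuition that recovers only the course's direct costs."""
    return sum(direct_costs.values()) / projected_class_size


# Hypothetical figures for illustration only.
costs = {
    "course_materials": 4_000,
    "contracted_instructors": 18_000,
    "instructor_travel": 3_000,
}
print(tuition_per_student(costs, projected_class_size=25))  # → 1000.0
```

In practice, the text notes that the resulting price is then adjusted for competitiveness with similar courses and affordability for government students.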
In addition, NTSB has contracted with two agencies—the Federal Energy Regulatory Commission and the Virginia Circuit Courts—for continuity of operations. According to NTSB officials, the agency has explored this option with other organizations, but has not found others who will pay a yearly retainer for the service. While NTSB has taken action to generate revenue from other sources, it does not have a business plan or marketing strategy that seeks to optimize opportunities for additional revenues. According to the academy’s deputy manager, NTSB plans to develop a business plan. The agency, however, has no time frames for doing so. Our analysis of the academy lease indicates that NTSB has the flexibility to use the facility in other ways to generate revenues or potentially reduce costs. For example, the lease does not preclude NTSB from subletting unused space to other users. Since certain space is already configured as classrooms and the academy is located in an academic setting on George Washington University’s suburban Virginia campus, it may be possible to market space to academic users. Furthermore, NTSB is not precluded by its academy lease or its lease for headquarters space in Washington, D.C., from relocating some headquarters staff to the Virginia facility. The lease for the office space in Washington, D.C., expires in 2011. Such a move, however, would incur one-time costs that include relocating staff, moving furniture and equipment, and reconfiguring space and utilities, as well as recurring travel costs for staff who must travel between the two locations. Such costs would have to be weighed against the reduced cost of leasing less space in Washington, D.C. NTSB has not maximized the use of the facility, which could generate additional revenues that may help cover costs. We estimate that, overall, less than 10 percent of the total classroom space was used during fiscal year 2005. 
As shown in figure 12, none of the five classrooms was used for 21 weeks in fiscal year 2005. In addition, at any given time, no more than three classrooms were in use. Figure 12 shows the days in which classroom space was used for 31 class sessions and 12 other events, such as workshops and seminars by organizations that rented the space during fiscal year 2005. While a relatively small percentage of the academy’s students have been NTSB staff, the agency is making efforts to increase their enrollment at the academy. About 20 percent of the academy’s approximately 1,000 students in fiscal year 2004 were NTSB staff, and about 14 percent of the 1,400 students in fiscal year 2005 were NTSB staff. Over the 2 years, about 400 NTSB students attended 38 of the 49 class sessions conducted at the academy during fiscal years 2004 and 2005. (See fig. 13.) NTSB is making efforts to have staff more fully utilize the facility. In fiscal year 2004, 1 of 18 sessions was only for NTSB investigators; in fiscal year 2005, 5 of 31 sessions were only for NTSB investigators. While increasing the use of the academy by NTSB staff would reduce the costs of sending them to external training, it is important that NTSB not reduce the number of external, paying students in the process. NTSB staff receive most of their training from outside the academy, which may be because academy courses lack the subject matter that staff require. Our analysis of staff training requests for fiscal year 2006 showed that 97 percent of all training is expected to be from external sources and the remaining training from NTSB’s academy. NTSB staff have requested external training provided by organizations that include FAA’s Transportation Safety Institute, the University of Southern California, the U.S. Department of Agriculture, and Kettering University for training in subjects such as human factors in aviation safety, turbine engine investigation, or automotive design and safety. 
Training requests cover other specialties such as helicopter training, flight training currency for pilots, technical writing, supervisory and management skills, and industry conferences. Investigators and writer editors with whom we spoke had positive views on the quality of academy training courses but provided several reasons for not taking further courses there. Ten of the 23 investigators and writer editors we interviewed told us that they had taken (or taught) courses at the academy and thought the courses were excellent; none of the investigators and writer editors had anything negative to say about the quality of any academy course. However, none of the staff we talked with had plans to attend academy training in fiscal year 2007. One reason noted for this situation was the remoteness of Ashburn, Virginia, from their residences. Another reason was the lack of courses on new transportation technologies and the skills and competencies needed by an investigator-in-charge. Eight investigators told us that they find workshops by manufacturers, such as aircraft and automobile manufacturers, more valuable to their work than academy training. The academy is not utilized more by NTSB staff, in part, because the agency has not developed a core curriculum for its staff that could then be offered at the academy, as mentioned previously in this testimony. The academy offers only one course required for NTSB staff—a 2-week course on aviation accident investigation for new NTSB investigators. The deputy manager of the academy told us that the academy plans to eventually offer more internal training covering subjects such as management skills, retirement, and computers. However, no milestones or specific plans have been established for that effort. Although most students at the academy are from outside NTSB, several factors can affect the agency’s ability to attract additional outside students. 
First, the lack of a business or marketing plan may be affecting NTSB’s ability to fully utilize the academy. Second, academy training is similar to training provided by other institutions. FRA, FAA, and PHMSA officials told us that their investigators do not attend NTSB training because similar training is provided in-house by DOT’s Transportation Safety Institute. For example, an FAA investigator told us that new investigators take a basic accident investigation course at the Transportation Safety Institute and subsequently take mid-career follow-up courses there. Furthermore, our comparison of NTSB’s fiscal year 2006 curriculum with that of several other institutions that teach courses on accident investigations showed that other institutions offered courses similar to 12 of NTSB’s 19 courses. For example, DOT’s Transportation Safety Institute offers basic courses on aviation and bus accident investigations, and the University of Southern California offers a course on human factors related to accident investigations. You asked that we provide information concerning the academy’s use of NTSB investigators as instructors and NTSB’s compliance with the Anti-Deficiency Act, with regard to its accounting for its academy lease. Concerning the first issue, academy courses are taught by a combination of academy staff, NTSB investigators and managers, and contractors. Use of investigators as instructors is limited and is likely to have little impact on investigators’ overall workload. During fiscal year 2005, 51 NTSB investigators or managers taught at the academy. On average, they spent an estimated 22 hours to prepare for and teach courses. (See fig. 14.) Finally, NTSB classified its lease for the academy as an operating lease rather than a capital lease. 
As a result, NTSB has been noncompliant with the Anti-Deficiency Act because it did not obtain budget authority for the net present value of the entire 20-year lease obligation at the time the lease agreement was signed in 2001. NTSB realized the error in 2003 and reported its noncompliance to Congress and the President. NTSB has proposed in the President’s fiscal year 2007 budget to remedy this Anti-Deficiency Act violation by inserting an amendment in its fiscal year 2007 appropriation that would allow NTSB to fund this obligation from its salaries and expenses account through fiscal year 2020. Mr. Chairman, we have developed several conclusions from our analysis of NTSB to date. To the credit of the current leadership at NTSB, much of the agency’s progress toward following leading practices is due to recent management initiatives. The performance management plan, draft staffing plan, and implementation of controls over financial transactions are all positive steps. NTSB’s progress in these areas will likely remain incomplete without additional actions, however. For example, without a more comprehensive strategic plan than it currently has, NTSB cannot align staffing, training, or other human resource management to its strategic goals or align its organizational structure and layers of management with the plan. NTSB will also likely miss opportunities to strengthen the management of the agency until it develops a strategic training plan for its employees, implements a full cost-accounting system, and improves communications within the agency. We have also concluded that, despite the many safety recommendations NTSB has made and seen implemented over the years of its existence, inefficiencies have resulted from the process that the agency uses to close out safety recommendations. 
In particular, the absence of a computerized documentation system and the sequential reviews that NTSB currently requires slow the process and prevent expedient delivery of information about recommendation status to affected agencies. Finally, in terms of its academy, NTSB is missing opportunities to increase the value of this asset. Without a comprehensive marketing plan, NTSB will likely be unable to efficiently attract users who would help pay the ongoing costs of the facility. To improve the efficiency of agency operations, we are making eight recommendations to the Chairman of the National Transportation Safety Board based on our completed work to date. To improve agency performance in the key functional management areas of strategic planning, human capital planning, financial management, and communications, we recommend that the Chairman implement the following three recommendations: Improve strategic planning by developing a revised strategic plan that follows performance-based practices; developing a strategic training plan that is aligned with the revised strategic plan and identifies skill gaps that pose obstacles to meeting the agency’s strategic goals and a curriculum that would eliminate these gaps; and aligning its organizational structure to implement the strategic plan and eliminate unnecessary management layers. Develop a full cost-accounting system that would track the amount of time employees spend on each investigation and in training. Develop mechanisms that will facilitate communications from staff-level employees to senior management, including consideration of contracting out a confidential employee survey to obtain employee feedback on management initiatives. To enhance the efficiency of the report development and recommendation close-out processes, we recommend that the Chairman take the following two actions: Identify better practices in the agency and apply them to all modes. 
Consider such things as using project managers or deputy investigators-in-charge in all modes, using incentives to encourage performance in report development, and examining the layers of review to find ways to streamline the process, such as eliminating some levels of review and using concurrent reviews as appropriate. Improve the efficiency of the review process for changing the status of recommendations by computerizing the documentation and implementing concurrent reviews. To enhance the utilization of the academy and improve the ability to generate revenues that will cover academy costs, we recommend that the Chairman take the following three actions: Develop a comprehensive marketing plan for the academy. The plan should consider such things as outreach to potential users, working with USDA and GSA to market it as classroom and conference space, and conducting market research for additional curriculum development. If ethical and conflict-of-interest issues can be addressed, the plan should also consider options for allowing transportation manufacturers to conduct company-sponsored symposia and technical training at the academy facility, which would benefit NTSB investigators in keeping up with new technologies. In addition, the plan should consider the feasibility of subleasing a portion of the academy space. Develop core investigator curriculum for each mode and maximize the delivery of that training at the academy. Conduct a study to determine the costs and feasibility of moving certain functions from headquarters to the academy facility in preparation for the renegotiation of the headquarters lease, which expires in 2011. We obtained comments on a draft of this testimony from NTSB. NTSB’s Managing Director concurred with our recommendations and provided clarifying comments and technical corrections, which we incorporated as appropriate. 
In addition, NTSB commented that the draft did not sufficiently distinguish improvements that have been made over the past year. We revised the testimony to more clearly distinguish those actions. To determine the extent to which NTSB is following leading practices in selected management areas, we reviewed past GAO work on leading management practices in the areas of strategic planning, performance management, human capital management, financial management, and communications. We interviewed NTSB board members, senior officials, managers, investigators, and writer editors regarding their experience with those practices at NTSB, and their perceptions of the effectiveness of those practices. We also determined NTSB’s response to recommendations made by the DOT IG. We reviewed NTSB documents, including its strategic, staffing, and performance management plans; management advisory e-mail; information regarding current staffing levels; and employees’ training plans for 2006. To determine the extent to which NTSB is developing accident investigation reports and closing safety recommendations in an efficient manner, we interviewed NTSB investigators, writer editors, managers, and senior officials regarding the investigative process and their role in it. We randomly selected 15 of the 210 investigators and 8 writer editors evenly across the 4 modal offices. The views expressed are those of these particular individuals and are not representative of all NTSB investigators and writer editors. We reviewed policy guidance on the investigative process and the level of current and past investigation activity. We examined data on recommendations acceptance rates and close-out status from NTSB’s recommendation database, and we determined that the data were sufficiently reliable for the objectives of this review. 
Additionally, we reviewed studies done by the RAND Corporation and Booz Allen Hamilton that examined NTSB’s investigation process and determined the extent to which the agency had implemented their recommendations. To determine the extent to which NTSB is generating sufficient revenues to cover costs at its academy, we reviewed financial data on NTSB’s academy, including the revenues and expenses for fiscal years 2004 and 2005. We reviewed the course curriculum of the academy, and compared it with classes offered by DOT’s Transportation Safety Institute, Embry-Riddle Aeronautical University, the University of Southern California, and the Southern California Safety Institute. We examined data on the student makeup of academy classes and analyzed data on the preparatory and teaching time used by NTSB investigators who taught at the academy. We interviewed NTSB investigators, writer editors, and managers and senior officials at DOT’s modal administrations regarding their current and planned use of the academy. Finally, we examined the lease for the academy to determine how NTSB may utilize the space. We conducted our review from December 2005 to May 2006 in accordance with generally accepted government auditing standards. For further information on this testimony, please contact Dr. Gerald Dillingham at (202) 512-2834 or by e-mail at dillinghamg@gao.gov. Individuals making key contributions to this testimony include Teresa Spisak, Colin Fallon, Eric Fielding, Tom Keightley, Maren McAvoy, Josh Ormond, and Jena Whitley. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
The National Transportation Safety Board (NTSB) is a relatively small agency that plays a vital role in transportation safety and has a worldwide reputation for investigating accidents. With a staff of about 400 and a budget of $76.7 million in fiscal year 2006, NTSB investigates all civil aviation accidents in the United States and significant accidents in the railroad, highway, marine, and pipeline modes, and issues safety recommendations to address issues identified during accident investigations. To support its mission, NTSB built a training academy, which opened in 2003 and provides training to NTSB investigators and others. It is important that NTSB use its resources efficiently to carry out its mission and maintain its preeminence. This testimony, based on ongoing work for this committee, addresses the extent to which NTSB follows leading practices in selected management areas, addresses challenges in completing accident investigations and closing safety recommendations, and generates sufficient revenues to cover costs at its academy. NTSB has recently made progress in following leading management practices, but overall has a mixed record. For example, NTSB has improved its financial management by hiring a Chief Financial Officer and putting controls on its purchasing activities, which should address past problems with unapproved purchases. However, NTSB lacks a full cost-accounting system, which would inform managers of the resources spent on individual investigations and provide data to balance office workload. NTSB has also begun to develop a performance management system that should eventually link each individual's performance to the agency's strategic goals and objectives. However, the performance management system will not be fully functional until NTSB has developed a strategic plan with results-oriented goals and objectives and specific strategies for achieving them, which are lacking in the current strategic plan. 
Other areas, such as human capital and communications, partially follow leading practices. While NTSB is accomplishing its accident investigation mission, it faces challenges that affect the efficiency of the report production and recommendation close-out processes. NTSB routinely takes longer than 2 years to complete major investigations. Several factors may affect the length of report production, including several revisions of draft reports through multiple layers of the organization. In addition, the processes for federal transportation agencies to implement NTSB's safety recommendations and for NTSB to change the status of recommendations are lengthy, paper-based, and labor intensive. While Department of Transportation officials have been working with NTSB to find acceptable means of implementing its recommendations, they cite the lengthy rule-making process as a challenge to speedy implementation. For fiscal years 2004 and 2005, NTSB's academy did not generate sufficient revenues to cover the costs of providing training. As a result, those portions of the academy's costs that were not covered by the revenues from tuition and other sources--approximately $6.3 million in fiscal year 2004 and $3.9 million in fiscal year 2005--were offset by general appropriations to the agency. While NTSB has taken action to generate revenue from other sources, such as renting academy space for conferences, it does not have a marketing plan that seeks to optimize opportunities for additional revenues at the academy. 
The Stafford Act, as amended, outlines the federal government’s role during disaster response and recovery when the President declares a major disaster after a governor or chief executive of an affected tribal government finds that effective response is beyond the capabilities of the state, tribal, and local governments. The Stafford Act defines a “major disaster” as any natural catastrophe (including any hurricane, tornado, storm, high water, wind-driven water, tidal wave, tsunami, earthquake, volcanic eruption, landslide, mudslide, snowstorm, or drought), or, regardless of cause, any fire, flood, or explosion, in any part of the United States, which the President determines causes damage of sufficient severity and magnitude to warrant major disaster assistance to supplement the efforts and available resources of states, local governments, and disaster relief organizations in alleviating damage, loss, hardship, or suffering. If the President declares a major disaster, the declaration can trigger a variety of federal assistance programs through which the federal government provides disaster assistance to state, tribal, territorial, and local governments, as well as certain nonprofit organizations and individuals. In addition to its central role in recommending to the President whether to declare a disaster, FEMA is the primary federal agency responsible for mitigating, responding to, and recovering from disasters, both natural and man-made, and has responsibility for coordinating the assistance provided under the provisions of the Stafford Act. The DRF is the primary source of federal disaster assistance for state and local governments when a disaster is declared. The DRF is appropriated no-year funding, which allows FEMA to fund, direct, coordinate, and manage response and recovery efforts—including certain efforts by other federal agencies and state and local governments, among others—associated with domestic disasters and emergencies. 
FEMA tracks DRF obligations according to the following six categories: Public Assistance. The Public Assistance Program provides financial assistance to state, tribal, territorial, and local governments for debris removal; emergency protective measures; and the repair, replacement, or restoration of disaster-damaged, publicly owned facilities and the facilities of certain private nonprofit organizations that provide services otherwise performed by a government agency. Individual Assistance. The Individual Assistance Program provides financial assistance directly to disaster victims for the necessary expenses and serious needs that cannot be met through insurance or low-interest Small Business Administration loans. For example, FEMA may provide temporary housing assistance, counseling, unemployment compensation, or medical expenses incurred by individuals as a result of a disaster. Hazard Mitigation. The Hazard Mitigation Grant Program provides funds to state, tribal, territorial, and local governments, among other entities, to assist communities in implementing long-term measures to help reduce the potential risk of future damages to facilities. Fire Management Assistance. The Fire Management Assistance Grant Program makes fire management assistance available to state, local and tribal governments for the mitigation, management, and control of fires on publicly or privately owned forests or grasslands which threaten such destruction as would constitute a major disaster. Mission Assignment. The Stafford Act also authorizes FEMA to issue work orders—i.e., mission assignments—with or without reimbursement, that direct another federal agency to utilize its authorities and the resources granted to it under federal law in support of direct assistance to state, local, tribal, and territorial governments during emergency and major disaster declarations. FEMA may use the DRF to reimburse other federal agencies for eligible costs incurred under a mission assignment. 
Administration. FEMA also obligates funds from the DRF to cover its administrative costs—that is, costs that support the delivery of disaster assistance. FEMA’s administrative costs include the salary and travel costs for the disaster workforce, rent and security expenses associated with field operation locations, and supplies and information technology for field operation staff, among other things. In March 2011, the White House issued Presidential Policy Directive 8: National Preparedness (PPD-8) with the goal of strengthening the security and resilience of the nation through systematic preparation for the threats that pose the greatest risk. PPD-8 called for the development of a series of policy and planning documents in five mission areas—Prevention, Protection, Mitigation, Response, and Recovery—to explain and guide the nation’s approach for ensuring and enhancing national preparedness for a wide range of threats and hazards. These National Planning Frameworks serve as the basis for mission area activities within FEMA, throughout the federal government, and at the state and local levels. Among other functions, the frameworks describe the coordinating structures and alignment of key roles and responsibilities to federal agencies and are integrated to ensure interoperability across all mission areas in mitigating, responding to, and recovering from a wide range of both Stafford Act and non-Stafford Act disasters and emergencies. The following three frameworks are relevant to this report: The National Mitigation Framework establishes a common platform and forum for coordinating and addressing how the nation manages risk through mitigation capabilities. Mitigation reduces the impact of disasters by supporting protection and prevention activities, easing response, and speeding recovery to create better prepared and more resilient communities. 
This framework addresses how the nation will develop, employ, and coordinate core mitigation capabilities to reduce loss of life and property by lessening the impact of disasters. Mitigation activities are not limited to eligible activities within the Stafford Act. The National Response Framework (NRF) describes how the nation responds to all types of disasters and emergencies. The NRF is the overarching interagency response coordination structure for both Stafford Act and non-Stafford Act incidents, and describes specific authorities and best practices for managing incidents ranging in scope from local to large-scale, among other things. The NRF identifies 14 Emergency Support Functions (ESF) that serve as the federal government’s primary coordinating structure for building, sustaining, and delivering response capabilities. ESF annexes to the NRF describe the federal coordinating structures that group resources and capabilities into functional areas that are most frequently needed in a national response. Each ESF consists of a federal department or agency designated as the coordinating agency along with a number of primary and support agencies. The National Disaster Recovery Framework (NDRF) establishes a comprehensive structure to enhance the nation’s ability to work together, both before and after a disaster, to effectively deliver recovery assistance through the coordinated efforts of federal, state, local, and tribal governments and nongovernmental organizations. While the NDRF provides the overarching interagency coordination structure for the recovery phase of incidents under the Stafford Act, its structures and procedures apply equally to non-Stafford Act incidents, such as federal response to an oil spill of national significance. The NDRF identifies six Recovery Support Functions (RSF) as the mechanisms through which federal agencies are to provide assistance and support to state and local communities, both before and after a disaster. 
These RSFs are intended to, among other things, facilitate problem solving; improve access to resources; ensure more effective and efficient use of federal, state, nongovernmental and private sector funds; and foster coordination among state and federal agencies and nongovernmental entities. Similar to the ESFs, each RSF consists of a federal department or agency designated as the coordinating agency along with a number of primary and support agencies. While FEMA coordinates assistance for incidents in which federal assistance is provided under the Stafford Act and the National Planning Frameworks generally apply to federal roles and responsibilities for both Stafford and non-Stafford Act incidents, federal response or assistance to a disaster event may also be led or coordinated by various federal departments and agencies consistent with their own authorities. Specifically, independent of the Stafford Act, the heads of some federal departments and agencies—such as the Administrator of the Small Business Administration (SBA) and the Secretaries of Agriculture and Commerce—also have separate statutory authority to declare a disaster under certain circumstances for the purpose of providing assistance. For example, the Secretary of Agriculture is authorized to designate counties as disaster areas to make emergency loans to agricultural producers suffering losses in that county or other contiguous counties as was done for the recent California drought. Following a request from a state governor, the SBA Administrator can separately make a physical disaster declaration based on the occurrence of at least a minimum amount of damage to buildings, machinery, equipment, inventory, homes and other property, which enables SBA to make disaster loans available to homeowners, renters, businesses of all sizes, and private nonprofits. 
During fiscal years 2005 through 2014, the federal government obligated at least $277.6 billion across 17 federal departments and agencies for disaster assistance programs and activities. This estimate constitutes total obligations identifiable to disaster activities across three categories of disaster assistance: the DRF, disaster-specific programs and activities identified across the 17 federal departments and agencies, and disaster- applicable programs and activities identified across the 17 federal departments and agencies. This estimate represents a minimum and not the total amount of disaster assistance spending by the federal government during this period because some federal departments and agencies reported that relevant obligations and expenditures for some disaster-applicable programs and activities during this time frame are not separately tracked or are not available. For example, some disaster assistance programs or activities are not separately tracked because spending related to these activities is generally subsumed by a department’s general operating budget or mission-related costs. Figure 1 depicts the three categories of federal disaster assistance and the estimated total obligations for each category. Including administrative costs, FEMA reported obligating approximately $104.5 billion from the DRF for disaster assistance during fiscal years 2005 through 2014. Table 1 identifies the six DRF categories and details total obligations for those categories. Federal departments may play significant roles in response activities depending on the nature and size of an incident. Many of the arrangements by which departments participate are defined in the ESF annexes and coordinated through pre-scripted mission assignments in a Stafford Act response. For example, pre-scripted mission assignments for the Department of Defense support include emergency route clearance, airspace control, and deployable temporary medical facilities, among other things. 
Table 2 identifies the 17 federal departments and agencies within our scope and details total obligations that FEMA provided each department in DRF Mission Assignment reimbursements. We provide further details on FEMA’s DRF Mission Assignments in appendix I. Seventeen federal departments and agencies collectively obligated approximately $132.2 billion for disaster assistance from disaster-specific programs and activities during fiscal years 2005 through 2014. Table 3 identifies the 17 federal departments and agencies within our scope and total obligations for those programs and activities. Examples of disaster- specific programs include FEMA’s National Flood Insurance program, which provides for the sale of insurance against flood damages, and the Department of Housing and Urban Development’s Community Development Block Grants - Disaster Recovery Program, which provides grants to help cities, counties, parishes, and states recover from presidentially declared disasters. We provide further details on each department’s disaster-specific programs and activities in appendix II. Seventeen federal departments and agencies collectively obligated approximately $40.9 billion for disaster assistance from disaster- applicable programs and activities during fiscal years 2005 through 2014. Table 4 identifies the 17 federal departments and agencies within our scope and total obligations for those programs and activities. Examples of disaster-applicable programs include the Department of Agriculture’s Federal Crop Insurance program, which provides disaster applicable indemnity payments to American farmers and ranchers for significant losses due to adverse weather such as drought, among other causes, and the Department of Health and Human Services’ National Bioterrorism Hospital Preparedness Program, which provides funding to public health departments in states and cities to save lives during emergencies that exceed day-to-day capacity of the health and emergency response systems. 
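The $277.6 billion minimum estimate is the straight sum of the three categories reported above: DRF obligations ($104.5 billion, including administrative costs), disaster-specific obligations ($132.2 billion), and disaster-applicable obligations ($40.9 billion). Mission assignment reimbursements to the 17 departments and agencies are paid out of the DRF, so they appear only in the DRF total and are not double counted. A quick arithmetic check of the reported figures:

```python
# Reported category totals, fiscal years 2005-2014, in billions of dollars.
drf = 104.5                 # FEMA Disaster Relief Fund, including administrative costs
disaster_specific = 132.2   # disaster-specific programs across the 17 departments and agencies
disaster_applicable = 40.9  # disaster-applicable programs across the 17 departments and agencies

total = drf + disaster_specific + disaster_applicable
print(f"Minimum identifiable federal disaster assistance: ${total:.1f} billion")  # -> $277.6 billion
```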
We provide further details on each department’s disaster-applicable programs and activities in appendix II. Our estimate of $277.6 billion in obligations for disaster assistance programs and activities represents a minimum and not the total amount of disaster assistance spending by the federal government during fiscal years 2005 through 2014 because some federal departments and agencies reported that relevant obligations and expenditures for some programs and activities during this time frame are not separately tracked or are not available. Specifically, more than half of the 17 federal departments and agencies in our scope reported that obligations for certain disaster assistance programs or activities during fiscal years 2005 through 2014 are not separately tracked or are not available, for various reasons. At least 5 federal departments and agencies reported that some disaster assistance programs or activities are not separately tracked because spending related to these activities is generally subsumed by a department’s general operating budget or mission-related costs. For example, U.S. Coast Guard officials stated that most of the agency’s disaster-related costs are associated with maintaining a constant state of readiness to immediately respond to disaster and emergency incidents, which is funded from the U.S. Coast Guard search and rescue appropriation and is not separately tracked. Similarly, the Army has deployed personnel in anticipation of a possible disaster event, even when FEMA has not requested the support. If a disaster does not occur or the activity does not result in a FEMA mission assignment, the Army will not be reimbursed for prepositioning personnel or assets in anticipation of an event and therefore may categorize the expenditure as training in the event of a disaster. 
Another 4 federal departments and agencies reported that obligations and expenditures specific to disaster assistance activities are not tracked or cannot be reliably estimated because there is no requirement for state or other recipients of the financial support to indicate whether or how much of the funding or assistance is used for disasters. We provided a draft of this product to all 17 federal departments and agencies included in this review for comment. The Department of Defense, Department of Education, Department of Energy, Department of Health and Human Services, Department of Homeland Security, Department of Housing and Urban Development, Department of Justice, Department of Labor, Department of Veterans Affairs, and the Small Business Administration provided technical comments, which we incorporated as appropriate. In its agency comments, the Department of Veterans Affairs also provided summary information about an additional disaster assistance program that was not previously identified in the Department’s responses to our data collection instrument. Due to insufficient data received about the Comprehensive Emergency Management Program, we were not able to include this program in our final report. We are sending copies of this report to the appropriate congressional committees and the Secretary or equivalent of each of the 17 federal departments and agencies included in this review. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix III. This appendix presents detailed information on FEMA DRF obligations and expenditures during fiscal years 2005 through 2014. 
Specifically, Table 5 provides detailed information on FEMA DRF obligations and expenditures for each of the six DRF categories—Public Assistance, Individual Assistance, Mission Assignment, Hazard Mitigation, Fire Management Assistance, and Administration—for each fiscal year of our review period; Tables 6 and 7 provide detailed information on FEMA’s Public Assistance Division programs and activities funded by the DRF and the obligations and expenditures, where available, for these programs and activities during fiscal years 2005 through 2014; Tables 8 and 9 provide detailed information on FEMA’s Individual Assistance programs and activities funded by the DRF and the obligations and expenditures, where available, for these programs and activities during fiscal years 2005 through 2014; and Table 10 provides detailed information on FEMA Mission Assignment obligations and expenditures funded by the DRF for each federal department and for each fiscal year of our review period. Information and data provided in the first five tables are based on the Department of Homeland Security’s response to our data collection instrument and related documentation. Information and data on Mission Assignment obligations and expenditures for each federal department during fiscal years 2005 through 2014 were obtained directly from FEMA. 
This appendix presents detailed information on (1) federal disaster assistance programs and activities—specifically, disaster-specific and disaster-applicable programs or activities that are or can be used to mitigate (including pre-disaster), respond to, or recover from a disaster incident—and (2) obligations and expenditures, where available, for these programs and activities during fiscal years 2005 through 2014, for each of the 17 federal departments and agencies reviewed. Specifically, each departmental overview provides detailed information organized into the following five tables: Disaster-Specific Programs and Activities during Fiscal Years 2005 through 2014; Disaster-Specific Obligations and Expenditures during Fiscal Years 2005 through 2014; Disaster-Applicable Programs and Activities during Fiscal Years 2005 through 2014; Disaster-Applicable Obligations and Expenditures during Fiscal Years 2005 through 2014; and Mission Assignment Obligations and Expenditures during Fiscal Years 2005 through 2014. An obligation is a definite commitment that creates a legal liability of the government for the payment of goods and services ordered or received. An expenditure is an amount paid by federal agencies, by cash or cash equivalent, during the fiscal year to liquidate government obligations. The 17 federal departments and agencies we selected include: Department of Agriculture, Department of Commerce, Department of Defense, Department of Education, Department of Energy, Department of Health and Human Services, Department of Homeland Security, Department of Housing and Urban Development, Department of the Interior, Department of Justice, Department of Labor, Department of Transportation, Department of the Treasury, Department of Veterans Affairs, Environmental Protection Agency, General Services Administration, and Small Business Administration. Data on Mission Assignment obligations and expenditures during fiscal years 2005 through 2014 were obtained directly from FEMA. 
The Department of Agriculture (USDA) reported obligating approximately $50.2 billion for disaster assistance during fiscal years 2005 through 2014. USDA reported that its disaster-specific assistance programs and activities (described in table 11) obligated approximately $20.9 billion during fiscal years 2005 through 2014 (as shown in table 12). USDA reported that its disaster-applicable assistance programs and activities (described in table 13) obligated approximately $29.3 billion during fiscal years 2005 through 2014 (as shown in table 14). The above amounts exclude an additional $275 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to USDA for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 15. The Department of Commerce (DOC) reported obligating over $2.4 billion for disaster assistance during fiscal years 2005 through 2014. DOC reported that its disaster-specific assistance programs and activities (described in table 16) obligated approximately $840 million during fiscal years 2005 through 2014 (as shown in table 17). DOC reported that its disaster-applicable assistance programs and activities (described in table 18) obligated over $1.6 billion during fiscal years 2005 through 2014 (as shown in table 19). The above amounts exclude an additional $6 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOC for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 20. The Department of Defense (DOD) reported obligating approximately $10.8 billion for disaster assistance during fiscal years 2005 through 2014. DOD reported that its disaster-specific assistance programs and activities (described in table 21) obligated approximately $10.8 billion during fiscal years 2005 through 2014 (as shown in table 22). 
DOD reported that its disaster-applicable assistance programs and activities (described in table 23) obligated approximately $3 million during fiscal years 2005 through 2014 (as shown in table 24). The above amounts exclude an additional $5.2 billion that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOD for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 25. The Department of Education (ED) reported obligating approximately $247 million for disaster assistance during fiscal years 2005 through 2014. ED reported that its disaster-specific assistance programs and activities (described in table 26) obligated approximately $247 million during fiscal years 2005 through 2014 (as shown in table 27). ED did not report any disaster-applicable programs and activities during fiscal years 2005 through 2014. The above amount excludes an additional $27,000 that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to ED for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 28. The Department of Energy (DOE) reported obligating approximately $48 million for disaster assistance during fiscal years 2005 through 2014. DOE reported that one disaster-applicable assistance program (described in table 29) obligated approximately $48 million during fiscal years 2005 through 2014 (as shown in table 30). The above amount excludes an additional $5 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOE for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 31. The Department of Health and Human Services (HHS) reported obligating approximately $8.8 billion for disaster assistance during fiscal years 2005 through 2014. 
HHS reported that its disaster-specific assistance programs and activities (described in table 32) obligated over $3.8 billion during fiscal years 2005 through 2014 (as shown in table 33). HHS reported that its disaster-applicable assistance programs and activities (described in table 34) obligated approximately $5 billion during fiscal years 2005 through 2014 (as shown in table 35). The above amounts exclude an additional $173 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to HHS for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 36. The Department of Homeland Security (DHS) reported obligating over $41 billion for disaster assistance during fiscal years 2005 through 2014 from both disaster-specific and disaster-applicable programs and activities that were funded from sources other than FEMA’s Disaster Relief Fund (DRF). DHS reported that its disaster-specific assistance programs and activities (described in table 37) obligated approximately $39 billion during fiscal years 2005 through 2014 (as shown in table 38). DHS reported that its disaster-applicable assistance programs and activities (described in table 39) obligated over $2 billion during fiscal years 2005 through 2014 (as shown in table 40). The above amounts exclude an additional $525 million that DHS’s FEMA reported obligating from the DRF in reimbursements to DHS for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 41. The Department of Housing and Urban Development (HUD) reported obligating approximately $30.6 billion for disaster assistance during fiscal years 2005 through 2014. HUD reported that its disaster-specific assistance programs and activities (described in table 42) obligated approximately $30.6 billion during fiscal years 2005 through 2014 (as shown in table 43). 
HUD reported that its disaster-applicable assistance programs and activities (described in table 44) obligated approximately $8 million during fiscal years 2005 through 2014 (as shown in table 45). The above amounts exclude an additional $45 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to HUD for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 46. The Department of the Interior (DOI) reported obligating approximately $3.5 billion for disaster assistance during fiscal years 2005 through 2014. DOI reported that its disaster-specific assistance programs and activities (described in table 47) obligated approximately $1.9 billion during fiscal years 2005 through 2014 (as shown in table 48). DOI reported that its disaster-applicable assistance programs and activities (described in table 49) obligated approximately $1.6 billion during fiscal years 2005 through 2014 (as shown in table 50). The above amounts exclude an additional $12 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOI for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 51. The Department of Justice (DOJ) reported obligating approximately $50 million for disaster assistance during fiscal years 2005 through 2014. DOJ did not report any disaster-specific programs and activities during fiscal years 2005 through 2014. DOJ reported that its disaster-applicable assistance programs and activities (described in table 52) obligated approximately $50 million during fiscal years 2005 through 2014 (as shown in table 53). 
The above amount excludes an additional $26 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOJ for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 54. The Department of Labor (DOL) reported obligating approximately $961 million for disaster assistance during fiscal years 2005 through 2014. DOL reported that its disaster-specific assistance programs and activities (described in table 55) obligated approximately $6.5 million during fiscal years 2005 through 2014 (as shown in table 56). DOL reported that its disaster-applicable assistance programs and activities (described in table 57) obligated approximately $954 million during fiscal years 2005 through 2014 (as shown in table 58). The above amounts exclude an additional $6 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOL for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 59. The Department of Transportation (DOT) reported obligating approximately $15.6 billion for disaster assistance during fiscal years 2005 through 2014. DOT reported that its disaster-specific assistance programs and activities (described in table 60) obligated approximately $15.5 billion during fiscal years 2005 through 2014 (as shown in table 61). DOT reported that its disaster-applicable assistance programs and activities (described in table 62) obligated approximately $138 million during fiscal years 2005 through 2014 (as shown in table 63). The above amounts exclude an additional $502 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to DOT for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 64. 
The Department of the Treasury (Treasury) reported obligating approximately $13 million for disaster assistance during fiscal years 2005 through 2014. The Internal Revenue Service reported that its disaster-specific assistance programs and activities (described in table 65) obligated approximately $13 million during fiscal years 2005 through 2014 (as shown in table 66). Treasury reported providing disaster assistance from disaster-applicable programs and activities during fiscal years 2005 through 2014, described in table 67. However, Treasury could not provide separate obligations and expenditures data because spending for these disaster-applicable activities is subsumed within Treasury's general operating budget and is not separately tracked or accounted for. The above amounts exclude an additional $2 million that DHS's FEMA reported obligating from the Disaster Relief Fund in reimbursements to Treasury for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 68. The Department of Veterans Affairs (VA) reported obligating approximately $59 million for disaster assistance during fiscal years 2005 through 2014. VA reported that its disaster-specific assistance programs and activities (described in table 69) obligated $124,000 during fiscal years 2005 through 2014 (as shown in table 70). VA reported that its disaster-applicable assistance programs and activities (described in table 71) obligated approximately $59 million during fiscal years 2005 through 2014 (as shown in table 72). The above amounts exclude an additional $3 million that DHS's FEMA reported obligating from the Disaster Relief Fund in reimbursements to VA for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 73. 
The Environmental Protection Agency (EPA) reported obligating approximately $3.6 billion for disaster assistance during fiscal years 2005 through 2014. EPA reported that its disaster-specific assistance programs and activities (described in table 74) obligated approximately $3.6 billion during fiscal years 2005 through 2014 (as shown in table 75). EPA also reported providing disaster assistance from disaster-applicable programs and activities during fiscal years 2005 through 2014, described in table 76. However, EPA does not separately track disaster-related obligations or expenditures data for its disaster-applicable programs and activities. For example, while both the Clean Water and Drinking Water State Revolving Funds have billions in obligations and expenditures each year, there is no requirement for recipients to indicate whether any loans are disaster-related. The above amounts exclude an additional $329 million that DHS's FEMA reported obligating from the Disaster Relief Fund in reimbursements to EPA for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 77. The General Services Administration (GSA) reported obligating approximately $19 million for disaster assistance during fiscal years 2005 through 2014. GSA reported that its disaster-specific assistance programs and activities (described in table 78) obligated approximately $19 million during fiscal years 2005 through 2014 (as shown in table 79). GSA also reported providing disaster assistance from disaster-applicable programs and activities during fiscal years 2005 through 2014, described in table 80. However, GSA did not provide separate obligations and expenditures data because all spending related to these disaster-applicable activities is funded by other agencies. 
The above amount excludes an additional $67 million that DHS’s FEMA reported obligating from the Disaster Relief Fund in reimbursements to GSA for eligible disaster assistance costs incurred under a mission assignment during fiscal years 2005 through 2014, as shown in table 81. The Small Business Administration (SBA) reported obligating over $4.9 billion for disaster assistance during fiscal years 2005 through 2014. SBA reported that its disaster-specific assistance programs and activities (described in table 82) obligated approximately $4.9 billion during fiscal years 2005 through 2014 (as shown in table 83). SBA reported that its disaster-applicable assistance programs and activities (described in table 84) obligated approximately $29 million during fiscal years 2005 through 2014 (as shown in table 85). DHS’s FEMA did not obligate any funds from the Disaster Relief Fund to SBA for mission assignments during fiscal years 2005 through 2014. In addition to the contact named above, Kathryn Godfrey (Assistant Director), Hugh Paquette (Analyst-in-Charge), Carissa Bryant, Eli Harpst, Eric Hauswirth, Tracey King, Amanda Miller, Heidi Nielson, Ashley Rawson, and Aaron Safer-Lichtenstein made key contributions to this report. Disaster Recovery: FEMA Needs to Assess Its Effectiveness in Implementing the National Disaster Recovery Framework. GAO-16-476. Washington, D.C.: May 26, 2016. Disaster Response: FEMA Has Made Progress Implementing Key Programs, but Opportunities for Improvement Exist. GAO-16-87. Washington, D.C.: February 5, 2016. Wildland Fire Management: Agencies Have Made Several Key Changes but Could Benefit from More Information about Effectiveness. GAO-15-772. Washington, D.C.: September 16, 2015. Disaster Relief: Agencies Need to Improve Policies and Procedures for Estimating Improper Payments. GAO-15-209. Washington, D.C.: February 27, 2015. 
Hurricane Sandy: An Investment Strategy Could Help the Federal Government Enhance National Resilience for Future Disasters. GAO-15-515. Washington, D.C.: July 30, 2015. Budgeting for Disasters: Approaches to Budgeting for Disasters in Selected States. GAO-15-424. Washington, D.C.: March 26, 2015. Federal Emergency Management Agency: Opportunities Exist to Strengthen Oversight of Administrative Costs for Major Disasters. GAO-15-65. Washington, D.C.: December 17, 2014. Hurricane Sandy: FEMA Has Improved Disaster Aid Verification but Could Act to Further Limit Improper Assistance. GAO-15-15. Washington, D.C.: December 12, 2014. Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20. Washington, D.C.: December 4, 2014. Climate Change: Better Management of Exposure to Potential Future Losses Is Needed for Federal Flood and Crop Insurance. GAO-15-28. Washington, D.C.: October 29, 2014. Disaster Resilience: Actions Are Underway, but Federal Fiscal Exposure Highlights the Need for Continued Attention to Longstanding Challenges. GAO-14-603T. Washington, D.C.: May 14, 2014. National Preparedness: Actions Taken by FEMA to Implement Select Provisions of the Post-Katrina Emergency Management Reform Act of 2006. GAO-14-99R. Washington, D.C.: November 26, 2013. Hurricane Sandy Relief: Improved Guidance on Designing Internal Control Plans Could Enhance Oversight of Disaster Funding. GAO-14-58. Washington, D.C.: November 26, 2013. Civil Support: Actions Are Needed to Improve DOD’s Planning for a Complex Catastrophe. GAO-13-763. Washington, D.C.: September 30, 2013. Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction’s Capability to Respond and Recover on Its Own. GAO-12-838. Washington, D.C.: September 12, 2012. 
Disaster Recovery: Federal Contracting in the Aftermath of Hurricanes Katrina and Rita. GAO-11-942T. Washington, D.C.: September 15, 2011. Homeland Security: Actions Needed to Improve Response to Potential Terrorist Attacks and Natural Disasters Affecting Food and Agriculture. GAO-11-652. Washington, D.C.: August 19, 2011. Measuring Disaster Preparedness: FEMA Has Made Limited Progress in Assessing National Capabilities. GAO-11-260T. Washington, D.C.: March 17, 2011. Disaster Response: Criteria for Developing and Validating Effective Response Plans. GAO-10-969T. Washington, D.C.: September 22, 2010. Homeland Defense: DOD Needs to Take Actions to Enhance Interagency Coordination for Its Homeland Defense and Civil Support Missions. GAO-10-364. Washington D.C.: March 30, 2010. Disaster Recovery: Experiences from Past Disasters Offer Insights for Effective Collaboration after Catastrophic Events. GAO-09-811. Washington, D.C.: July 31, 2009. National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. | Each year, the federal government obligates billions of dollars through programs and activities that provide assistance to state and local governments, tribes, and certain nonprofit organizations and individuals that have suffered injury or damages from major disaster or emergency incidents, such as hurricanes, tornados, or fires. While FEMA tracks DRF spending related to major disasters and emergencies declared under the Robert T. Stafford Disaster Relief and Emergency Assistance Act, there has not been a systematic effort to account for federal obligations for disaster assistance outside of the DRF. 
The Joint Explanatory Statement accompanying the Consolidated and Further Continuing Appropriations Act, 2015, includes a provision for GAO to report on disaster assistance expenditures by the federal government. This report identifies federal disaster assistance programs and activities across 17 federal departments and agencies and the obligations for these programs and activities, where available, during fiscal years 2005 through 2014. To conduct this work, GAO selected 17 federal departments and agencies identified in the National Planning Frameworks as having responsibility for leading or coordinating federal efforts to mitigate, respond to, and recover from domestic disaster incidents. GAO analyzed documents identifying and describing disaster assistance programs and activities, interviewed federal officials, and distributed a data collection instrument to obtain, among other things, obligation amounts associated with each program or activity identified. During fiscal years 2005 through 2014, the federal government obligated at least $277.6 billion across 17 federal departments and agencies for disaster assistance programs and activities. This estimate constitutes total obligations identifiable to disaster activities across three categories: the Federal Emergency Management Agency's (FEMA) Disaster Relief Fund (DRF), disaster-specific programs and activities identified across the 17 departments and agencies, and disaster-applicable programs and activities across the 17 departments and agencies (see figure). The estimate of $277.6 billion represents a minimum and not the total amount of disaster assistance spending by the federal government during fiscal years 2005 through 2014 because relevant obligations for some programs and activities are not separately tracked or are not available. 
Specifically, GAO found that more than half of the 17 departments and agencies in the scope of this review reported that obligations for certain disaster assistance programs or activities during this time frame are not separately tracked or are not available, for various reasons. For example, 5 departments and agencies reported that some disaster assistance programs or activities are not separately tracked because spending related to these activities is generally subsumed by a department's general operating budget or mission-related costs. Another 4 departments and agencies reported that obligations and expenditures specific to disaster assistance activities are not tracked or cannot be reliably estimated because there is no requirement for state or other recipients of the financial support to indicate whether or how much of the funding or assistance is used for disasters. |
The responsibility for building and maintaining highways in the United States rests with state departments of transportation in each of the 50 states, the District of Columbia, and Puerto Rico. In addition, local governments finance road construction through sources such as property and sales taxes. In 2004, state governments took in about $104 billion from various sources to finance their highway capital and maintenance programs—44 percent of these revenues came from state fuel taxes and other state user fees, and 28 percent came from federal grants. Sources of state highway revenues in 2004 are shown in figure 1; the "other" category in that figure comprises local contributions, bond proceeds, and miscellaneous revenues. FHWA administers federal grant funds through the federal-aid highway program and distributes highway funds to the states through annual apportionments established by statutory formulas. Once FHWA apportions these funds, they are available to be obligated for the construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes and for other purposes authorized in law. Within these parameters, responsibility for planning and selecting projects generally rests with state departments of transportation (DOT) and with metropolitan planning organizations, and these states and planning organizations have considerable discretion in selecting the specific highway projects that will receive federal funds. For example, section 145 of title 23 of the United States Code describes the federal-aid highway program as a federally assisted state program and provides that the federal authorization of funds, as well as the availability of federal funds for expenditure, shall not infringe on the states' sovereign right to determine the projects to be federally financed. About 5 percent of the highway revenues to the states in 2004 came from tolls. 
In 2005, the United States had about 5,000 miles of toll facilities in operation or under construction, including about 2,800 miles, or 6 percent, of the Interstate Highway System, according to FHWA. Tolling of roads began in the late 1700s. From 1792 through 1845, an estimated 1,562 privately owned turnpike companies managed and charged tolls on about 15,000 miles of turnpikes throughout the country. Between 1916 and 1921, the number of automobiles in the United States almost tripled, from 3.5 million to 9 million, and as automobile use increased, pressure grew for more government involvement in financing the construction and maintenance of public roads. In 1919, Oregon became the first state to impose a motor fuel tax to finance roadway construction. In 1916, the Federal Aid Road Act provided states with federal funds to finance up to 50 percent of the cost of roads and bridges constructed to provide mail service. This act and its successor, the 1921 Federal Highway Act, prohibited tolling on roads financed with federal funds. In the 1930s and 1940s, President Roosevelt promoted the idea of a series of interconnected toll roads crossing the United States, the beginning of the idea of an interstate highway system. Then, between 1940 and 1952, 5 states opened such highways, which they financed through tolls. The first of these highways, the Pennsylvania Turnpike, was completed in 1940. During this time, about 30 states considered building toll roads, given the success of the Pennsylvania Turnpike. In 1943, Congress passed an amendment to the Federal Highway Act, directing the Commissioner of Public Roads to conduct a survey for an express highway system and report the results to the President and Congress. However, there was no determination as to how such a system would be funded. 
President Eisenhower supported a toll system financed with bonds to be paid back with toll revenues until the bonds were paid off, at which time the tolls would be removed. A committee appointed by President Eisenhower also recommended a highway program financed with bonds, but proposed that federal fuel tax revenues, instead of tolls, be used to pay back the bonds. Ultimately, the Federal-Aid Highway Act of 1956 authorized the creation of a Highway Trust Fund to collect federal fuel tax revenues and finance the construction of the Interstate Highway System on a pay-as-you-go basis. The act prohibited tolling on interstate highways and all federally assisted highways; as a consequence, states built few new toll roads while the Interstate Highway System was under construction. However, many of the toll roads built before 1956 were eventually incorporated into the Interstate Highway System, and tolling on these roads was allowed to continue. Tolling was also allowed, on a case-by-case basis under very specific conditions and with a limited federal funding share, for interstate bridges and tunnels. During the 1990s, as interstate construction wound down, states again began considering and implementing tolling. At the same time, some of the federal restrictions on the use of federal funds for tolling began to ease. The Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) liberalized some of the long-standing federal restrictions on tolling by permitting tolling for the construction, reconstruction, or rehabilitation of federally assisted non-Interstate roadways and by raising the federal share on interstate bridges and tunnels to equal the share provided for other federal-aid highway projects. 
The 1998 Transportation Equity Act for the 21st Century (TEA-21) established a new pilot program to allow the conversion of a free interstate highway, bridge, or tunnel to a toll facility if needed reconstruction or rehabilitation was possible only with the collection of tolls. The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), enacted in 2005, continued all of the previously established toll programs and added new programs. The federal tolling-related programs that have been authorized in surface transportation legislation are shown in table 1. SAFETEA-LU also created a National Surface Transportation Infrastructure Financing Commission to consider revenue sources available to all levels of government, particularly Highway Trust Fund revenues, and to consider new approaches to generating revenues for financing highways. The commission’s objective is to develop a report recommending policies to achieve revenues for the Highway Trust Fund that will meet future needs. The commission is required to produce a final report within 2 years of its first meeting. In addition to SAFETEA-LU’s new tolling provisions and enhancements to existing programs, FHWA offers an innovative credit assistance program, which can be used to develop toll roads, and an experimental program, which can be used to test innovative toll road development procedures. The Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA) permits FHWA to offer three kinds of credit assistance for nationally or regionally significant surface transportation projects: direct loans, loan guarantees, and lines of credit. Because TIFIA provides credit assistance rather than grants, states are likely to use it for infrastructure projects that can generate their own revenues through user charges, such as tolls or other dedicated funding sources. 
TIFIA credit assistance is aimed at advancing the completion of large, capital-intensive projects—such as toll roads—that otherwise might be delayed or not built at all because of their size and complexity and the financial market’s uncertainty over the timing of revenues from a project. The main goal of TIFIA is to leverage federal funds by attracting substantial private and other nonfederal investment in projects. FHWA has also encouraged experimental projects through the Special Experimental Projects 15 (SEP-15) program, which is intended to encourage the formation of public-private partnerships for projects by providing additional flexibility for states interested in experimenting with innovative ways to develop projects, according to FHWA officials. SEP-15 allows innovation and flexibility in contracting, compliance with environmental requirements, right-of-way acquisition, and project finance. In addition, the Department of Transportation’s Office of Transportation Policy is proposing a pilot program—the Open Roads Pilot Program—to explore alternatives to the motor fuel tax. Under this pilot, the Office of Transportation Policy is proposing to make funds available to up to five states to demonstrate on a large scale the viability and effectiveness of financing alternatives to the motor fuel tax. Goals of the program would be to: (1) demonstrate whether or not there are viable alternatives to the motor fuel tax that will provide necessary investment resources while simultaneously improving system performance and reducing congestion, (2) identify successful motor fuel tax substitutes that have widespread applicability to other states, and (3) provide a possible framework for future federal reauthorization proposals. As congestion increases and concerns about the sustainability of traditional roadway financing sources grow, tolling has promise as an approach to enhance mobility and to finance transportation. 
Tolls that are set to vary with the level of congestion can potentially lead to a reduction in congestion and demand for roads. Such tolls can create additional incentives for drivers to avoid driving alone in congested conditions when making their driving decisions. In response, drivers may choose to share rides, use public transportation, travel at less congested (generally off-peak) times, or travel on less congested routes, if available, to reduce their toll payments. Tolling is also consistent with the important user pays principle, can potentially better target spending for new and expanded capacity, and can potentially enhance private-sector participation and investment in major highway projects. Tolling’s promise is particularly important in light of long-term fiscal challenges and pressures on the federal budget. Tolling can be used to potentially enhance mobility by managing congestion, which is already substantial in many urban areas. Congestion impedes both passenger and freight mobility and ultimately, the nation’s economic vitality, which depends in large part on an efficient transportation system. Highway congestion for passenger and commercial vehicles traveling during peak driving periods doubled from 1982 through 2000. According to the Texas Transportation Institute, drivers in 85 urban areas experienced 3.7 billion hours of delay and wasted 2.3 billion gallons of fuel in 2003 because of traffic congestion. The Texas Transportation Institute estimated that the cost of congestion was $63.1 billion (in 2003 dollars), a fivefold increase over two decades after adjusting for inflation. On average, drivers in urban areas lost 47 hours on the road in 2003, nearly triple the delay travelers experienced on average in 1982. During this same period, congestion grew in urban areas of every size; however, very large metropolitan areas with populations of more than 3 million were most affected. (See fig. 
2 for examples of congestion growth in selected urban areas.) Freight traffic—which has doubled since 1980 and in some locations constitutes 30 percent of interstate system traffic—added to this congestion at a faster rate than passenger traffic, and FHWA projects continued growth, estimating that the volume of freight traffic on U.S. roads will increase 70 percent by 2020. A number of factors, as follows, are converging to further exacerbate highway congestion: Most population growth in the nation occurs in already congested metropolitan areas. In 2000, the U.S. Census Bureau reported that 79 percent of 281 million U.S. residents lived in metropolitan areas. Nationwide, the population is expected to increase by 54 million by 2020, and most of that growth is expected in metropolitan areas. Vehicle registrations are steadily increasing. In 2003, vehicle registrations nationwide stood at 230 million, a 17 percent increase in just 10 years. Road usage, as measured by vehicle miles traveled (VMT), grew at a steady annual rate of 2.8 percent from 1980 through 2003. For the 10-year period between 1994 and 2003, the total increase in VMT was 22 percent. Road construction has increased at a slower pace than population growth, vehicle registrations, and road usage. For example, from 1980 to 2000, VMT increased by 80 percent while urban lane miles increased 37 percent. In light of this increasing congestion, a tolling structure that includes congestion pricing can potentially reduce congestion and the demand for roads during peak hours. Through congestion pricing, tolls can be set to vary during congested periods to maintain a predetermined level of service. One potential effect of this pricing structure is that the price that a driver pays for such a trip, including the toll, may be equal to or close to the total cost of that trip, including the external costs that drivers impose on others, such as increased travel time, pollution, and noise. 
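The pricing idea just described can be made concrete with a small, purely illustrative calculation: under congestion pricing, the toll approximates the external costs a peak-period trip imposes on others, so the driver's total price approaches the trip's full social cost. All dollar figures in the sketch below are invented for illustration and do not come from this report.

```python
# Illustrative sketch only: the cost figures are assumed, not report data.
# Under congestion pricing, the toll is set near the sum of the external
# costs a peak-period trip imposes on others, so the driver's total price
# (private cost + toll) approximates the trip's full social cost.

def congestion_toll(external_costs: dict) -> float:
    """Toll set equal to the sum of per-trip external costs."""
    return sum(external_costs.values())

peak_externalities = {                      # dollars per trip, assumed values
    "added_travel_time_to_others": 2.50,
    "pollution": 0.40,
    "noise": 0.10,
}

toll = congestion_toll(peak_externalities)  # about $3.00 in this example
private_cost = 1.75                         # fuel, vehicle wear, etc. (assumed)
total_price_to_driver = private_cost + toll # approximates the trip's social cost
```

With the toll reflecting external costs, a driver who values the peak-period trip at less than its full social cost has a financial reason to shift route, time, or mode, which is the behavioral mechanism the surrounding text describes.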
Such tolls create financial incentives for drivers to consider these costs when making their driving decisions. In response, drivers may choose to share rides, use transit, travel at less congested (generally off-peak) times, or travel on less congested routes to reduce their toll payments. Such choices can potentially reduce congestion and the demand for road space at peak periods, thus potentially allowing the capacity of existing roadways to accommodate demand with fewer delays. Actual experience with congestion pricing is still fairly limited in the United States, with only five states operating such facilities and six states planning facilities. Some results show that where variable tolls are implemented, changes in toll prices affect demand and, therefore, levels of congestion. For example, on State Route 91 in California, the willingness of people to use the Express Lanes has been shown to be directly related to the price of tolls. A study by Cal Poly State University for the California DOT estimated that a 10 percent increase in tolls would reduce traffic by 7 percent to 7.5 percent, while a 100 percent increase in tolls would reduce traffic by about 55 percent. By adjusting the price of tolls, the flow of traffic can be maintained in the toll lanes so that congestion remains at manageable levels. In the Minneapolis-St. Paul area, a Minnesota DOT study of a proposed system of variable priced HOT lanes called MnPASS estimated that, over time, average speeds and vehicle mileage would increase, while vehicle hours traveled would decrease. By 2010, with tolled express lanes and free HOV lanes, the daily vehicle mileage on the entire system is projected to be 3.6 million compared with 3.2 million if the highways are not tolled. Average overall speed on the system is expected to be 47 mph compared with 42.8 mph if the system is not implemented. Finally, congestion pricing has been in use internationally as well. 
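The Cal Poly figures above can be read as implied price elasticities of toll demand. A minimal sketch follows; the constant-elasticity form over each interval is our assumption for illustration, not a method attributed to the study:

```python
import math

def implied_elasticity(price_ratio, traffic_ratio):
    """Elasticity implied by a toll change and the resulting traffic change,
    assuming a constant-elasticity response over the interval."""
    return math.log(traffic_ratio) / math.log(price_ratio)

# Figures cited from the Cal Poly study: a 10 percent toll increase reduces
# traffic by about 7 percent; a 100 percent increase reduces it by about 55 percent.
small_change = implied_elasticity(1.10, 0.93)   # roughly -0.76
large_change = implied_elasticity(2.00, 0.45)   # roughly -1.15
```

That the implied elasticity is larger in magnitude for the bigger toll increase suggests drivers respond more than proportionally at higher prices, which is what allows operators to hold traffic at a target level of service by adjusting tolls.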
Canada, Great Britain, Norway, Singapore, and South Korea all have roadways that are tolled to manage demand and reduce congestion. For example, in 1996, South Korea implemented congestion tolls on two main tunnels. Traffic volume decreased by 20 percent in the first 2 years of operation, and average traffic speed increased by 10 kilometers per hour. Although congestion pricing was dismissed by some decision makers in the past partly because motorists queuing at toll booths to pay tolls created congestion and delays, advances in automated toll collection have greatly reduced the cost and inconvenience of toll collection. Today, nearly every major toll facility provides for electronic toll collection, which allows tolls to be collected at near highway cruising speed because cars do not have to stop at toll plazas. However, as we reported, there are no widely accepted standards for electronic toll systems, which could become a barrier to promoting the needed interoperability between toll systems. Tolling holds promise to improve investment decisions and raise revenues in the face of growing concerns about the sustainability of traditional financing sources for surface transportation. For many years, federal and state motor fuel taxes have been the mainstay of state highway revenue. In the last few years, however, federal and state motor fuel tax rates have not kept up with inflation. Between 1995 and 2004, total highway revenues for states grew an average of 3.6 percent per year, with average annual increases of 4.9 percent for federal grants and 3 percent for revenues from state sources, according to FHWA data. However, these increases were smaller than increases in the cost of materials and labor for road construction and are not sufficient to keep pace with the robust levels of growth in highway spending many transportation advocates believe is needed.
The federal motor fuel tax rate of 18.4 cents per gallon has not been increased since 1993, and thus the purchasing power of fuel tax revenues has been steadily eroded by inflation. Although the Highway Trust Fund was reauthorized in 1998 and 2005, no serious consideration was given to raising fuel tax rates. Most states faced a similar degradation of the value of their state motor fuel tax revenues—although 28 states raised their motor fuel tax rates between 1993 and 2003, only three states raised their rates enough to keep pace with inflation. State gasoline tax rates range from 7.5 cents per gallon in Georgia to 28.5 cents in Wisconsin. Seven states have motor fuel tax rates that vary with the price of fuel or the inflation rate—including one state that repealed the linkage of its fuel tax rate to the inflation rate effective in 2007. Figure 3 shows the decline in the purchasing power in real terms of revenues generated by federal and state motor fuel tax rates since 1990. Even if federal and state motor fuel tax rates were to keep pace with inflation, the growing use of fuel-efficient vehicles and alternative-fueled vehicles would, in the longer term, further diminish fuel tax revenues. Although all highway motorists pay fuel taxes, those who drive hybrid-powered or other alternative-fueled vehicles consume less fuel per mile than those who drive gas-only vehicles. As a result, these motorists pay less fuel tax per mile traveled. According to the U.S. Energy Information Administration, hybrid vehicle sales in the United States grew twentyfold in the past five years, from 9,400 in 2000 to over 200,000 in 2005, and are projected to grow to 1.5 million vehicles annually by 2025.
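The erosion of a fixed nominal rate can be sketched with simple compounding; the 2.5 percent annual inflation rate below is an illustrative assumption, not CPI data:

```python
def real_rate(nominal_cents, annual_inflation, years):
    """Purchasing power of a fixed nominal tax rate after a period of inflation."""
    return nominal_cents / (1 + annual_inflation) ** years

# The federal rate has been fixed at 18.4 cents per gallon since 1993. At an
# assumed 2.5 percent annual inflation, twelve years later that rate buys
# only what roughly 13.7 cents bought in 1993.
print(round(real_rate(18.4, 0.025, 12), 1))
```

The same arithmetic, applied to state rates, is behind the finding above that only three states raised rates enough to keep pace with inflation.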
Sales of alternative-fueled vehicles, such as alcohol-flexible-fueled vehicles, are projected to increase to 1.3 million in 2030, with electric and fuel cell technologies projected to increase by 2030 as well. As concerns about the sustainability of traditional roadway financing sources grow, tolling can potentially target investment decisions by adhering to the user pays principle. National roadway policy has long incorporated the user pays concept, under which the costs of building and maintaining roadways are paid by roadway users, generally in the form of excise taxes on motor fuels and other taxes on inputs into driving, such as taxes on tires or fees for registering vehicles or obtaining operator licenses. This method of financing is consistent with one measure of equity that economists use in assessing the financing of public goods and services, the benefit principle, which measures equity according to the degree that readily identifiable beneficiaries bear the cost. As a result, the user pays concept is widely recognized as a critical anchor for transportation policy. Increasingly, however, decision makers have looked to other revenue sources—such as income, property, and sales tax revenues—to finance roads. Using these taxes results in some sacrifice of the benefit principle because there is a much weaker link to the benefits of roadway expenditures for those taxes than there is for fuel taxes. Tolling, however, is more consistent with user pays principles because tolling a particular road and using the toll revenues collected to build and maintain that road more closely link the costs with the distribution of the benefits that users derive from it. Motor vehicle fuel taxes can provide a rough link between costs and benefits but do not take into account the wide variation in costs required to provide different types of facilities (i.e., roads, bridges, tunnels, interchanges), some of which can be very costly.
Tolling can also potentially lead to more targeted, rational, and efficient investment by state and local governments. Roadway investment can be more efficient when it is financed by tolls because the users who benefit will likely support additional investment to build new capacity or enhance existing capacity only when they believe the benefits exceed the costs. When costs are borne by nonusers, the beneficiaries may demand that resources be invested beyond the economically justifiable level. Tolling can also provide the potential for more rational investment because, in contrast to most grant-financed projects, toll project construction is typically financed by bonds sold and backed by future toll revenues, and projects must pass the test of market viability and meet goals demanded by investors. However, even with this test there is no guarantee that projects will always be viable. A tolling structure that includes congestion pricing can also help guide capital investment decisions for new facilities. As congestion increases, tolls also increase and such increases (sometimes referred to as “congestion surcharges”) signal increased demand for physical capacity, indicating where capital investments to increase capacity would be most valuable. At the same time, congestion surcharges would provide a ready source of revenue for local, state, and federal governments, as well as for transportation facility operators in order to help fund these investments in new capacity that, in turn, can reduce delays. Over time, this form of pricing can potentially influence land-use plans and the prevalence of telecommuting and flexible workplaces, particularly in heavily congested corridors where external costs are substantial and congestion surcharges would be relatively high. Tolling can also be used as a tool for leveraging increased private-sector participation and investment. 
In March 2004, we reported that three states—California, Virginia, and South Carolina—had pursued private-sector investment and participation in major highway projects. Since that time, Virginia has pursued additional projects, and Texas has contracted with a private entity to participate and invest in a major highway project. Tolling can be used to enhance private participation because it provides a mechanism for the private sector to earn the return on investment it requires to participate. Involving the private sector allows state and local governments to build projects sooner, conserve public funding from highway capital improvement programs for other projects, and limit their exposure to the risks associated with acquiring debt. Federal and state policymakers have begun looking toward future options for long-term highway financing. For example, SAFETEA-LU established the National Surface Transportation Infrastructure Financing Commission to study prospective Highway Trust Fund revenues and assess alternative approaches to generating revenues for the Fund. SAFETEA-LU also authorized a study, to be performed by the Public Policy Center of the University of Iowa, to test an approach to assessing highway use fees based on actual mileage driven. This approach would use an onboard computer to measure the miles driven by a specific vehicle on specific types of highways. A few states have also begun looking toward long-term financing options. Oregon, the first state to enact a motor fuel tax, is sponsoring a study on the technical feasibility of replacing the gas tax with a per-mile fee. During 2006, volunteers will have onboard mileage-counting equipment added to their vehicles and will, for one year, pay a road user fee equal to 1.2 cents a mile instead of paying the state’s motor fuel tax. But beyond the questions of financing and financing sources, broader issues and challenges exist.
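The arithmetic behind a per-mile fee like Oregon's can be sketched as follows; the 24-cent-per-gallon state rate and the fuel-economy figures are illustrative assumptions, not values taken from the pilot:

```python
def fuel_tax_cents_per_mile(tax_cents_per_gallon, miles_per_gallon):
    """Per-mile equivalent of a per-gallon fuel tax at a given fuel economy."""
    return tax_cents_per_gallon / miles_per_gallon

# At an assumed 24-cent/gallon state rate, a 20-mpg vehicle pays the
# equivalent of a 1.2-cent/mile fee, while a 50-mpg hybrid pays well
# under half that per mile. A flat per-mile fee would close that gap.
conventional = fuel_tax_cents_per_mile(24, 20)   # 1.2 cents per mile
hybrid = fuel_tax_cents_per_mile(24, 50)         # 0.48 cents per mile
```

This is why a mileage-based fee is insulated from the revenue erosion, described earlier, that rising fuel efficiency causes for per-gallon taxes.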
As the baby boom generation ages, mandatory federal commitments to health and retirement programs will consume an ever-increasing share of the nation’s gross domestic product (GDP) and federal budgetary resources, placing severe pressures on all discretionary programs, including those that fund defense, education, and transportation. Our simulations show that by 2040, revenues to the federal government might barely cover interest on the debt—leaving no money for either mandatory or discretionary programs—and that balancing the budget could require cutting federal spending by as much as 60 percent, raising taxes by up to 2 ½ times their current level, or some combination of the two. As we have reported, this pending fiscal crisis requires a fundamental reexamination of all federal programs, including those for highways. This reexamination should raise questions such as whether a federal role is still needed, whether program funding can be better linked to performance, and whether program constructs are ultimately sustainable. It is in this context that tolling has promise for addressing the challenges ahead. In particular, we have suggested that a reexamination of the federal role in highways should include asking whether the federal government should even continue to provide financing through grants or whether, instead, it should develop and expand alternative mechanisms that would better promote efficient investments in, and use of, infrastructure and better capture revenue from users. According to our survey of state transportation officials, there are toll road facilities in 24 states and plans to build toll road facilities in 7 other states. Tolling grew in the 1940s and 1950s, but after a period of slower growth, states’ tolling began to expand again in the 1990s. The 5 states that began tolling after 1990 are currently planning additional toll roads. 
Officials in states that have toll roads or are planning toll roads indicated that their primary reasons for using or considering the use of a tolling approach were to address transportation shortfalls, finance new capacity, and manage congestion. Transportation officials in some states, however, told us that tolling is not now seen as feasible because there is little need for new tolled capacity, tolling revenues would be insufficient, and they would face public and political opposition to tolling. Currently, there are toll road facilities in 24 states throughout the United States, and there are plans to build toll road facilities in 7 additional states. Figure 4 shows the states that have at least one existing toll road, according to our survey of transportation officials from all 50 states and the District of Columbia and our review of FHWA toll-related programs. (See app. III for the survey questions.) Tolling grew in the 1950s, slowed for several decades, and again began to expand rapidly in the 1990s. Five states—California, Colorado, Minnesota, South Carolina, and Utah—opened their first toll roads from 1990 to 2006 and, according to our survey of state transportation officials, all five are currently planning, or in some stage of building, at least one new toll road. Large states that have recently built toll roads, such as California, Florida, and Texas, are also moving ahead with plans to build and expand systems of tolls. In Texas, for example, the DOT’s Turnpike Authority Division is developing a proposed multiuse, statewide network of transportation routes that will incorporate existing and new highways called the TTC, while three other regional toll authorities in Austin, Dallas, and Houston are also planning toll roads. In California, a state legislative initiative in 1989 led to the development of toll roads in Orange County, including the State Route 91 Express Lanes and State Route 125 in San Diego. 
And in Florida, the DOT-run Florida Turnpike Enterprise operates nine tolled facilities that include almost 500 miles of toll roads and is studying the feasibility of implementing tolling to manage congestion on other facilities, including Interstate 95 in Miami-Dade County. According to our survey of state transportation officials and our review of state applications to FHWA tolling pilot programs, a total of 23 states have plans to build toll road facilities. (Fig. 5 summarizes the status of states’ plans for highway tolling.) Eleven of these states have received the required environmental clearances and have projects that are under design or in construction. The remaining 12 states do not have projects that have proceeded this far, but do have plans to build toll road facilities, according to their respective state transportation officials. Of these 23 states, 16 have existing toll roads and are planning additional toll roads, and 7 are planning their first toll roads. Officials in most states planning toll roads indicated that the primary reasons for considering a tolling approach were to address what state officials characterized as transportation funding shortfalls, to finance and build new capacity, and to manage congestion. States that are not planning to build toll roads have found that tolling is not feasible or have made other choices. Transportation officials indicated that one of the primary reasons for using or considering a tolling approach was to respond to what the officials described as shortfalls in transportation funding. In Georgia, for example, an official told us that tolling has become a strategy because there is a significant gap in transportation funding, and the motor fuel tax rate is the lowest in the country, 7.5 cents per gallon. In North Carolina, where the North Carolina Turnpike Authority was established in 2002, an official told us that traditional funding is not adequate to address transportation needs. 
North Carolina has estimated that, over the next 25 years, it will need $85 billion in new transportation projects to accommodate the state’s growth. With a projected shortfall of $30 billion and what the official described as a lack of political will to increase motor fuel tax rates, the state has adopted tolling as one strategy to address transportation needs. In Utah, state transportation officials have estimated a $16.5 billion shortfall through 2030 in funding for highway projects and are considering tolling, along with other funding alternatives. Finally, an official told us that, in spite of a motor fuel tax rate increase in 2003 and a $200 million bonding program, Indiana has a 10-year, $2.8 billion shortfall in highway funding and is viewing tolling as one financing tool to close the gap. The Indiana DOT has operated the Interstate 80/Interstate 90 Indiana Toll Road for 50 years and would like to apply that experience in operating toll roads to new roads. In other states, transportation officials conducted financial assessments on specific highway projects and determined that, to complete the projects, tolling would be required as a source of revenue. For example, in Missouri, a funding analysis performed by the Missouri DOT found that the estimated construction costs for the Interstate 70 reconstruction exceed the available federal, state, and local funding sources, and the project cannot be advanced without tolling or other revenue increases. Missouri DOT estimates that the Interstate 70 reconstruction project will cost between $2.7 billion and $3.2 billion; with a current funding shortfall of $1 billion to $2 billion annually, tolling is being actively considered to close that gap. Likewise, studies by the Texas DOT determined that tolling would be required on particular highway projects. For example, reconstructing a 23-mile portion of Interstate 10 near Houston was estimated to cost $1.99 billion.
Available federal, state, and local funds amounted to $1.75 billion, a shortfall of $305.2 million. The Harris County Toll Road Authority invested $238 million for the right to operate tolled lanes within the facility. In addition, the Texas Transportation Commission, which oversees the state DOT, ordered that all new controlled-access highways should be considered as potential toll projects that will undergo toll feasibility studies. The commission views tolling as a tool that can help stretch limited state highway dollars further so that transportation needs can be met. Moreover, states are looking for whatever financial relief tolling can provide. In some states, tolling is being considered, even though toll revenues are expected to only partially cover the costs of particular projects. In Mississippi, for example, the state DOT indicated that tolling may be advanced if toll revenues cover 25 percent to 50 percent of a facility’s cost. In Arkansas, tolling is being considered if toll revenues fund as little as 20 percent of the initial construction costs, provided tolls pay for operations and maintenance. To identify state characteristics that are linked with state decisions to toll, we performed a correlation analysis to examine the relationship between those decisions and various state demographic and financial characteristics. Although certain characteristics in a state’s finances and tax policies might be related to financial need, our correlation analysis found only limited relationships between various state financial and demographic measures and states’ decisions to toll or not to toll. For example, although we found a slight inverse relationship between a state’s decision to toll and the level of its motor fuel taxes, this relationship is not strong enough to conclude that states planning toll roads are more likely to be the ones with lower motor fuel tax rates than other states. 
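A correlation of the kind described above can be sketched with a point-biserial (Pearson) coefficient between a continuous state characteristic and a 0/1 tolling indicator. The state figures below are illustrative placeholders, not our actual survey data:

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient (point-biserial when ys is 0/1)."""
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

# Illustrative data: annual VMT (billions) for six hypothetical states and
# an indicator for whether each state tolls or plans to toll.
vmt = [330, 45, 18, 250, 95, 12]
tolls = [1, 0, 0, 1, 1, 0]
r = pearson_r(vmt, tolls)   # about 0.82 on this made-up sample
```

A coefficient near +1 would indicate that larger states (by VMT) are the ones tolling, while a value near zero, as we found for motor fuel tax rates, would indicate little relationship.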
However, we found that both the size of the state, whether measured by population or by VMT, and whether it is growing rapidly, again measured by population or VMT growth, are directly related to states’ decisions to toll. (For more information on the results of our correlation analysis, see app. II.) According to transportation officials, states are using or considering a tolling approach to finance new capacity that cannot otherwise be funded under current and projected transportation funding scenarios. Such new capacity may be in the form of new highways or new lanes on existing highways. For example, in Colorado, the state DOT is studying the investment of $3 billion in increased highway capacity, with 10 percent, or $300 million of the investment, coming from federal, state, and local governments and the remainder coming from tolls. With a $48 billion shortfall projected through 2030 and the percentage of congested lane-miles projected to increase by 161 percent, tolling is being considered. Projections by the state DOT in Colorado suggest that revenues are sufficient to allow for only spot improvements on a few transportation corridors over the next 25 years and, without tolling, none can undergo a major upgrade, and new capacity cannot be added. Some states are using tolling to supplement their traditional motor fuel tax transportation funding through private-sector involvement and investment. Tolling is being used as a means to gain access to private equity and to shift the investment risk, in part, to the private sector. Currently, 18 states have some form of public-private partnership (PPP) legislation, allowing for innovative contracting with the private sector. Many of the 18 states have PPP programs that were established to allow for toll concession agreements to finance highway projects. For example, Oregon and Texas are specifically looking to attract private investment as a new source of financing.
The TTC, as shown in figure 6, is being financed, in part, through a series of PPPs. The Texas DOT has contracted with Cintra-Zachry to develop a long-term development plan for the corridor, which includes the potential to construct and operate the first 316-mile portion of TTC 35, from Dallas to San Antonio. Cintra-Zachry has pledged an investment of $6 billion and a payment of $1.2 billion for the right to build, operate, and collect tolls for up to 50 years on the initial segment of TTC 35. In Oregon, the Office of Innovative Partnerships and Alternative Funding—an Oregon DOT office empowered to pursue alternative funding, including private investment through tolling—has received proposals from the Oregon Transportation Improvement Group, a consortium led by the Macquarie Infrastructure Group, to complete two tolled facilities in the Portland area. In both Texas and Oregon, the projects were approved under SEP-15, which enabled the two states to waive certain federal requirements and to negotiate with the project developers before awarding contracts. Acceptance of the projects under SEP-15 does not commit federal-aid funding for the projects, and FHWA retains the right to declare the project ineligible for federal-aid funds at any time during the SEP-15 process until there is formal FHWA project approval. Growing freight traffic is also prompting some states to consider using tolls to pay for capacity enhancement. Examples include Interstate 81 in Virginia and Interstate 70 in Missouri. According to the original design of Interstate 81, built beginning in 1957, truck traffic would account for 15 percent of traffic on the highway; truck traffic now accounts for up to 35 percent, and traffic levels are expected to double by 2035. Interstate 70, originally designed to carry up to 14,000 vehicles per day in rural areas, now carries up to 58,000 per day, and truck traffic, which was intended to be 10 percent of total traffic, is now 25 percent.
Both interstates are major freight routes where truck traffic is expected to continue to increase. In 2003, the Virginia DOT received FHWA conditional provisional approval under the Interstate System Reconstruction and Rehabilitation Pilot Program to toll vehicles other than cars and pickup trucks (freight trucks and buses) on Interstate 81. Likewise, for Interstate 70, the Missouri DOT received conditional provisional approval in July 2005 to participate in the same pilot program. In certain cases, proposals for truck-only toll (TOT) lanes seek to manage congestion while increasing capacity by diverting trucks from passenger routes to dedicated lanes. TOT lanes are being considered on heavy freight routes, including Interstate 81 in Virginia, TTC in Texas (see fig. 7), and routes throughout the Atlanta Metropolitan Region in Georgia. While growing congestion and traffic volumes have increased the demand for additional highway capacity, transportation officials told us that tolling is being considered as a tool to manage congestion. Applying tolls that vary with the level of congestion—congestion pricing—can reduce congestion and the demand for roads because tolls that vary according to the level of congestion can be used to maintain a predetermined level of service. Such tolls create additional incentives for drivers to avoid driving alone in congested conditions when making driving decisions. In response, drivers may choose to share rides, use public transportation, travel at less congested (generally off-peak) times, or travel on less congested routes, if available, to reduce their toll payments. Tolling for congestion management can take the form of HOT lanes, which are adjacent to nontolled lanes. HOT lanes are used to manage congestion by creating a tolling structure that varies toll prices according to the level of congestion. Such a tolling structure can reflect the external costs that users of the facility impose on others. 
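A minimal sketch of such a variable-toll structure follows. The linear schedule is a hypothetical simplification; operating facilities such as SR-91 and MnPASS use more elaborate, traffic-responsive algorithms:

```python
def hot_lane_toll(occupancy, min_toll=0.25, max_toll=8.00):
    """Illustrative dynamic toll: scale linearly within a posted band with
    measured lane occupancy (0.0 = free-flowing, 1.0 = saturated)."""
    occupancy = min(max(occupancy, 0.0), 1.0)   # clamp sensor readings
    return round(min_toll + (max_toll - min_toll) * occupancy, 2)
```

The 25-cent-to-$8.00 band mirrors the MnPASS range cited in this report. Because the toll rises with measured congestion, marginal trips are priced off the lane in heavy traffic, which is how a predetermined level of service can be maintained.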
In some cases, HOV lanes that had been underused have been converted to HOT lanes, allowing HOVs to continue to use the lanes as HOV lanes while permitting single-occupancy vehicles to use them provided they pay a toll. In 5 of the 23 states planning toll roads, efforts to manage congestion on existing capacity are prompting tolling. California, Colorado, Texas, Virginia, and Washington all have HOT lane projects planned that will use variably priced tolls to alleviate congestion by managing the level of traffic. All of these states have received grants under FHWA’s Value Pricing Pilot to either develop or implement the projects. In California, the State Route 91 Express Lanes, as shown in figure 7, opened in 1995, and the Interstate 15 Express Lanes, opened in 1998, have dedicated, tolled lanes where the flow of traffic is managed through toll prices that vary daily and hourly. Tolls on State Route 91 range from as little as $1.10 to as much as $8.50. During periods of heavier demand and congestion, toll prices are higher so that fewer people will use the lanes, and a consistent flow of traffic can be maintained. In Texas, the Katy Freeway in Houston was originally designed to carry 80,000 vehicles per day. With traffic now exceeding 200,000 vehicles per day, the Texas DOT, in cooperation with FHWA, opened its HOV-3 lanes (lanes that could only be used by carpools of 3 or more passengers) to two-passenger vehicles paying a toll, operating them as express toll lanes beginning in 1998. Texas DOT is also building managed lanes, scheduled to open in 2009, that will have peak toll pricing between 6:00 a.m. and 11:00 a.m. and between 2:00 p.m. and 8:00 p.m. The result, in both cases, is a system in which commuters pay a toll for access to less congested lanes. More recently, in Minnesota, where Minneapolis and St.
Paul have been experiencing rapid growth in congestion and, according to the Minnesota DOT, HOV lanes were underused, the state legislature authorized the conversion of the Interstate 394 HOV lanes to HOT lanes. The Interstate 394 MnPASS optional toll lanes project opened in May 2005 with “dynamic pricing” to adjust tolls anywhere from 25 cents to $8.00, according to traffic levels. In some states, tolling or variable pricing—in which toll rates differ depending on conditions such as the time of day or location—is used specifically to manage freight congestion. In October 2005, for example, the Delaware DOT launched an initiative designed to address problems with freight congestion on the Delaware Turnpike (Interstate 95) by encouraging trucks to travel at night. Tolls on trucks between 10:00 p.m. and 6:00 a.m. are 75 percent less than tolls during more congested daytime hours. Another effort that incorporates variable pricing, but is not a traditional form of facility-based tolling, is a road user fee system that is being developed with an FHWA Value Pricing Pilot grant by the Oregon Office of Innovative Partnerships and Alternative Funding in cooperation with Oregon State University. The system assesses mileage-based fees in place of motor fuels taxes, and the fees vary for miles traveled during rush hour and within cordoned downtown areas. The reason most frequently cited by state transportation officials for not tolling is that tolling is not feasible. More specifically, there is little need for new tolled capacity, tolling revenues would be insufficient, or there is public and political opposition to tolling, as follows: Little need for new tolled capacity. Transportation officials in many states indicated that low traffic volumes, a lack of congestion, and low demands for additional capacity make tolling impractical.
In states such as Montana, North Dakota, South Dakota, and Wyoming, the population density and percentages of urban vehicle miles traveled are too low to support tolling. Insufficient revenues. In some states, tolling is not considered because toll revenues would not cover the costs of projects. In some cases, the issue involves traffic volumes that are so low that a tolling approach would be impractical. In those states, transportation officials explained that even if a tolling approach were to be considered, tolls would have to be prohibitively high to fund capacity enhancements and would likely result in traffic diversion to nontolled, alternative routes. For example, a transportation official in Kansas told us that there are few routes in Kansas that have a high enough level of traffic to make them viable for tolling. Therefore, opportunities for tolling are limited under the classic definition of feasibility, for which toll revenue must be adequate to fund construction, maintenance, and operations of a facility. Under this definition, most roads would not generate sufficient revenues from tolls to fund new highway capacity. In other cases, where traffic volumes are higher, transportation officials told us that a tolling approach is not even considered unless it can be demonstrated that the project will be self-sustaining. In Massachusetts, for example, an evaluation of HOT lanes determined that the toll rates people would be willing to pay would not raise enough revenue to fund the capital expenses to construct the facility. Public and political opposition. Officials from many states that are not pursuing tolling mentioned some form of public or political opposition to toll roads that has dissuaded transportation professionals from pursuing tolling. The public or political opposition is so strong, according to officials in some states, that tolling is studied only with great caution and sensitivity, if at all. 
While some states mentioned the lack of a tolling culture as a reason for not tolling, other states that have tolled roads for years cited the long-standing presence of toll roads as a reason for not planning to expand tolling. In New Jersey, New Hampshire, and Ohio—states with long-established toll roads—state officials said the presence of tolls has instilled public opposition to them. For example, New Jersey officials told us that opposition to new toll roads is strong because many state border crossings and major highways are already tolled. In other cases, DOTs face political opposition to tolling. In Mississippi, where other toll projects are still being considered, the state DOT withdrew its application to toll Interstate 10 under the Interstate System Reconstruction and Rehabilitation Pilot Program in response to political opposition. States that do not toll and are not planning to construct toll roads have also chosen options other than tolling to finance highway construction and maintenance. For example, 13 states that are not planning toll roads have used “GARVEE bonds” as grant anticipation financing to borrow funds and pledge future federal-aid highway revenues for repayment. South Carolina used state infrastructure bank loans and federal credit assistance, along with state and local funds, for its “27 in 7 Accelerated Program” through which it is completing $5 billion worth of highway infrastructure capacity and improvements in 7 years, compared with the 27 years it estimated would be needed under conventional financing means. A smaller number of states, such as Iowa and Tennessee, have remained committed to a primarily pay-as-you-go approach, building new capacity only when money becomes available through motor fuel tax revenues or other state revenues. 
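The "classic definition of feasibility" described earlier—toll revenue adequate to fund construction, maintenance, and operations of a facility—can be sketched as a simple screening calculation. The figures and function names below are purely hypothetical for illustration and are not drawn from any state study:

```python
# Illustrative toll feasibility screen (hypothetical figures, not from any
# state study): does projected toll revenue cover annualized construction
# costs plus operations and maintenance?

def annualized_capital_cost(capital, rate, years):
    """Annual debt service on capital borrowed at a fixed rate
    (standard annuity formula)."""
    return capital * rate / (1 - (1 + rate) ** -years)

def is_feasible(aadt, toll, capital, om_cost, rate=0.05, years=30):
    """Classic feasibility test: annual toll revenue >= annual costs.

    aadt    -- annual average daily traffic (vehicles/day)
    toll    -- average toll per vehicle (dollars)
    capital -- construction cost (dollars)
    om_cost -- annual operations and maintenance cost (dollars)
    """
    annual_revenue = aadt * 365 * toll
    annual_cost = annualized_capital_cost(capital, rate, years) + om_cost
    return annual_revenue >= annual_cost

# A low-volume rural road: 4,000 vehicles/day at a $1.00 toll cannot
# service a $150 million construction bond plus $2 million/year O&M.
print(is_feasible(4_000, 1.00, 150e6, 2e6))   # infeasible
# A high-volume urban corridor: 80,000 vehicles/day at a $1.50 toll.
print(is_feasible(80_000, 1.50, 150e6, 2e6))  # feasible
```

Under these assumed inputs, the screen illustrates why, as officials noted, most low-volume roads fail the classic test while higher-volume corridors may pass it.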
Drawing on our analyses of states’ experience with tolling and on a review of selected published research on tolling, we have identified two broad types of challenges that transportation officials have encountered when attempting to implement tolling: (1) the difficulty of obtaining political and public support in the face of opposition from the public and political leaders and (2) the difficulty of implementing tolling given a lack of, or overly restrictive, enabling toll legislation; concerns about potential traffic diversion resulting from toll projects; and a need to coordinate with other states and regions when toll projects cross jurisdictional boundaries. (See fig. 8.) While these two broad types of challenges may make a tolling approach difficult to adopt or implement, states have nevertheless identified specific ways to resolve or mitigate the challenges. We discuss the strategies that states have used to address tolling challenges in the following section of this report. State transportation officials who are implementing or are considering implementing tolling say that garnering political and public support is perhaps the greatest challenge to tolling. Some studies have also reported this challenge. For example, in a recently issued report, the Transportation Research Board cited studies that identified the unpopularity of toll roads and public skepticism as fundamental obstacles to employing a tolling approach. The report identified the inconvenience of paying tolls, being forced to pay twice, and inequities that a tolling approach would produce as the most commonly expressed objections. The Congressional Budget Office noted in its report that opponents of toll roads often charge that such roads are unfair to motorists with low incomes who may not be able to afford them. This concern is intensified if it involves trips to work and the motorist has few alternatives. 
In a policy brief issued by the Brookings Institution, the author notes that a drawback of tolls is that people think these tolls would be just another tax, forcing them to pay for something they have already paid for through gasoline taxes. We have also noted in prior work that political opposition to tolling has been substantial because of concerns about equity and fairness. According to our analysis, a number of factors influence public and political perceptions about tolling. (See fig. 9.) Double taxation arguments. The most frequent objection to tolls is the argument that motorists traveling on toll roads are being asked to pay twice; that is, a new roadway toll is being levied in addition to existing taxes. States have a number of dedicated sources of revenues that are used to finance highway capital programs. According to transportation experts, the public generally believes that transportation costs are already being paid for through motor fuel, property, and sales taxes, as well as license and registration fees, and in the case of trucks, special tire taxes and weight-distance fees. Therefore, new road user fees, such as tolls, are often viewed as new taxes. Transportation officials in a number of states reported that concerns about double taxation limited their consideration of a tolling approach to varying degrees. In Wisconsin, where tolling is not being implemented, a transportation official told us that the public understands that the fuel excise tax and other user fees are used to fund highway construction. Therefore, the public would view tolling as another tax being imposed on them. This type of concern can be compounded when tolls are being proposed for an existing facility. For example, in Missouri, consideration of tolling to pay for Interstate 70’s reconstruction faced opposition, in part, because the public believes that the interstate highway has already been paid for, according to state DOT officials. 
Missouri citizens generally regard tolls as government’s way of making users pay again, according to state transportation officials. This view is supported by Missouri’s history of commitment to free roads. Citizens in Texas have voiced similar arguments against tolls. In Houston, for example, plans to convert State Highway 249 to a tolled road have met with some resistance on the grounds of double taxation. At the public hearing organized to hear views on the conversion, officials estimated that an overwhelming majority of those in attendance were against the conversion because they felt that the road had already been paid for. Strong opposition can arise even before a roadway has been completed. A proposal to toll a nearly completed portion of U.S. Route 183 north of Austin was retracted after citizens expressed strong opposition. Transportation officials told us that these citizens believed that since the road was nearly complete, introducing tolls would amount to double taxation. Projects are not self-sustaining. Transportation officials told us that they often find it difficult to demonstrate that tolling is reasonable and necessary because revenues collected from toll projects usually do not fully cover project costs. In Oregon, for example, a financial analysis of toll proposals indicated that the proposals under consideration would not be economically feasible through the collection of tolls alone, according to transportation officials. A private consortium was selected to negotiate with the state DOT for the purpose of advancing the projects. However, in the view of this consortium, the new toll road can be financially viable only if existing parallel roads are tolled. In some states, transportation officials stated toll projects are not even considered unless it can be demonstrated that the project will be self-sustaining. 
In Kentucky, for example, a transportation official told us that traffic volumes alone can rarely financially sustain rural roads through tolling and emphasized that it would be difficult to garner public support for a toll project that required partial subsidization. A transportation official in Arkansas told us that tolling is considered if toll revenues would fund at least 20 percent of initial construction costs, but that a toll project is considered only if it can also be shown that toll revenues would cover all operations and maintenance costs. In Florida, toll proposals must pass a financial feasibility test and prove that the proposed projects will be self-sustaining before the projects are further considered for advancement. According to Florida Turnpike Enterprise officials, the standard for feasibility is that by the twelfth year of operation, projected revenues must cover at least 50 percent of operating costs and debt service and by the twenty-second year of operation, projected revenues must cover all costs and debt service. Concerns about inequities. Another objection to the use of tolling involves concerns about the inequities that the approach would produce. According to our review, groups that could be adversely affected by tolling often object, as follows, on the basis of geographic inequity, income inequity, and user inequity: Geographic inequity. Concerns about geographic inequity reflect the belief that certain regions will benefit disproportionately from a tolling approach while other regions will be unfairly disadvantaged. Using a tolling approach to address a transportation need in one part of a state might free up federal and state funding that might have otherwise been used to address that need. This available federal and state funding could then be used to support roads in another part of the state, creating an unfair burden on those motorists who are being tolled.
In Florida, for example, there have been concerns about the distribution and use of funds collected for projects in one region (southern Florida) being distributed to and used for projects in another region of the state (northern Florida). In the 1990s, three southern counties—Palm Beach, Broward, and Dade—secured legislation that would require the Florida Turnpike Enterprise to calculate the dollar amount collected in those counties and determine how much of that amount was returned to the counties to be used on their facilities. As a result, the Florida Turnpike Enterprise created a formula to implement that law, reflecting the need to balance collections in those counties with what is being spent on facilities in those counties. Income inequity. Concerns about the unequal ability of lower-income and higher-income groups to pay tolls are often cited by transportation experts as an important political barrier to the acceptance of a tolling approach. Those opposing tolls on the basis of income inequity argue that since tolls would represent a higher portion of the earnings of lower-income households using the tolled road, tolling imposes a greater financial burden on them and, therefore, is unfair. In Maryland, this concern resulted in removing HOT lanes from consideration in state transportation plans, according to FHWA. In June 2001, the governor decided to remove HOT lanes from the state transportation plan because of the perceived inequity of linking an easier commute with a person’s ability to pay. However, in the following year, the governor’s office initiated a revised feasibility study of value pricing that included investigating and addressing the equity issues that were raised earlier, while encouraging the air quality and congestion relief benefits of HOV lanes. User inequity. User inequity involves the belief that some classes of system users are being unfairly disadvantaged.
The trucking industry, freight industry, and businesses may view tolling in this light. For example, a transportation official in Virginia told us that a proposal to toll only trucks on Interstate 81 is generally viewed by truckers as an unfair burden being imposed on them. This transportation official also noted that if the proposal is implemented, truckers will seek alternative routes to avoid the tolls. In Missouri, officials representing fuel marketers, fuel retailers, gas stations, and convenience stores told us that they consider a tolling proposal unfair. According to the industry officials, these businesses have spent millions of dollars on their exit locations along the interstate and believe they have paid their fair share of taxes. Consideration of a tolling approach to enhance mobility on the interstate could potentially have an adverse impact on these businesses because some customers may choose alternative routes. General views on government. According to transportation officials with whom we spoke, public opposition to tolling can be exacerbated by a mistrust of government generally. They said that when government proposes tolls as a way to finance transportation, the public generally considers the tolls as a new tax. This mistrust can also be directed specifically at state transportation departments. For example, in 1992, the Missouri DOT proposed a 15-year plan that included a number of promised projects that would be undertaken with an increase in the state’s gas tax. However, according to state transportation officials, after gaining support for the increase, the state DOT did not deliver the promised projects as scheduled. These officials said this failure to deliver contributed to the public’s mistrust of the DOT and its resistance to attempts by the DOT to secure toll authority over the years. 
In some states, concerns about the cost and management of major highway and bridge programs have reflected dissatisfaction with the performance of state transportation departments. For instance, as we reported in 2002, a legislative commission in Virginia reported on cost overruns and schedule delays in the state’s highway program in 2000 and found that cost estimates prepared for projects were substantially below the final costs. This commission identified a potential funding gap of around $3.5 billion in the state’s $9 billion, 6-year transportation plan. Such concerns about past performance can present challenges for transportation officials who are attempting to advance a tolling approach. Mistrust can also extend to private entities involved in toll road development. As we reported in 2004, states engaging private-sector sponsorship and investment can relinquish political control over their ability to set toll rates and to carry out infrastructure improvements on competing publicly owned roadways. For example, California could not make any improvements along State Route 91—a project privately financed with a combination of equity, bank, and institutional debt—until the year 2030 because a noncompete clause created a 1.5-mile protection zone along each side of the corridor. According to officials from the Orange County Transportation Authority, public pressure on the state DOT to improve the nontolled portion of the road motivated the county to purchase the road back from the private consortium. In some states, transportation officials told us that they face challenges in implementing toll projects. (See fig. 10.) We identified the following three implementation-related challenges: Secure legislative authority to toll. Address the impact of traffic diversion caused by tolling. Coordinate with other states or regions. Secure the authority to toll.
Not having, or having restrictions built into, enabling toll legislation poses a challenge for some transportation officials as they develop tolling options. They told us that limited legislative authority for tolling hampered their ability to consider a full range of options to address the transportation needs in their states. Ultimately, these transportation officials sought methods other than tolling to address transportation needs or delayed the development of an identified toll project as they pursued tolling legislation. Missouri’s experience illustrates the challenges transportation officials face when the state DOT does not have the statutory authority to use a tolling approach to advance a project. State transportation officials are considering turning Interstate 70 into a toll road to finance capacity improvements. However, voters would first have to approve an amendment to the state constitution to put toll roads under the state DOT’s jurisdiction—a measure that voters rejected in 1970 and 1992. To avoid another rejection, state DOT officials are exploring alternative financing methods under existing authority, including the use of a nonprofit corporation to build, operate, and maintain the toll project. Transportation officials emphasize, however, that under this option, the entity would not be able to spend state highway revenues for the project—the same restriction that would prevent the state DOT from advancing a toll project—because state funds can be used only for the purposes enumerated in the state constitution, and toll roads were not one of those purposes. Restrictions in enabling legislation can also hamper attempts to implement toll projects. For example, in the mid-1990s, the Minnesota legislature authorized a study of public-private partnerships and tolling as one approach to address congestion and leverage state transportation investments. 
In conjunction with that study, the state DOT requested public-private partnership tolling proposals and received five proposals in response from private firms. Ultimately, the state DOT recommended a proposal to build Trunk Highway 212 as a toll facility and proceeded to complete a development agreement with a private partner. However, the proposal was vetoed under the provision of the enabling legislation that gives veto authority to local units of government affected by a project. As a result, the Trunk Highway 212 project is now being completed under traditional methods and, according to transportation officials, is taking longer to complete due to funding limitations. New legislation, passed in 2003, eliminated the local veto authority for converting HOV lanes to HOT lanes on existing facilities, giving transportation officials more flexibility to implement a tolling approach. This legislation, which followed a DOT study of HOV lane usage on Interstate 394, authorizes the conversion of the HOV lanes on Interstate 394 to HOT lanes to improve their efficiency. Subsequently, through a design-build-operate agreement, a private partner was secured to bring resources to the table and run the operation. Address concerns about traffic diversion. Traffic diversion resulting from tolling may adversely affect people, municipalities, and businesses. Concerns about such diversion have surfaced in comments by municipalities, businesses, and the trucking industry on a proposal to toll trucks on Interstate 81 in Virginia, according to state transportation officials. Affected Virginia municipalities have suggested, for example, that trucks will leave the interstate to avoid tolls and wind up on local roads. Such diversion has the potential to create congestion, increase accident and fatality rates, and increase the municipalities’ costs of maintaining these roads. 
The affected municipalities have also expressed concerns about the potential negative effects on economic development that may result from the loss of business along toll routes. According to a recently completed study that considered a number of different proposals, traffic diversion is likely to occur if Interstate 81 is tolled. The study estimated that up to one in four trucks would divert to nearby parallel routes if a high toll rate was applied to commercial vehicles. Concerns about traffic diversion are not limited to new toll roads. The Ohio Turnpike opened in the 1950s and, in the 1990s, traffic studies revealed that commercial vehicles were increasingly diverting onto parallel untolled roads, creating safety and other concerns. In response, the governor released the Northern Ohio Freight Strategy in October 2004, which included a policy to reduce tolls on commercial vehicles in order to redirect traffic back to the Ohio Turnpike. Subsequent traffic studies revealed this strategy was mostly successful. Coordinate on projects that involve multiple states or jurisdictions. Coordination among states and regional jurisdictions is likely to become a growing issue because increasing traffic congestion in metropolitan areas is likely to require regional solutions. Without good coordination with neighboring jurisdictions, individual jurisdictions may find it difficult to solve traffic congestion. Even if one jurisdiction manages to reduce congestion within its system, it may simply shift that congestion to an adjacent jurisdiction. Yet numerous factors could make coordination difficult. For example, the need for coordination is especially critical if states adopt separate tolling legislation with varying, perhaps incompatible, provisions and begin tolling. 
Other potential challenges include ensuring the interoperability of toll collection facilities when toll proposals involve more than one state, addressing differences in state toll legislation, and mitigating geographic inequities by fairly apportioning the anticipated benefits and disadvantages of toll projects among all stakeholders. Oregon’s experience illustrates how some of these issues might present challenges for transportation officials who are attempting to advance interstate toll projects. Oregon officials cited differing statutory authorities between Oregon and neighboring Washington as a potential coordination issue. In Oregon, transportation officials have the authority to enter into PPPs when advancing a tolling approach, while their counterparts in Washington do not yet have the authority to do so if proposals for partnerships are unsolicited. As a result, stakeholders involved with the Columbia River Crossing Project on Interstate 5, which Oregon officials are attempting to pursue as a toll project with a private partner, are seeking to promote legislation in both states that will provide explicit authorization to advance the project. The Oregon officials also noted that coordination would be necessary to address geographic inequities that might arise from the project, explaining that more of the toll revenues could come from Washington, since motorists there commute into Portland. Transportation officials in both states will need to take this into account as they work towards an equitable apportioning of the project’s costs and benefits. As shown in figure 11, our review of state practices in implementing tolling suggests three broad strategies that can help transportation officials address challenges to its adoption and implementation. These strategies have both short-term and long-term relevance for states as they consider new transportation finance options to supplement traditional approaches. 
Transportation officials in states that are currently implementing or considering tolling as a means to raise revenue or mitigate congestion can consider these strategies in the short term to build support and smooth implementation of the tolling approaches under consideration. In the longer term, transportation officials in states that are not currently tolling, but choose to begin to do so, can consider these strategies to build support for tolling. The first strategy that transportation officials can consider involves developing an institutional framework that facilitates tolling. In developing such a framework, transportation officials can consider building support for a tolling approach with the public and decision makers in the state and securing enabling tolling legislation. (See fig. 12.) Developing such a framework through these two means involves identifying and articulating the goals to be achieved by the tolling approach in the context of larger state policy goals. Building support. Building support for a tolling approach includes two interim steps—establishing a rationale for tolling and defining the underlying motivations for its use. Together, these steps provide a basis for gaining political and public support before seeking and securing adequate tolling legislation. Establishing a solid rationale for tolling involves linking the specific reasons, or goals, for tolling with state policy goals for transportation. For example, linking a tolling goal, such as managing congestion, to a broader state goal, such as using existing infrastructure more efficiently, can provide a basis for its use. Similarly, a tolling goal of supplementing transportation funding with new revenues could contribute to a broad state policy goal of funding investment in transportation systems with revenues generated directly from users. 
Articulating the underlying motivations for using a tolling approach can also help transportation officials build support for and accomplish broader transportation goals and tailor tolling goals to accomplish those ends. For example, consideration of a tolling approach might be motivated by a desire to accomplish other goals, such as finding a replacement for the gas tax or attracting private investment for transportation. Irrespective of the motivations that guide the development of the goals, advocates of tolling have to make a compelling case for its use to build public acceptance for it and make it politically viable. Goal setting can help transportation officials articulate the motivations for using the approach, identify the goals to be achieved by its use, and demonstrate how the tolling goals will tie into broader state goals. Such a process can help decision makers formulate a transparent and comprehensive rationale for the use of tolling and gain public and political support for it. Secure legislative authority. Securing tolling legislation is the next step in developing an institutional framework for tolling. Although there are common reasons for tolling, the form legislation takes in each state often depends on the motivations for using the approach and ultimately the goals to be achieved through it. Our review of legislative efforts in Texas, Virginia, Oregon, and Florida illustrates how legislation evolved in response to different motivations and tolling goals. Following are some of these legislative efforts: Leveraging transportation dollars. Texas enacted legislation that provided for a broader application of tolling than currently existed and established a funding mechanism that supported a broader use of tolls in the state’s transportation system. This legislation facilitates tolling by realizing two goals—to expand the use of tolling and to leverage tax dollars by allowing state highway funds to be combined with other funds to build toll roads. 
This combination of funds makes toll roads more feasible, since the entire cost of the project does not have to be repaid with tolls. Virginia’s Public-Private Transportation Act of 1995 (PPTA) allows qualifying local governments and certain other political entities to enter into agreements authorizing private entities to acquire, construct, improve, maintain, and operate qualifying transportation facilities. The public entities may either solicit or accept unsolicited proposals from private sources. Private-sector sponsorship and investment in transportation projects could help states realize both an established tolling goal to accelerate project delivery and a goal to leverage tax dollars by securing private investment in transportation projects. Operating like a business. In some cases, there is a motivation to “reinvent government” by operating in a more businesslike manner. Public agencies of all types have pursued innovation and best practices found in the private sector to improve the cost-effectiveness and timeliness of product delivery. A goal that embodies these motivations can take many forms in legislation. In Florida, for example, legislation was passed in 2002 that turned the Florida Turnpike, operated by the Florida DOT, into a business organization as a way to preserve, improve, and expand the turnpike system. State decision makers were interested in operating the turnpike as a business for the state and employing private-sector methods in the areas of management, finance, organization, and operation. The goals for the enterprise are to increase revenues, expand capital program capabilities, and improve customer service. Transitioning to a new system of transportation finance. The sustainability of the current financing system has been called into question, and as we have reported, a fundamental reexamination of the present system will be necessary to increase the cost-effectiveness of spending and to mitigate congestion.
Some transportation experts believe that shifting to a fee structure that more directly charges vehicle operators for their actual use of roads would improve the operation of the road system and better target investment. For example, Oregon’s efforts to explore mileage charges provide some insights into how legislation can be developed to carry out such an ambitious goal. A road user fee proposal, passed by the state legislature in 2001, created a user fee task force to design a method of charging drivers for their use of the state’s roads as an alternative to the current system of gas taxes. The task force proposed the eventual imposition of a mileage fee in place of existing gas taxes and pilot testing for the mileage fee as the first step toward implementation. An institutional framework, such as the framework under development in Oregon, can help states that are seeking to test or implement new methods of highway financing to realize such goals. The second strategy that can facilitate the use of tolling involves implementing two interrelated and critical components: (1) providing leadership to build support for and advance individual projects and (2) addressing challenges to tolling in project design. (See fig. 13.) We have found that having a strong advocate or advocates—committed both to building support for projects and to ensuring that the projects move forward—is crucial to the success of a project. A corollary to providing committed leadership is ensuring that leaders endorse those projects that most effectively address challenges to tolling in project design. Providing leadership. Although leadership can take different forms, our review revealed that a strong advocate can help build support for a toll project. Transportation agency representatives or political leaders are likely candidates to move a project to public acceptance. 
For example, in Texas, the Governor and key legislators took the lead in developing and supporting initiatives that would facilitate the use of tolling to finance highway construction. Their efforts led to the enactment of legislation that enabled the state DOT to invest in toll projects. In Indiana, the Governor and the DOT Commissioner have supported tolling as an approach to finance transportation projects by promoting it in the media and in the legislature. However, in some instances, public distrust of political and governmental agencies may require a leader to emerge from another arena. For example, in Minnesota, a task force of state and local officials, citizens, and business leaders was convened in 2001 to explore a range of road pricing options, including the conversion of HOV lanes to HOT lanes, and make recommendations to elected officials. Since tolling had been fairly controversial in the past, decision makers believed that a task force would provide a more credible and independent voice to the general public. Ultimately, the task force supported HOV to HOT conversions, and with the Governor’s support and the passage of legislation authorizing the conversion, the Interstate 394 HOT lanes project was implemented. As a spokesperson for a project, a leader can explain to the public how tolling will address a state’s particular transportation situation. Through the communication of essential ideas and values that a toll proposal encompasses, support for the project can be solidified. Communicating the benefits that tolling can provide for motorists, such as increased efficiency, travel time savings, and choices about when and where to drive, could increase the likelihood of buy-in from the public and political leaders. 
For example, after examining congestion pricing options in Minnesota, a task force of state legislators, mayors, and business, environmental, and transportation leaders recommended that the state should proceed with a demonstration project. This led to the passage of legislation in 2003 supporting the conversion of HOV lanes on Interstate 394 into optional toll lanes, which would allow solo drivers to access the HOV lanes for a fee. With the help of a communications consultant, a project team led by the University of Minnesota’s Hubert H. Humphrey Institute worked to address public concerns and communicate the benefits of the project to the general public. The primary benefits of the project that were conveyed included free access and priority for carpools and bus users, premium speeds in express lanes maintained by tolls that vary with demand, and access to the express lanes for single-occupancy vehicles willing to pay a toll. Surveys conducted prior to project implementation revealed that 69 percent of those surveyed were aware of and understood the purpose of the project, and 64 percent believed that allowing single-occupancy vehicles to use the carpool lanes by paying a toll was a good idea. A leader can also stress that tolls can help make up for shortfalls in public funds, allowing needed highway improvements to be completed sooner. According to some transportation officials, this point is particularly relevant when the public does not share a state transportation leader’s view of the state’s needs, or of the challenges associated with addressing those needs within the current fiscal environment, or both. One effective way to communicate the benefits of tolling is through organized public education, outreach, and marketing efforts. Through such efforts, the public can be informed about the transportation situation in the state and the various options that are available to address transportation needs.
For example, in California, strong political figures, as well as state and local officials, acted as champions for individual toll projects. Seeking to maximize the efficiency of the transportation system through congestion pricing, these leaders and officials promoted two congestion pricing projects—the Interstate 15 project and the State Route 91 project—using public education and outreach to inform the public about the objectives of the projects and to demonstrate how the objectives would be achieved. Addressing challenges. In identifying toll projects to promote, responsible leaders will likely be interested in projects that mitigate the challenges to tolling. Therefore, addressing concerns about double taxation, inequity, diversion, and coordination in project design can help transportation officials build support for toll projects and secure committed advocates for the projects. When considering a tolling approach, transportation officials can identify ways to effectively address identified challenges and gain a better understanding of the potential impact of the approach through data collection and analysis. Understanding the potential effects of a toll project on traffic flows, specific groups, business activity, and commercial transportation can be particularly useful to transportation officials as they build measures into the project’s design to address identified challenges. Table 2 includes examples of questions and data needs that transportation officials might consider as part of their data collection and analysis. To address general objections to tolling, transportation officials can also consider potential arguments that might be raised on the grounds of double taxation and inequity and think about ways to address these arguments. Addressing these types of concerns is complex because the fairness of shifting to tolling depends on the fairness of the existing system of finance and on how it would be changed by a shift. 
It can be argued, particularly in states where sales and property taxes are important sources of financing for the transportation system, that the existing system is not very equitable. In contrast, a tolling approach raises revenues directly from those users willing to pay for the service. Furthermore, the economics literature suggests that concerns about inequity can be mitigated to some degree if revenues are distributed in a way that addresses those concerns. Transportation officials can address concerns about double taxation and inequity during project design as a way to counter potential opposition on these grounds. Setting goals for a project that reflect its intended purpose and addressing any key challenge that could affect the achievement of the goals can help transportation officials directly respond to the concerns. For example, if the project goal is to use a tolling approach to relieve congestion, transportation officials could set the toll to reflect the external costs associated with peak period use of the road, and the toll revenues could be dedicated to maintaining, operating, and adding capacity to the facility. This approach might convince users that the toll is not just another tax that would be used for other purposes. In contrast, if the project goal is to address inequities, revenues could be distributed quite differently. Revenues could be distributed to disadvantaged groups in the form of tax rebates or improvements in roads and transit in certain areas. If businesses in specific areas are adversely affected by a project, toll revenues might be used to improve transportation services in those regions. For the Interstate 394 HOT lanes project, state decision makers established the project goals of improving efficiency and maintaining free-flow speeds for transit and carpools using the converted HOV lanes. These goals led to decisions on how the toll revenues would be distributed. 
The law requires that half of the excess revenues generated from HOT lane facilities be used to improve and maintain transit service. Addressing the coordination issues involved in designing regional and multistate projects is perhaps more daunting. No ideal institutional mechanism appears to be available for managing a regional program; nevertheless, some states have created new institutions to address interstate coordination issues. For example, Oregon and Washington formed a bistate task force to coordinate planning for improvements that cross state borders. The task force includes officials from both state governments, representatives from the affected metropolitan planning organizations (MPO), members of the business community, and residents of each state. The task force is charged with considering all modes of transportation that could potentially ease congestion and improve capacity on Columbia River crossings. The two states are jointly conducting environmental impact studies on highway expansion and transit improvements. The Oregon and Washington DOTs, the Portland and Vancouver MPOs, and the transit authorities from both states are jointly leading this study on the impact of capacity enhancement. Involving officials from both states in evaluating the project can help ensure that projects are equitable and effective in addressing the needs of both states. When proposing a tolling approach, transportation officials should consider promoting one that will produce tangible benefits to users while justifying both the costs of the project and the fees that users will be required to pay for the service. (See fig. 14.) The prospect of such benefits increases the likelihood of the project’s acceptance and can help allay general objections to tolling. 
Although tolling can take different forms and decisions about its use are state specific, transportation experts have noted that projects that use congestion pricing offer predictability and choice to the user and may be less likely to arouse fierce opposition than projects that offer no new benefits or choice. For example, HOT lane projects, which include both priced and free lanes, offer the benefit of faster trip times for a price in HOT lanes and the choice of a “free,” but probably slower, trip in general purpose lanes. Pricing the entire facility might result in more efficient rationing of limited space on congested roads, but congestion tolls on entire facilities or networks tend to meet with resistance despite their economic efficiency. HOT lanes, on the other hand, may be less likely to encounter resistance because they offer premium service for those willing to pay the fee. While actual experience with road pricing in the United States is still fairly limited, proponents of HOT lanes cite several benefits as follows: First, according to reports and studies issued by FHWA, the Transportation Research Board, the Reason Foundation, and the Brookings Institution, they provide a premium service for a fee to those travelers who have a special need and are willing to pay the fee. Through variable pricing, traffic flows freely even during the height of rush hours. The use of price and occupancy restrictions to manage the number of vehicles traveling on them enables HOT lanes to maintain volumes consistent with uncongested levels of service. Second, studies and reports issued by FHWA and the Reason Foundation note that HOT lanes reduce traffic congestion in the general-purpose lanes by diverting some solo drivers to the HOT lanes, thereby benefiting those drivers who use conventional lanes. Third, according to FHWA and others, HOT lanes can make better use of underutilized carpool (HOV) lanes, thereby alleviating political pressure to decommission them. 
HOT lanes may provide an opportunity to improve the efficiency of existing or newly built HOV lanes by filling excess capacity that would not otherwise be used. At the same time, HOT lanes continue to serve as HOV lanes for carpools and buses. Finally, reports and studies issued by FHWA and the Reason Foundation note that HOT lanes generate revenue for transportation improvements. Tolls can generate revenue for highway and transit improvements, such as Bus Rapid Transit. HOT lanes have been implemented on Interstate 15 in San Diego, State Route 91 in Southern California, the Katy Freeway and U.S. Route 290 in Houston, and Interstate 394 in Minneapolis. These cases illustrate how transportation officials have advanced projects seeking to achieve the potential benefits that may result from the approach. For example, to guarantee free-flowing traffic, toll prices on the Interstate 15 HOT lanes project are set dynamically, changing every 6 minutes to keep traffic flowing freely in the HOT lanes. In providing motorists with choice and premium services, the State Route 91 Express Lanes offer a level of emergency and safety surveillance such that, according to surveys conducted by the private firm operating the toll facility, some drivers choose to pay to use the toll lanes even when there is no congestion on the adjacent free lanes. To optimize the use of existing infrastructure, more productivity was sought on the Katy Freeway, where HOVs are defined as cars with three or more people during certain peak hours. The Katy Freeway QuickRide program allows cars with two persons to use the HOV lanes if they pay a toll. Daily use by paying users has been between 150 and 200 vehicles for peak periods, and peak hour travelers using the facility save an average of 18 minutes compared with travelers on the nonpriced lanes.
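The demand-responsive pricing described above, in which tolls are periodically raised or lowered to hold traffic at free-flow volumes, can be sketched in a few lines. This is a hypothetical illustration only: the target volume, price floor and ceiling, step size, and function names below are invented for the example and are not the actual parameters used on Interstate 15 or any other facility.

```python
# Hypothetical sketch of demand-responsive toll adjustment, in the spirit of
# the Interstate 15 HOT lanes' practice of updating prices every few minutes.
# All constants here are illustrative assumptions, not real facility data.

TARGET_VPH = 1600      # assumed target vehicles/hour/lane for free flow
MIN_TOLL = 0.50        # illustrative floor price, in dollars
MAX_TOLL = 8.00        # illustrative ceiling price, in dollars
STEP = 0.25            # illustrative price change per adjustment cycle

def next_toll(current_toll: float, observed_vph: float) -> float:
    """Raise the toll when demand exceeds the free-flow target;
    lower it when lane capacity is going unused."""
    if observed_vph > TARGET_VPH:
        current_toll += STEP   # discourage additional entries
    elif observed_vph < 0.8 * TARGET_VPH:
        current_toll -= STEP   # attract more paying solo drivers
    # Clamp the price to the allowed range.
    return min(MAX_TOLL, max(MIN_TOLL, current_toll))

# One simulated rush hour: volume climbs, the toll follows, then both recede.
toll = 1.00
for vph in [1200, 1500, 1700, 1900, 1800, 1400, 1100]:
    toll = next_toll(toll, vph)
```

The feedback rule is deliberately simple; a real facility would also account for speeds, incident detection, and posted-price constraints, but the core idea of pricing to a target level of service is the same.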
Finally, linking the conversion of the HOV lanes to transit to increase mobility and equity was taken into account on the Interstate 394 and Interstate 15 projects. Toll revenues generated on the Interstate 394 HOT lanes are designated for facility and transit improvement and a large portion of surplus revenues on Interstate 15 are used for new bus service. As congestion threatens the nation’s mobility at a time when motor fuel taxes—the principal source of funding for highway improvements—have not kept up with rising costs, federal and state policy has generally been not to increase motor fuel taxes, and state and local decision makers are increasingly looking to a range of alternative mechanisms, including tolling, to advance their surface transportation programs. Over half the states have either adopted tolling or are seriously considering tolling—and this number may increase. A tolling approach can, under the right circumstances, be an attractive choice to state or local governments because of the range of potential benefits—generating new revenues, managing congestion, financing new capacity—that it may provide. But these potential benefits come only by honestly and forthrightly addressing the challenges that a tolling approach presents. State and local governments may be able to address these challenges by pursuing strategies that focus on developing an institutional framework that facilitates tolling, by demonstrating leadership, and by pursuing toll projects that provide tangible benefits to users. While perhaps not applicable to every state to the same degree or in the same way, these strategies form a basis for overcoming potential impediments to tolling and developing a meaningful and effective tolling approach that best suits the environment in each state. 
In the twenty-first century, demographic trends will drive mandatory federal spending commitments and potentially overwhelm the ability of the federal government to deliver and grow its discretionary programs. This looming crisis requires a fundamental reexamination of existing government programs and commitments, and state and local governments will be challenged to consider new ways of delivering their programs. Regardless of the demand for highway improvements, sustained, long-term, large-scale increases in federal highway grants and state and local spending seem unlikely. In this context, a tolling approach is more than just finding new sources of money. Should states choose to undertake it, a tolling approach has the potential to promote efficiency in the use of infrastructure, allocate costs to users and capture revenue from beneficiaries, stimulate private financing and investment, and provide cost-effective solutions to mobility challenges if viewed as fair and equitable by the public. We provided a draft of this report to the Department of Transportation for review and comment. Officials from the Department indicated that they generally agreed with the report and provided technical clarifications, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to congressional committees with responsibilities for transportation issues; the Secretary of Transportation; and the Administrator, Federal Highway Administration. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at heckerj@gao.gov or (202) 512-2834.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this report were to examine (1) the promise of tolling to enhance mobility and finance highway transportation, (2) the extent to which tolling is being used in the United States and the reasons states are using or not using this approach, (3) the challenges states face in implementing tolling, and (4) strategies that can be used to help states address the challenges to tolling. We noted where federal programs have played a role in state tolling decisions and projects, but we did not evaluate the effectiveness of those programs. To examine the promise of tolling to enhance mobility and finance highway transportation, we reviewed reports and studies issued by federal agencies and academia, as well as articles from relevant trade journals; relied on perspectives gained from our past work on transportation finance; and analyzed relevant studies and reports issued by transportation experts. To identify the issues related to the transportation system in terms of funding and mobility, we analyzed data on population patterns and growth from U.S. Census reports and vehicle miles traveled and motor fuel tax trends from the Federal Highway Administration’s (FHWA) Highway Statistics reports for 1982 to 2004. We also analyzed data from the 2005 Urban Mobility Report to determine congestion levels and congestion costs for selected cities in the United States. To supplement the information obtained through our literature review, we interviewed officials of the American Automobile Association; International Bridge, Tunnel and Turnpike Association; the American Association of State Highway and Transportation Officials; and the Environmental Defense Fund. 
To determine the extent to which tolling is being used in the United States, we designed and administered an Internet survey of state department of transportation (DOT) officials and performed a correlation analysis to examine the extent to which state financial and demographic characteristics are associated with their tolling status. (For more information about the correlation analysis, refer to app. II.) Our review focused on toll roads and, therefore, did not include toll bridges and tunnels. Survey. The questionnaire asked about each state’s current and planned toll road facilities. We sent the questionnaire to the directors of state DOTs in 49 states and Washington, D.C. We did not send a questionnaire to the Louisiana DOT because we administered our survey only a few weeks after Hurricane Katrina struck New Orleans and the Gulf Coast. To minimize nonsampling error, such as measurement errors that can be introduced when respondents do not understand questions or when they do not have information to answer a particular question, we undertook several quality assurance steps. Our social science survey specialists designed draft questionnaires and conducted pretests with state DOT officials in four states. During these pretests, we assessed the extent to which respondents interpreted questions and response categories consistently, the time respondents needed to complete the survey, and the extent to which respondents had the information needed to answer the survey questions. Using the results of these pretests, we revised the questionnaire. We administered the survey to the directors of state DOTs via the Internet during September and October 2005, e-mailing the directors a Web link to our questionnaire and requesting that they or their designees complete it. We received responses from 49 states and Washington, D.C.—a 100 percent response rate. We analyzed the data using statistical software.
We compared the responses to key survey questions with information obtained from our interviews with state DOT officials and from state applications to FHWA’s tolling pilot program. In four instances, data from these sources were inconsistent. We contacted DOT officials in these states to resolve the inconsistencies and adjusted the survey results accordingly. Semistructured interviews. To determine the reasons states use or do not use tolling, the challenges to tolling, and the strategies that have been used to address the challenges, we conducted semistructured interviews with state transportation officials from all states except Louisiana and interviewed stakeholders in the six states that we visited. We did not gather information directly from the public. We developed a set of questions to ask in the semistructured interviews to gain more detailed information on states’ reasons for tolling or not tolling and the challenges states face in tolling. After visiting 6 states and interviewing transportation officials there, we conducted semistructured interviews with officials from the remaining states, excluding Louisiana. We did not interview transportation officials in Louisiana because our semistructured interviews were conducted shortly after Hurricane Katrina struck New Orleans and the Gulf Coast. To determine the appropriate state official to interview in each state, we relied on information from FHWA Division Administrators for the respective states. After gathering this information, we contacted the state officials and conducted our interviews. We analyzed the data from the semistructured interviews to identify major themes. Site visits.
To supplement information from our survey and semistructured interviews, we visited six states that were in various stages of planning or constructing diverse types of toll projects. We selected the states for their diversity in terms of geography, transportation needs, and tolling plans. The states were Minnesota, Mississippi, Missouri, Oregon, Texas, and Virginia. We visited those six states to obtain more detailed information on the challenges states are encountering and the strategies states employ or are considering employing to toll. We judgmentally selected four states where tolling was either planned or under way and two states where tolling had been proposed and rejected by a vote of either the citizens or the legislature. During our site visits, we interviewed state, local, and FHWA officials. To identify strategies that can be used to address the challenges to tolling, we analyzed the results of our review on tolling efforts and built on the perspectives gained from our past work on federal investment strategies. We also analyzed reports and studies issued by transportation experts and academia on finance reform to identify broad strategies that can be used to help transportation officials adopt and implement a tolling approach. We performed our work from June 2005 through June 2006 in accordance with generally accepted government auditing standards. To identify state characteristics that are linked with states’ decisions to toll roads, we performed a correlation analysis that examined the relationship between those decisions and various state demographic and financial characteristics. These characteristics included, but were not limited to, population; per-capita income; gross state product; vehicle miles traveled; capital expenditures on highways; and local, state, and federal highway trust fund appropriations. 
To perform this analysis, we updated the data that we had collected from the Federal Highway Administration (FHWA) and the Bureau of Economic Analysis (BEA) for our previous reports and converted them to inflation-adjusted 2004 dollars using BEA’s chain-type price index for gross domestic product (GDP), as well as its state highway and streets chain-type price index. For those characteristics that represented measures of change, we used the changes in these factors from 1990 to 2000 to be consistent with the years when the U.S. Census of Population and Housing was conducted and, for those characteristics that represented measures of levels, we used data from 2002. We chose 2002 because data for some characteristics were not available for more recent years. We divided states into two groups, tolling and nontolling, based on information gathered from our state survey and FHWA documentation. We considered the association between tolling status and each demographic or financial characteristic singly and did not control for the effects of other characteristics (as we would do in a multivariate analysis). For this reason, the results of our correlation analysis indicate a simple statistical relationship between tolling status and a study characteristic and do not imply causality. Interactions may be more complex when multiple characteristics are simultaneously associated with tolling status. In addition, our results may be sensitive to how we defined tolling status. Although certain characteristics in a state’s finances and tax policies might be related to financial need, our correlation analysis found only limited relationships between various state demographic and financial measures and whether states are or are not planning toll roads. (See table 3.)
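The single-characteristic correlation described above amounts to a Pearson (point-biserial) correlation between a binary tolling indicator and one continuous state characteristic. A minimal sketch follows; the eight-state data are invented for illustration and are not the actual figures GAO analyzed, and `pearson_r` is an assumed helper name, not a function from the report.

```python
# Illustrative sketch of a point-biserial correlation between a binary
# tolling indicator (1 = planning toll roads, 0 = not) and one state
# characteristic, analyzed singly as in the report's approach.
# The data below are hypothetical, not the actual state figures.
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical eight-state sample: tolling status vs. VMT growth (percent).
tolling_status = [1, 1, 1, 0, 1, 0, 0, 0]
vmt_growth = [34, 28, 31, 18, 25, 20, 15, 22]

r = pearson_r(tolling_status, vmt_growth)
# A positive r here would mirror the direction of the report's finding that
# faster-growing states tend to be planning toll roads; as the report notes,
# such a bivariate correlation does not imply causality.
```

Because each characteristic is correlated with tolling status one at a time, the result is a simple bivariate association; a multivariate analysis would be needed to control for the other characteristics simultaneously.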
For example, we found the following: There is wide variation in state motor fuel tax rates among the states, ranging, as discussed earlier, from 7.5 cents to 28.1 cents per gallon in 2002, and we investigated whether state motor fuel tax rates are correlated with decisions to toll. While a slight inverse relationship exists between a state’s decision to toll and the level of its motor fuel taxes, the slightness of this relationship suggests that states planning toll roads are not much more likely to be the ones with lower motor fuel tax rates than other states. While state incomes vary greatly, a state with higher motor fuel tax rates is also more likely to have higher fuel tax revenues as a percentage of its gross state product than states with lower motor fuel tax rates. As with fuel tax rates, a slight inverse relationship exists between a state’s decision to toll and the level of its fuel tax revenues as a percentage of its gross state product. However, the slightness of this relationship suggests that states planning toll roads are not much more likely to be the ones with lower fuel tax revenues as a percentage of their gross state product than other states. The extent to which motor fuel taxes are disbursed to nontransportation uses could contribute to what state officials characterized as general shortfalls in highway funding. The relationship between states planning toll roads and the use of motor-fuel tax revenue is only slight, suggesting that states planning toll roads are not much more likely to have more of their fuel tax revenues used for nontransportation programs than other states. Although there appears to be little relationship between state finance and tax policy characteristics and tolling, our analysis indicates that there are some other factors that are related to states' decisions on tolling. 
For example, both the size of the state, whether measured by population or by vehicle miles traveled (VMT), and whether it is growing rapidly, again measured by population or VMT growth, are directly related to states’ decisions to toll. These relationships are consistent with statements made by state transportation officials on the use of tolling to fund highways due to increasing demand for highway travel. In addition, our analysis revealed a relationship between federal funding and a state’s decision to toll. Each state collects federal motor fuel taxes that are deposited into the Highway Trust Fund and receives grants through the federal-aid highway program according to formulas specified in law. The states that are planning toll roads are moderately associated with the federal-aid “donor states”—those states that contribute more to the Highway Trust Fund than they receive in federal highway grants. Thus, donor states are statistically more likely to be planning toll roads than donee states—those states that receive more in grants than they collect.

Question 1: Are there any toll road facilities in your state? Please do not count any tolled bridges or tunnels as toll roads.
1. Yes
2. No (Skip to question 10.)

Question 2: Are any of these facilities new roads that were built on new alignments?
1. Yes
2. No

Question 3: Were any of these facilities previously untolled?
1. Yes
2. No

Question 4: Are any of these facilities new lanes added to roadways that were previously untolled?
1. Yes
2. No

Question 5: Are any of these facilities HOT lanes (in which single occupancy vehicles can gain access to HOV lanes by paying a toll)?
1. Yes
2. No

Question 6: Do any of these facilities charge tolls that vary by time of day?
1. Yes
2. No

Question 7: In what year did the first toll road facility in your state open to traffic? (Please do not consider facilities that opened before 1938.)
Question 8: In what year did the most recent toll road facility in your state open to traffic? (Please do not consider facilities that opened before 1938.)

Question 9: Which of the following agencies participate in managing the toll road facilities in your state?
1. State Department of Transportation
2. Public toll authority
3. Other state agency
4. Local or regional government agency
5. Private entity

Question 10: Are there plans in your state to build any toll road facilities for which an Environmental Review and a Record of Decision have been completed?
1. Yes
2. No (Skip to question 19.)

In addition to the individual named above, Steve Cohen, Assistant Director; Mark Braza; Jay Cherlow; Bess Eisenstadt; Simon Galed; Moses Garcia; Bert Japikse; Terence Lam; Liz McNally; and Don Watson made key contributions to this report. | Congestion is increasing rapidly across the nation and freight traffic is expected to almost double in 20 years. In many places, decision makers cannot simply build their way out of congestion, and traditional revenue sources may not be sustainable. As the baby boom generation retires and the costs of federal entitlement programs rise, sustained, large-scale increases in federal highway grants seem unlikely. To provide the robust growth that many transportation advocates believe is required to meet the nation's mobility needs, state and local decision makers in virtually all states are seeking alternative funding approaches. Tolling (charging a fee for the use of a highway facility) provides a set of approaches that are increasingly receiving closer attention and consideration.
This report examines tolling from a number of perspectives, namely: (1) the promise of tolling to enhance mobility and finance highway transportation, (2) the extent to which tolling is being used and the reasons states are using or not using this approach, (3) the challenges states face in implementing tolling, and (4) strategies that can be used to help states address tolling challenges. GAO is not making any recommendations. GAO provided a draft of this report to U.S. Department of Transportation (DOT) officials for comment. DOT officials generally agreed with the information provided. Tolling has promise as an approach to enhance mobility and finance transportation. Tolling can potentially enhance mobility by reducing congestion and the demand for roads when tolls vary according to congestion to maintain a predetermined level of service. Such tolls can create incentives for drivers to avoid driving alone in congested conditions when making driving decisions. In response, drivers may choose to share rides, use public transportation, travel at less congested times, or travel on less congested routes, if available. Tolling also has the potential to provide new revenues, promote more effective investment strategies, and better target spending for new and expanded capacity. Tolling can also potentially leverage existing revenue sources by increasing private-sector participation and investment. Over half of the states in the nation have or are planning toll roads to respond to what officials describe as shortfalls in transportation funding, to finance new highway capacity, and to manage road congestion. 
While the number of states that are tolling or plan to toll has grown since the completion of the Interstate Highway System, and many states currently have major new capacity projects under way, many states report no current plans to introduce tolling because the need for new capacity does not exist, the approach would not generate sufficient revenues, or they have made other choices. According to state transportation officials who were interviewed as part of GAO's nationwide review, substantive challenges exist to implementing tolling. For example, securing public and political support can prove difficult when the public and political leaders argue that tolling is a form of double taxation, is unreasonable because tolls do not usually cover the full costs of projects, and is unfair to certain groups. Other challenges include obtaining sufficient statutory authority to toll, adequately addressing the traffic diversion that might result when motorists seek to avoid toll facilities, and coordinating with other states or jurisdictions on tolling projects. GAO's review of how states implement tolling suggests three strategies that can help facilitate tolling. First, some states have developed policies and laws that facilitate tolling. For example, Texas enacted legislation that enables transportation officials to expand tolling in the state and leverage tax dollars by allowing state highway funds to be combined with other funds. Second, states that have successfully advanced tolling projects have provided strong leadership to advocate and build support for specific projects. In Minnesota, a task force was convened to explore tolling and ultimately supported and recommended a tolling project. Finally, tolling approaches that provided tangible benefits appear to be more likely to be accepted than projects that offer no new tangible benefits or choice to users. 
For example, in California, toll prices on the Interstate 15 toll facility are set to keep traffic flowing freely in the toll lanes.
Medicare provides health insurance for nearly all elderly Americans (those aged 65 and older) and certain of the nation’s disabled. Most Medicare beneficiaries receive services through the fee-for-service sector. However, as of April 1998, roughly 15 percent of Medicare’s beneficiaries—up from about 7 percent in mid-1995—were enrolled in risk contract HMOs. Of these, about 50,000 beneficiaries are classified as institutionalized each month. HCFA, an agency within HHS, administers the Medicare program and is responsible for ensuring that Medicare HMOs comply with data reporting, beneficiary protection, and care delivery requirements. HCFA seeks to ensure that HMOs meet financial solvency and enrollment requirements, do not earn excessive profits, operate internal quality assurance systems, and establish grievance and appeals procedures. HCFA also implements the capitation rate formula authorized by legislation and calculates payments for each HMO. Further, HCFA is responsible for monitoring HMOs to ensure that all Medicare requirements are met, including that HMOs’ reports of beneficiaries’ institutional status are accurate. HCFA is also responsible for ensuring that corrective actions are taken if overpayments, underpayments, or other errors are discovered. HCFA established a national policy in 1994 permitting HMOs to seek retroactive payment adjustments—for either overpayments or underpayments—for the prior 3-year period. About 1 percent of Medicare’s roughly 5 million HMO enrollees are classified as living in institutions, as compared with about 7 percent of beneficiaries covered under Medicare fee-for-service. In 1997, Medicare paid $197 million more to HMOs because of their enrollees’ institutional status. The distribution of enrollees with institutional status varies among HMOs and among geographic regions. In December 1997, most institutional enrollment rates for individual HMOs ranged from 0 to about 10 percent, although one outlier exceeded 44 percent. 
(See fig. 1.) Medicare’s risk contract program was designed to save Medicare money by paying HMOs 95 percent of the amount Medicare estimated it would spend on similar beneficiaries in the fee-for-service sector. It was believed that HMOs would have lower costs because of their greater emphasis on preventive health services and their incentive to eliminate unnecessary services. The base capitation rate in each county—the amount an HMO receives each month for enrolling an average-cost beneficiary—is determined by law, largely on the basis of Medicare’s per capita fee-for-service spending. An HMO’s monthly capitation payment is adjusted for the expected care costs of each individual enrolled. To make the adjustment, HCFA assigns weights to defined risk classes of beneficiaries on the basis of age; sex; and disability, Medicaid, institutional, and employment status. The weights are expressed as ratios of the national average per capita costs for each risk class relative to the overall national average. For example, in 1997, compared with the national average weight set at 1.0, the weight assigned to the risk class for men aged 85 or older with institutional status was 2.25. Thus, HCFA’s estimate was that institutionalized men aged 85 and older would have health care costs that were 2.25 times the costs for the average beneficiary. HCFA adjusts capitation rates for most institutionalized beneficiaries upward to reflect the expected differential. The additional monthly payment amount associated with institutional status can be substantial. For example, in 1998 a Los Angeles HMO receives $618 more per month for a 65-year-old man living in an institution than for one who is not living in an institution ($1,071 instead of $453). (See fig. 2 for a comparison of monthly HMO payments in Los Angeles for institutional and noninstitutional enrollees.) 
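The payment formula just described (monthly payment equals the county base rate scaled by the enrollee's risk-class weight) can be illustrated with a minimal sketch. The function and the $500 base rate below are hypothetical; the 2.25 weight and the Los Angeles amounts of $453 and $1,071 come from the figures cited above.

```python
def capitation_payment(base_rate: float, risk_weight: float) -> float:
    """Monthly HMO payment for one enrollee: the county base rate scaled by
    the weight HCFA assigns to the enrollee's demographic risk class."""
    return base_rate * risk_weight

# A beneficiary in the average risk class (weight 1.0) versus one in the
# 1997 risk class for institutionalized men aged 85 or older (weight 2.25).
# The $500 base rate is illustrative only.
print(capitation_payment(500.0, 1.0))   # 500.0
print(capitation_payment(500.0, 2.25))  # 1125.0

# The 1998 Los Angeles differential cited in the text:
print(1071 - 453)  # 618 -- the extra monthly payment for institutional status
```

The sketch shows why misclassification matters financially: with a multiplicative weight, any overstatement of institutional status scales the entire monthly payment.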
In 2000, HCFA is required by the BBA to implement a new risk adjustment methodology that uses direct indicators of health status—in addition to other demographic adjusters such as age and sex—to better reflect differences in individuals’ expected health care costs. Until recently, HCFA has defined the term “institution” to include skilled nursing facilities, nursing homes, sanitoriums, rest homes, convalescent homes, long-term care hospitals, domiciliary homes, swing-bed facilities, and intermediate-care facilities. HCFA based the higher payment rates for institutional status on historical evidence that beneficiaries living in these types of facilities had greater medical needs and higher medical costs than those who lived in the community. These types of facilities, however, have evolved to serve individuals with varied health care needs. Thus, HCFA’s broad criteria permitted an HMO to classify virtually any residential facility as an institution for payment purposes. Consequently, HMOs classified as institutions many facilities housing seniors whose expected health needs were at or below those of the average Medicare beneficiary. HMOs have had an incentive to broadly interpret HCFA’s institution criteria. For example, if an HMO classified a residential facility as an institution, the HMO would receive a much higher capitation payment—up to $766 more per month—for every enrollee living in that facility. In fact, the increased institutional payments to one relatively small HMO amounted to about an additional $135,000 for 1 month. For HMOs with larger institutionalized enrollment, the annual additional capitation payments could be on the order of $4 million to $9 million. Medicare’s payment system is based on the assumption that HMO enrollees living in institutions generate above-average health care costs. However, some facilities classified by HMOs as institutions clearly did not serve seniors with serious health problems.
For example, among the designated institutions we visited, one (called by its manager an “independent living facility”) provided private apartments, meals in a communal setting, and field trips to tourist and shopping sites. About 12 percent of the residents owned and drove their own cars. The facility did not provide any medical care. Another facility we visited—a retirement center—was characterized by its marketing brochure as “a clean, comfortable home for those who do not need nursing care.” This facility employed a full-time activity director and housed several residents who drove their own cars. Moreover, HCFA’s institutional payment policy is unclear when a single facility offers a range of assistance levels—from independent living arrangements to skilled nursing care. For example, one residential community we visited consisted of two facilities: one, a skilled nursing care facility, provided subacute, skilled, and custodial care; the other, an independent living facility, provided limited assistance, such as helping individuals get to the communal dining room. An HMO planned to classify the entire residential community as an institution. However, the residential community’s manager disagreed that the independent living facility constituted an institution. Although the HMO ultimately chose not to classify those beneficiaries living in the independent facility as institutionalized, no HCFA policy would have prevented such classification. Facilities of this nature pose continued challenges in appropriately determining which beneficiaries should be classified as institutionalized for payment purposes. Even though some regional HCFA officials felt that some facilities should not have been classified as institutions, these officials believed they had little basis for challenging any classification. 
In a 1995 memorandum to HCFA headquarters, for example, the director of a HCFA region’s managed care operations noted that “the manual provides no guidance regarding the level of care provided to residents [needed to qualify for institutional status]” and that “HCFA has the obligation to provide better guidance to plans regarding the types of facilities which may be designated as ‘institutions.’” On July 24, 1997, HCFA issued a policy letter that narrowed the definition of eligible institutions effective January 1, 1998. The letter cited a history of interpretation problems in using the broader definition as well as the concerns we, the HHS Inspector General, and the agency itself have raised about the potential for making improper payments to HMOs. Under the new definition, only specified Medicare- or Medicaid-certified institutions are included, thus limiting eligibility to institutions qualifying under the Social Security Act, such as skilled nursing and nursing facilities; intermediate care facilities for the mentally retarded; and psychiatric, rehabilitation, long-term care, and swing-bed hospitals. Tying eligibility to certain Medicare- and Medicaid-certified institutions effectively rules out eligibility for independent or low-level assisted living facilities. In principle, this change could significantly improve HCFA’s ability to ensure that the higher capitation rate is being paid on behalf of only those beneficiaries likely to have higher health care needs and costs. However, in practice, the collocation of independent living arrangements with eligible institutions and HCFA’s infrequent and narrow review of HMO records, as discussed in the next section, may limit the practical impact of HCFA’s new policy. The process HCFA uses to verify that HMOs appropriately claim the institutional payment rate is inadequate. HCFA relies on the HMOs themselves to identify and report the names of beneficiaries for whom the HMOs should receive the institutional rate. 
Using these mostly unaudited, HMO-reported data, HCFA adjusts the capitation payments for the HMOs’ Medicare members who live in institutions. HCFA does not conduct either comprehensive or spot checks at the institutional facilities to assess the accuracy of the institutional status data reported by HMOs. Instead, HCFA regional staff make site visits to each HMO about every 2 years and examine a small sample of beneficiary records maintained by the HMO. Results of previous HHS Inspector General audits and our work show that the lack of effective oversight fails to hold HMOs accountable for submitting accurate records and thus does not ensure that HMOs receive appropriate payments. HCFA bases its institutional rate adjustments solely on HMO-reported data. Each HMO is responsible for establishing a system to identify and report its institutionalized beneficiaries to receive the pay adjustment from HCFA. The reporting process works roughly as follows: Each month, the HMO identifies those members who have resided in eligible facilities for 30 consecutive days prior to the reporting month and sends HCFA a list of beneficiaries qualifying for institutional status. Using the HMO’s information, HCFA then develops and sends to the HMO its monthly report of the HMO’s qualifying beneficiaries (the HMO is responsible for informing HCFA of any further changes in beneficiary status). On the basis of the final, HMO-corrected report, HCFA adjusts—generally substantially increasing—the HMO’s capitation payment for each institutionalized beneficiary. HCFA regional staff and HMO staff concur that, for a variety of reasons, HMO data on institutionalized beneficiaries can be inaccurate. The financial incentive for HMOs to classify beneficiaries as institutionalized is one possible explanation for inaccurate data. Other explanations include financial incentives for physicians to misclassify beneficiaries, inaccurate data reported by institutions, and data entry errors by HMOs. 
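The 30-consecutive-day eligibility rule in the reporting process above can be sketched as a simple date check. This is a hypothetical helper, not HCFA's or any HMO's actual system; the function name, parameters, and dates are illustrative.

```python
from datetime import date, timedelta

def institutional_status_qualifies(stay_start: date, stay_end: date,
                                   report_month_start: date) -> bool:
    """True if the enrollee's facility stay covers the 30 consecutive days
    immediately preceding the reporting month, the condition described in
    the monthly reporting process above."""
    window_start = report_month_start - timedelta(days=30)
    window_end = report_month_start - timedelta(days=1)
    return stay_start <= window_start and stay_end >= window_end

# Admitted in mid-January and still resident: qualifies for March reporting.
print(institutional_status_qualifies(date(1998, 1, 15), date(1998, 3, 31),
                                     date(1998, 3, 1)))  # True

# Admitted February 10: fewer than 30 days before March 1, does not qualify.
print(institutional_status_qualifies(date(1998, 2, 10), date(1998, 3, 31),
                                     date(1998, 3, 1)))  # False
```

Even this simple rule depends entirely on accurate admission and discharge dates, which is why the facility-reporting problems described next translate directly into payment errors.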
Some HMOs may have difficulty ensuring accurate data because their providers financially benefit if enrollees are classified as living in institutions. For example, the primary care providers in the Minnesota-based HMO cited in a 1995 HHS Inspector General’s report received from 85 to 90 percent of Medicare’s per capita payment, while the HMO kept the remainder. The HMO required these providers to notify it when their Medicare patients entered or left an institutional setting or otherwise changed their status. However, the Inspector General found that the providers failed to do so and did not correct HMO reports sent to them for reconciliation purposes. As a consequence, the HMO substantially overreported the number of enrollees living in institutions. HMO staff also reported difficulty obtaining accurate information on beneficiaries’ current residence in particular facilities. HMO staff who were responsible for verifying enrollees’ institutional status said they typically contacted facilities by phone or mail monthly to determine which enrollees resided in those facilities. However, facilities housing an HMO’s Medicare enrollees do not necessarily have a contractual or financial relationship with the HMO. Consequently, these facilities have no compelling reason to comply with an HMO’s information requests. HMO staff reported instances of not learning of changes in beneficiaries’ institutional status, even when the HMOs had requested verification and received regular responses from facility personnel. Data entry errors are a third possible reason for inaccurate data. The 1995 Inspector General’s report attributed some instances of institutional status misclassification and Medicare overpayments to the HMO’s own data entry errors. Verifying HMOs’ historical institutional status data is even more difficult than ensuring the accuracy of current data.
The period of time being scrutinized is longer, and HCFA’s policy of allowing HMOs 3 years to correct institutional status data and adjust payments accordingly compounds the problem. For example, after a 1992 monitoring review of an HMO, HCFA required the HMO to correct problems in its procedures for verifying its Medicare enrollees’ institutional status and to conduct an audit of its own institutional status records. As a result of the audit, the HMO reported nearly $5 million in overbillings and more than $4.5 million in underbillings to Medicare during a 2-1/2-year period. HCFA accepted the results of the HMO’s self-audit and the HMO’s request to repay Medicare the difference between the two amounts (approximately $500,000). The accuracy of the HMO’s audit results was found to be questionable, however, when, according to one HCFA official we interviewed, the HMO later attempted to reverify its audit findings and was unable to do so because facilities had changed the information they had originally reported to the HMO. Concerned about potential errors in HMOs’ historical institutional status data, three of the four HCFA regional offices we contacted for this study do not permit, or permit only by exception, retroactive reimbursements. Officials in HCFA’s central office said that the regional offices should be following the national policy, which allows corrections in institutional status, and related reimbursements, for up to 3 years. They also said, however, that they are aware that regional offices are not doing so. These officials said that, because of the frequent changes in HMOs’ historical institutional status data, following the national policy would require substantial additional regional work to validate and update the necessary corrections to HCFA’s payment system. Our review and the Inspector General audits underscore the need for HCFA to improve its oversight of the HMO data used in determining Medicare payments to HMOs. 
HMOs’ records are normally checked by HCFA only during routine monitoring visits, which occur about every 2 years. During a monitoring visit, HCFA staff focus primarily on whether the HMO has a data verification system in place. That is, they review the HMO’s policies and procedures for both updating the HCFA report on institutionalized beneficiaries and contacting facilities to verify residence and length-of-stay information. HCFA regional staff also contact a few facilities to confirm the residence and length of stay of some beneficiaries. Specifically, HCFA protocol requires regional staff to verify the status of 30 enrollees living in at least three different institutions and to contact three of the institutions. HCFA’s verification practices may be too superficial to determine whether HMOs accurately report beneficiaries’ institutional status. For example, after HCFA reviewed one Minnesota-based HMO’s institutional reporting procedures and records and found no problems, an Inspector General audit of the same HMO revealed significant errors. The Inspector General examined the records of 100 enrollees randomly selected from the 1,941 Medicare beneficiaries the HMO listed as living in an institution during April 1994. By checking the HMO’s records against those of the institutions, the Inspector General determined that 15 of the 100 beneficiaries did not reside in the listed institution. The Inspector General also checked historical records and found that some of the 15 misclassified beneficiaries had never lived in an institution while enrolled in the HMO. In some cases, the HMO had misclassified the beneficiaries and collected the institutional payment rate for over 5 years. Total overpayments for the 15 misclassified beneficiaries amounted to $93,252. In 1993, the Inspector General cited two Massachusetts-based HMOs for receiving enhanced payments on the basis of HMO data that inaccurately classified beneficiaries as institutionalized. 
Moreover, the HMOs’ internal reporting systems did not accurately reflect the discharge dates of some institutionalized beneficiaries. The Inspector General identified overpayments of about $215,000 for the two HMOs over roughly a 2-year period ending June 30, 1993. When HCFA staff identify faulty HMO data on institutional beneficiaries, the agency frequently does little to determine the full extent of the errors or the total overpayments generated by the faulty data. HCFA generally requires only that the HMO develop a corrective action plan describing how the HMO intends to generate better data. In some cases, HCFA also requires HMOs to self-audit their prior institutional reporting. After HCFA identifies HMO data errors, mandates corrective actions, and approves a corrective action plan, it often waits 2 years or more before verifying HMO compliance. Once HCFA has approved an action plan to correct an identified problem, the agency typically does not check to determine whether the HMO has implemented the plan until HCFA staff conduct the next routine monitoring review. Sometimes this monitoring review is delayed beyond the routine 2-year schedule, even when a serious reporting problem was found to have existed earlier. Such was the case for the Minnesota-based HMO cited in a previous example. HCFA did not review the HMO’s institutional records until the fall of 1997, over 2 years after the Inspector General reported the HMO’s inaccurate record-keeping and resulting overpayments, even though the HMO continued to maintain a rate of institutionalized enrollment that was five times the national average. The Inspector General is completing a study designed to determine the extent of institutional status misreporting and to project total national overpayments. The Inspector General is reviewing the institutional records at eight HMOs to determine whether beneficiaries resided in the facilities listed in the HMOs’ records for the dates indicated. 
If data errors are found, the Inspector General intends to project and recoup overpayments from these specific HMOs and also use the projections to estimate national overpayments. Preliminary results indicate data problems at five of the eight HMOs. Because the Inspector General’s study does not attempt to determine whether the listed facilities fit HCFA’s criteria for an eligible institution, the study’s overpayment estimate may understate the full extent of the problem. HCFA’s procedures do not ensure that Medicare overpayments are recovered when HMO data reporting errors are found. In such cases, HCFA requires HMOs to improve data reporting in the future, but often the agency makes no attempt to estimate and recover overpayments resulting from the faulty data. HCFA sometimes, but not always, requires HMOs to perform self-audits and bases payment adjustments on the results. However, beyond the limited number of beneficiary records reviewed during routine monitoring visits, HCFA does not attempt to verify HMO data or the results of HMOs’ self-audits. A random sample of records of beneficiaries listed as living in institutions can be useful in projecting and recovering total Medicare overpayments. For example, in the case of the Minnesota-based HMO discussed earlier, the Inspector General found that the status of 15 out of 100 randomly selected beneficiaries classified by the HMO as living in institutions had been misreported. The overpayments associated with the 15 beneficiaries amounted to $93,252. On the basis of the random sample, the Inspector General projected that the HMO had inappropriately received at least $861,000, and perhaps as much as $2.8 million, from January 1989 through September 1994 for all enrollees misclassified as living in institutions. The Inspector General’s findings enabled HCFA to recoup about $861,000 from the HMO. 
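The sample-to-population projection described above can be sketched as a simple scaling calculation. The Inspector General's actual statistical method is not detailed in this report (the $861,000 figure is evidently a lower-bound estimate, not a point estimate), so the function below is illustrative only; the 100-enrollee sample, 1,941-enrollee population, and $93,252 sample overpayment come from the text.

```python
def project_overpayment(sample_size: int, sample_overpayment: float,
                        population_size: int) -> float:
    """Naive point estimate: scale the overpayment observed in a random
    sample up to the full population of listed enrollees."""
    return sample_overpayment * (population_size / sample_size)

# April 1994 figures from the text: 100 enrollees sampled out of 1,941
# listed as institutionalized, with $93,252 in overpayments found among
# the 15 misclassified sample members.
point_estimate = project_overpayment(100, 93_252, 1_941)
print(round(point_estimate))
```

The resulting point estimate falls within the $861,000-to-$2.8 million range the Inspector General reported for the Minnesota-based HMO, which is consistent with $861,000 being a conservative lower bound.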
HCFA’s most recent data show that the current institutional risk adjuster substantially overcompensates HMOs for the institutionalized beneficiaries they serve. As a result, in July 1997, HCFA proposed new weights for the institutional risk adjuster to more accurately reflect the health care costs of institutional beneficiaries. However, in September of 1997, HCFA halted implementation of the new weights, announcing that provisions of the recently passed BBA precluded the agency from modifying any of the risk factors’ weights at that time. Nonetheless, HCFA’s new criteria for eligible institutions—which exclude facilities housing beneficiaries with relatively low expected health care costs—should help reduce overpayments to HMOs serving institutional beneficiaries. In the course of our review, HCFA developed new cost estimates for institutionalized beneficiaries that were based on the 1993 MCBS data. The expected health care costs for institutionalized beneficiaries, based on the 1993 MCBS, were much lower than those estimated from the 1974-76 survey data, which are currently used to set the risk factor for institutional beneficiaries. Using the new cost data, HCFA calculated lower adjustments to the capitation payments for aged institutionalized beneficiaries. For example, a Medicare HMO that enrolls a 74-year-old male beneficiary living in an institution in Los Angeles receives a monthly payment of about $1,307 in 1998. If HCFA had implemented its revised rates, the HMO would be receiving about $761 per month—an amount that more accurately reflects the expected costs associated with institutionalized beneficiaries. Table 1 shows that the Medicare part A component of the monthly capitation payments would have fallen by as much as 24 percent for beneficiaries aged 85 and older and by as much as 62 percent for beneficiaries aged 65 to 84. The decrease in the part B component would have been somewhat less. 
Although HCFA announced plans in July 1997 to recalculate the weights of the current demographic risk factors, including the institutional risk adjuster, it halted this effort after the enactment of the BBA in August 1997. HCFA reverted to the old factors for the 1998 rate calculations because the BBA specified a new methodology for setting the basic capitation rate in each county that explicitly used the established 1997 county rates as a base. HCFA officials stated that the new weights could only have been applied to capitation payment calculations if the weights had also been used in the calculation of the county rates. Although HCFA uses the institutional risk adjuster to take into account the expected higher costs of health care for institutionalized beneficiaries, research on the risk adjuster indicates that institutional residence is actually only weakly related to a beneficiary’s expected health care costs. In a 1977 report, HCFA staff suggested that the average medical expenditures for institutionalized beneficiaries could vary widely by type of facility “to the extent that legal requirements and administrative policies of institutions differentiate among the characteristics of their residence.” Our own analysis of the 1992-94 MCBS, the most recent data available at the time of our analysis, found substantial differences in Medicare costs among beneficiaries living in institutions. The average annual Medicare cost for beneficiaries in nursing homes—at about $8,000, for example—was more than $3,700 higher than the average annual cost for beneficiaries in assisted living facilities. HMOs could benefit financially if they were able to draw their institutional populations disproportionately from those types of institutions whose average beneficiary costs were lower than those of other institutions. HCFA’s new definition of eligible institutions includes certified nursing facilities but generally excludes assisted living facilities. 
This narrower definition could potentially improve the accuracy of HMO payments for the beneficiaries they serve by limiting the potential variation in average expected health care costs among different types of institutions. Recent data clearly show that HMOs can be overcompensated for the institutional beneficiaries they enroll. Although provisions of the BBA prevent HCFA from eliminating these excess payments at this time, HCFA will have an opportunity to fully address this problem when it develops a new set of risk adjusters, mandated by the BBA, to be implemented in 2000. By tightening the definition of what constitutes an institution, HCFA has taken a step toward improving the accuracy of HMO payments. For example, HMOs should no longer receive enhanced capitation payments for serving beneficiaries in independent living facilities. Nonetheless, given HCFA’s HMO monitoring practices, it is doubtful that the agency can quickly or effectively determine the extent to which HMOs are complying with the new definition. Moreover, the Medicare program remains open to potential abuse by HMOs because HCFA performs only infrequent and limited checks of HMO-reported data. HCFA’s use of unaudited HMO data to determine payments to HMOs engenders little confidence in the accuracy of the data and resulting payments. HCFA also lacks a systematic approach for identifying and recovering total overpayments once HMO reporting errors are discovered. Instead, HCFA typically requires HMOs only to develop corrective plans to gather and report more accurate data in the future. Even when serious HMO reporting errors—resulting in substantial overpayments—have been discovered, HCFA may wait 2 years or more before checking to see if the HMO has implemented a revised data gathering and reporting system. 
To better protect the integrity of Medicare capitation payments, we recommend that the HCFA Administrator take the following actions:

- Establish a system to estimate and recover total overpayments when institutional status data errors are detected.
- Allow HMOs to revise records and claim retroactive payment adjustments for beneficiaries with institutional status only when HMO records have been verified by an independent third party.
- Conduct timely follow-up reviews of those HMOs found to have submitted inaccurate institutional status data.
- Use more recent cost data to calculate the institutional risk adjuster in the event HCFA continues to include institutional status as a part of its new risk adjustment methodology.

HCFA agreed with our recommendations to improve the integrity of capitation payments for institutionalized beneficiaries. HCFA noted several initiatives it is considering to improve oversight and rate-setting methods. We believe that these initiatives are a step in the right direction but that HCFA must remain committed to implementing the new methodologies. The full text of HCFA’s comments appears in appendix I. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies to the Secretary of Health and Human Services; the Director, Office of Management and Budget; the Administrator of the Health Care Financing Administration; and other interested parties. We will also make copies available to others upon request. This work was done under the direction of James Cosgrove, Assistant Director. If you or your staff have any questions about this report, please contact Mr. Cosgrove at (202) 512-7029 or me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix II. The following team members also made important contributions to this report: Hannah F.
Fein, Senior Evaluator; Robert DeRoy, Assistant Director; and George Bogart, Senior Attorney.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the Health Care Financing Administration's (HCFA) oversight of Medicare payments to health maintenance organizations (HMO) for institutionalized beneficiaries, focusing on: (1) the criteria HCFA uses to determine a beneficiary's institutional status; (2) the methods HCFA employs to ensure that HMOs properly classify beneficiaries as institutionalized; and (3) whether the higher capitation rate for beneficiaries who live in institutions is justified by higher health care costs.
To accomplish our objectives, we reviewed Customs’ accountability reports, including its audited financial statements, for fiscal years 1997 through 2000, and analyzed its CFP receivables and the related allowance for uncollectible accounts, as well as other financial information for the 4-year period. We also obtained an understanding of Customs’ CFP debt collection policies and procedures and of applicable federal rules and regulations. We nonstatistically selected 17 CFP claims from a list of fines and penalty receivables outstanding as of September 30, 1999, at the San Francisco Fines, Penalties, and Forfeitures (FP&F) Office. Our purpose in selecting these claims was to perform a walk-through to confirm our understanding of Customs’ processes for managing and collecting CFP debt. We selected the 17 claims on the basis of several factors, including high dollar value and unpaid CFP receivable balances as of September 30, 1999. From Customs’ Seized Asset and Case Tracking System (SEACATS), we obtained a list of all 7,184 Customs CFP claims still outstanding (open) as of September 30, 2000, as well as all 82,273 and 95,441 CFP claims during fiscal years 1999 and 2000, respectively, for which cases were canceled or the collection activity was terminated because amounts were paid in full or written off (closed). These 184,898 claims had a total value of approximately $7 billion. We sorted the population of CFP claims into four groups: Group 1 included 2,469 CFP claims for which Customs stopped collection activities during fiscal years 1999 and 2000 and wrote off the receivable amounts totaling approximately $41 million. In addition, we included in group 1 the 173 CFP claims, totaling approximately $28 million, that were written off during fiscal year 1998, since this was the largest amount written off from fiscal year 1997 through fiscal year 2000. From these 2,642 CFP claims, we selected all claims involving write-offs of receivable amounts greater than $1 million.
The receivable total for the 8 selected claims in this group equaled about $47 million, or just over two-thirds of the $68 million written off during the 3-year period. We used these 8 claims to test various attributes of Customs’ processes for collecting CFP claims at each of the applicable FP&F offices. Group 2 consisted of 32,675 CFP claims for which Customs represented that collection actions were not necessary or appropriate and had not been performed. According to Customs’ records, these claims, totaling about $4 billion, or 57 percent of the total dollar value of the entire population, involved cases in which (1) Customs decided that the alleged violation did not occur, (2) there was insufficient probable cause to support an alleged violation, (3) substantial mitigating factors caused Customs to decide to remit (forgive) the penalty in full without payment of any mitigated amount, or (4) system input errors occurred, typically resulting in cancellation of the original case and its replacement with a new case. We statistically selected 36 claims from group 2 to confirm that Customs had not performed any collection actions for the reasons noted above. We did not include any of the claims from this group in our population to be selected for testing, since no collection actions were taken and the claims were therefore outside the scope of our audit. Group 3 consisted of 2,052 CFP claims involving various violation codes for which amounts were partially collected or the claims were closed without payment. These claims totaled about $36 million, or less than 1 percent of the total dollar value of the entire population. We did not review the claims in this group because their average dollar amount was about $18,000 and their total dollar amount was deemed immaterial. 
Group 4 consisted of the remaining 147,702 CFP claims, with a total receivable amount of approximately $3 billion, for which Customs performed collection activities, made collections, and either closed the claims as paid in full or left the unpaid amounts outstanding as of September 30, 2000. We sorted the population of CFP claims in this group into the following five strata and performed certain steps for each: Stratum 1 consisted of individual claims with a receivable amount greater than $2.5 million. The receivable total for the 33 claims in this stratum was about $2 billion, or 68 percent of the receivable total for the entire 147,702 claims in group 4. Because of their high dollar value, we reviewed all 33 claims. We used these 33 claims to test various attributes of Customs’ processes for collecting CFP claims at each of the applicable FP&F offices. Stratum 2 consisted of all claims relating to a Customs broker that went out of business during fiscal year 1999. This broker’s bankruptcy resulted in 422 CFP claims with an original assessed amount of almost $566 million, of which about $484 million was recorded as CFP receivables. The September 30, 2000, receivables balance for these claims was about $484 million, or 16 percent of the dollar value of all 147,702 claims in group 4. We discussed the broker’s bankruptcy with Customs officials and limited our procedures to reviewing related documents to determine whether Customs took appropriate steps, in accordance with its policies and procedures, to assess and resolve CFP claims resulting from the event. Stratum 3 consisted of all claims identified in SEACATS as involving a late paperwork violation for failure to file a timely entry summary, which, in accordance with Customs’ policies, typically results in the payment of a minimal amount. The 50,806 claims in this stratum totaled about $10.6 million, or 0.4 percent of the dollar value of the 147,702 claims in group 4.
During our walk-through at the San Francisco FP&F Office, we reviewed 3 claims that involved late paperwork violations and determined that the amount ultimately subject to collection by Customs might be as little as $100 per claim. As agreed with your staff, we performed no further review of this stratum. Stratum 4 consisted of all claims with an assessed amount less than or equal to $5,000. The 73,741 claims in this stratum totaled about $47.5 million, or 1.6 percent of the dollar value of the 147,702 claims in group 4. As agreed with your staff, we did not review the claims in this stratum because the dollar amounts of individual claims were low and the total dollar amount of all 73,741 claims was deemed immaterial. Stratum 5 consisted of the remaining 22,700 CFP claims, some open and some closed, that were not included in one of the first four strata. These claims totaled about $406 million, or 14 percent of the dollar value of the 147,702 claims in group 4. We sorted these claims by FP&F office and selected the four offices that managed the highest number of CFP claims during fiscal years 1999 and 2000. From the 7,747 claims for the four selected offices, we drew a random stratified sample of 179 claims, which totaled about $5.7 million, or 1.4 percent of this stratum’s dollar value and 0.2 percent of the dollar value of the 147,702 claims in group 4. We used these 179 claims to test various attributes of Customs’ processes for collecting CFP claims at the four selected FP&F offices. We nonstatistically selected an additional 20 CFP claims that Customs labeled as fraud violations (fraud claims) at the four selected FP&F offices. The cases were selected from the “FP&F Case Listing Logs for FY 2000” and the “1592/592 Cases Active During 2000” reports that were open as of September 30, 2000. The 20 claims were selected to further evaluate issues related to alleged fraud violations that we found when we tested the sample of CFP claims at the four FP&F offices. 
We selected the additional claims on the basis of such factors as assessed amounts, receivable amounts, loss of revenue, and expired statute of limitations. We performed detailed reviews of case files for all the CFP claims that we selected. We did not independently verify the completeness or accuracy of the data in the claims population or test information security controls over the systems used to compile the data because such verification was not necessary for the purposes of this request. We interviewed Customs representatives to obtain explanations for any significant trends and instances of noncompliance with Customs’ CFP debt collection policies and procedures, as well as about areas where enhancements could strengthen Customs’ processes. We also interviewed OMB and Financial Management Service officials to determine what roles, if any, OMB and the Financial Management Service play in overseeing and monitoring the government’s collection of CFP debt. We performed our work at Customs’ FP&F offices at five locations; the National Finance Center in Indianapolis, Indiana; and the Office of Regulations and Rulings in Washington, D.C., from September 2000 through March 2002. We conducted our work in accordance with generally accepted government auditing standards. We provided the Commissioners of Customs and the Financial Management Service and the Deputy Director of OMB’s Office of Federal Financial Management with a draft of our report for review and comment. We received general and technical comments from Customs. These comments are discussed in the “Agency Comments and Our Evaluation” section and appendix I of this report and are incorporated in the report as applicable. Customs’ and the Financial Management Service’s letters are reprinted in appendixes I and II, respectively. We did not reprint Customs’ technical comments. OMB stated that it had no comments. Customs, a bureau of the Treasury, provides the nation with its second-largest source of revenue.
Customs assesses duties, taxes, and fees on goods brought into the United States from foreign countries. During fiscal year 2000, Customs reported $22.9 billion in collections of duties, taxes, and fees. Part of Customs’ collections consists of CFP, which are assessed when Customs determines that an importer violated trade and importation laws and regulations that Customs is responsible for enforcing. Fines and liquidated damages arise from importer/brokers’ violations of Customs’ bond agreements or trade laws and regulations (e.g., late filing or nonfiling of entry summaries). Penalties arise from violations of the federal laws and regulations governing the import and export of goods (e.g., commercial fraud, gross negligence, negligence, and customs broker and recordkeeping penalties). These laws and regulations contain guidelines for establishing the amount of fines and penalties to be assessed. Customs regulations establish guidelines and criteria for negotiation or mitigation to a lower fine or penalty amount to settle the case. Also, Customs regulations allow the violator and/or surety a period in which to file petitions challenging the fine or penalty amount assessed. Initial assessments are typically at the maximum amount provided for by law and vary according to type of violation. For example, the CFP-assessed amount for a violation under Commercial Fraud, Gross Negligence, and Negligence Penalties (19 U.S.C. 1592) ranges from a minimum of two times the loss of revenue for negligence to an amount not to exceed the domestic value of merchandise for fraud. For fiscal year 2000, Customs reported about $119 million in CFP debt collections and approximately $25.4 million of CFP debts that were written off. As of September 30, 2000, Customs reported about 7,180 outstanding CFP debts that represented about $773.6 million in gross receivables. 
Customs reduced this gross amount by about $36.4 million to adjust for validity and designated $675.7 million as uncollectible, resulting in a net receivable balance of about $61.5 million. In 1993, we reported that Customs did not effectively manage its collection process to prevent or minimize delinquent receivables. Factors that increased the likelihood of delinquent receivables included delays in finalizing amounts owed, poor monitoring of bond coverage, and delayed processing of protested bills. We also reported on the disparity between Customs’ gross receivables and the amounts expected to be collected. We noted during this review that Customs’ assessment and mitigation processes continued to be primarily responsible for the significant reductions in the recorded CFP receivable amounts compared to the amounts it expected to be collected. For the 4-year period covering fiscal years 1997 through 2000, Customs reported the following CFP receivables activity: $9.3 billion of recorded CFP assessments, $8.4 billion of recorded adjustments to reduce the originally recorded CFP assessments to reflect the postmitigation amounts that Customs collected or intended to collect, $257.0 million of CFP receivables balances that were collected, and $74.5 million of CFP receivables that were written off from balances that had typically been adjusted downward to the postmitigation amounts that Customs intended to collect. The initially assessed amount for a CFP claim should not be viewed as the actual postmitigation CFP receivable amount that Customs is likely to pursue for collection from importers. Certain Customs’ processes and practices typically result in significant reductions to initially assessed CFP amounts before or after a receivable is recorded. Customs records a CFP receivable once it determines that it has a legal right to the claim, which may or may not be the initial assessed amount of the CFP. 
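The reported balances at the end of fiscal year 2000 reconcile with simple arithmetic; the sketch below (amounts in millions of dollars, as reported above) checks that the gross receivable, validity adjustment, and uncollectible allowance yield the stated net balance:

```python
# Reconciliation of Customs' reported CFP receivable balances as of
# September 30, 2000 (amounts in millions of dollars, as reported above).
gross_receivables = 773.6        # gross CFP receivables
validity_adjustment = 36.4       # reduction to adjust for validity
uncollectible_allowance = 675.7  # amount designated as uncollectible

net_receivables = gross_receivables - validity_adjustment - uncollectible_allowance
print(f"net receivable balance: ${net_receivables:.1f} million")  # $61.5 million
```

The check shows that only about 8 percent of the gross balance was expected to be collected, consistent with the disparity between gross receivables and collectible amounts noted in the 1993 report.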
Once a CFP claim is closed by either (1) payment, (2) termination of collection activity and write-off of a postmitigated CFP amount, or (3) cancellation of the debt, Customs records an adjustment to its CFP receivables that is equal to the difference between the original amount recorded as a receivable and the postmitigation amount that Customs intends to collect. Customs’ processes that often result in significant reductions to the initially assessed amounts or subsequently recorded receivables are Option 1, mitigation through a petition for relief or an offer in compromise, or cancellation of the CFP debt. Option 1. When Customs knows all the facts concerning an alleged violation at the time of the initial review and the harm to the government is readily quantifiable and understood, an Option 1 resolution may be possible. Option 1 can be described as a “parking ticket” approach. It involves payment of a preset amount, which eliminates the mitigation process and allows quick claim settlement. Specifically, Customs’ penalty notice to a violator includes (1) a CFP amount assessed in accordance with a Customs-related statute or regulation for the particular violation, which the violator is given an opportunity to petition for mitigation, and (2) a lesser, or Option 1, amount that the violator can accept in settlement of the case. The most common types of claims that can be resolved through Option 1 are those related to filing late paperwork (late filing of an entry summary, invoice, or other entry document). An Option 1 resolution generally results in a significant reduction of the amount initially assessed. Petition for Relief. A person who receives a notice of violation from Customs has 60 days to file a petition for relief or pay the amount initially assessed. Customs is not to establish a receivable until the petition period expires or until it reaches agreement on the amount of the CFP claim with a violator that has filed a petition. 
Customs may also accept a petition after the petition period expires. Petitioners may be granted mitigation for a number of reasons, including contributory Customs error, extraordinary cooperation with the investigation, immediate remedial actions, inexperience in importing, and prior good record. Mitigation of initial assessments may be substantial. For example, we reviewed a CFP claim involving delivery of restricted merchandise without Customs approval for which the violator was assessed $500,000. The violator filed a petition that explained that a clerical error had been made on the entry form. Based on the facts of the case and the additional information in the petition, Customs granted mitigation and reduced the amount of the claim to $250. In turn, Customs recorded a receivable amount of $250 and subsequently collected that amount. Offer in Compromise. An alleged violator may make an offer in compromise to settle a CFP claim at any time after the violation. Customs has specific authority under 19 U.S.C. 1617 and 19 CFR 161.5 to compromise claims, and Customs bases its decisions on whether to compromise claims on many factors, including, but not limited to, the following: (1) risk that the government may not recover a significant portion of the assessed amount if the claim is litigated and (2) the alleged violator’s financial inability to pay the initially assessed amount or the amount established through the petition for relief. The compromises generally result in significant reductions of the amounts initially assessed. An example of an offer in compromise that reduced an initial CFP assessment is a case in which Customs claimed gross negligence. Customs alleged that an importer made false statements and omitted costs such as development costs and royalty payments associated with video game cartridges. Customs issued a penalty notice to the alleged violator in the amount of $90,376 (four times the lost revenue of $22,594). 
The importer made an offer in compromise, proposing the payment of the lost revenue and a penalty of $2,295 in monthly installments with interest at 8 percent per annum. After Customs’ Regulatory Audit Division concluded that the importer might not be able to pay the full amount of CFP assessed, Customs accepted an offer in compromise of $28,543, which included interest of $3,654. Cancellation of a CFP Debt. After an initiating officer (discoverer of the alleged violation) makes a CFP assessment and the claim is forwarded to an FP&F office, the FP&F officer or other deciding official can decide to cancel a CFP debt and close the claim without attempting to collect any amount. The majority of cases closed without collection (canceled) in accordance with law and Customs guidance are closed for one of the following reasons: The penalty case is remitted in full without payment. The FP&F officer determines that a violation occurred, but the presence of substantial mitigating factors causes the officer to remit the penalty in full. The discretion to remit in full is provided in Customs’ mitigation guidelines and is based on a finding that a violation resulted from circumstances beyond the control of the violator or that the violator is without culpability. The automated record is canceled when there is an error in the input of a case into SEACATS that cannot be corrected by the case initiator. The canceled SEACATS record is usually replaced with a new case. The case is closed because there was no violation. The FP&F officer or other deciding official determines that the alleged violation did not occur or there was insufficient probable cause to support an alleged violation. According to a Customs official, such determinations occur because the receivables are recorded by the initiating officer who is responsible for sending notices to alleged violators before the cases are forwarded to the FP&F offices for review. 
After a notice is sent and a case is forwarded to an FP&F office, the deciding official determines whether the alleged violation occurred or can be sufficiently supported. In addition, the Customs official stated that when case initiators are determining whether a violation occurred, they often do not have the benefit of additional documentation and information presented with petitions for relief. Moreover, additional documentation is often only obtained through the course of discovery if a case goes to litigation. Customs’ gross CFP receivables increased by about $556 million from the beginning of fiscal year 1997 to the end of fiscal year 2000. According to Customs officials, claims resulting from the bankruptcy of a Customs broker, who handled a significant amount of import business on the U.S.-Canadian border, were primarily responsible for the increase in CFP receivables during the 4-year period. Such claims represented about 87 percent of the increase. Customs identified 422 CFP claims against importers associated with this broker and, in most cases, recorded the initial CFP assessments as CFP receivables. At the time of the bankruptcy in June 1999, Customs had not received entry filing summaries, payments of estimated duties, or both from numerous importers who had previously relied on the broker to handle such activities on their behalf. According to Customs officials, the bankruptcy of this broker was a unique situation for Customs, but it provides a clear illustration of the significant adjustments that can result from Customs’ assessment and mitigation processes. During fiscal years 1999 and 2000, Customs assessed numerous importers a total of about $566 million of CFP relating to the bankruptcy of the broker and recorded CFP receivables totaling about $484 million for these 422 claims. Customs records indicated that the remaining $82 million of assessed amounts was mitigated and thus these amounts were not recorded as CFP receivables.
Postmitigated CFP amounts were nominal, consistent with Customs’ guidance. As of September 30, 2000, almost all of these receivables remained uncollected, and about $481 million had been recorded in a reserve account as amounts deemed uncollectible. The uncollected CFP receivables arising from the bankruptcy represented about 66 percent of Customs’ total reported gross CFP receivables and about 72 percent of Customs’ allowance for uncollectible CFP accounts as of that date. As of June 2001, Customs reported that 237 of the 422 CFP claims had been closed and that the outstanding related receivable balance was about $268.3 million. According to Customs officials, in examining the available entry records for each of the 422 alleged violations that resulted from the broker going out of business, Customs initially assessed each claim at the value of the affected merchandise or, in the case of restricted merchandise, up to three times the value of the merchandise when the value of the merchandise was known. If the value of the relevant merchandise was not known, Customs assessed each claim at $2.5 million, the amount of the bond posted by the broker. According to a Customs official, Customs’ assessment process and the Option 1 and mitigation processes generally resulted in Customs collecting considerably less than the CFP amounts initially assessed for the claims related to the bankruptcy of the broker. As discussed earlier, Customs’ guidance includes a range of mitigation amounts for each type of violation, as well as the authority to modify the assessed amount based on the particular facts and circumstances of any case. Customs based its mitigated CFP amounts for these claims on the CFP amounts suggested in its guidance for the late filing of an entry summary ($100 to $200) and gave consideration to the fact that the late paperwork resulting from the broker going out of business was out of the importers’ control.
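The broker-related figures above can be tied together with a quick arithmetic check; this sketch (millions of dollars, approximate amounts as reported) shows how the recorded receivables relate to the amounts initially assessed:

```python
# Broker-bankruptcy CFP claims: assessed vs. recorded amounts
# (millions of dollars, approximate figures as reported above).
claims = 422
assessed_total = 566.0        # total CFP assessed against the importers
recorded_receivables = 484.0  # amounts recorded as CFP receivables

# The difference was mitigated before any receivable was recorded.
mitigated_before_recording = assessed_total - recorded_receivables
print(f"mitigated before recording: ${mitigated_before_recording:.0f} million")  # $82 million
```

Because postmitigation amounts per claim were in the $100 to $200 range suggested by Customs’ guidance, nearly all of the recorded $484 million was destined to be written down rather than collected.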
As of September 30, 2000, Customs’ records showed collections of $19,792 on 129 of the 422 CFP claims associated with the bankrupt broker. According to a Customs official, 5 of the 129 CFP claims totaling about $6,500 involved carnet violations in which the merchandise was not destroyed or exported. The assessments for these claims were collected in full. The other 124 CFP claims, which totaled about $67 million in assessed amounts, were mitigated, and the total remaining CFP receivable amount of about $13,200 was collected. The $19,792 collected on these 129 claims was relatively small because postmitigated CFP amounts were nominal, consistent with Customs’ guidance. At the end of fiscal year 2000, 293 claims remained open, of which the entire CFP receivable amount, net of the $2.5 million surety bond amount, was reserved in the allowance for uncollectible accounts. As of June 30, 2001, Customs’ records showed that 185 of the 422 CFP claims still remained open, meaning that from October 1, 2000, through June 30, 2001, Customs closed 108 CFP claims. Forty-seven of the 108 CFP claims were closed without collections because of errors. Customs attributed these errors to the broker going out of business and the unique efforts required by Customs to identify and assess CFP for all of the entries resulting from this bankruptcy. These closed CFP claims, representing a total assessed value of about $80.3 million, were canceled without any collections for the following reasons: Eighteen claims, valued at about $7.8 million, were subsequently deemed invalid because no violation occurred. Customs subsequently found entry summaries that were not entered into its Automated Commercial System (ACS) timely. Twenty-nine claims, valued at about $72.5 million, were subsequently deemed invalid and not violations because they were duplicates of already existing claims. 
Five of the 108 closed CFP claims, valued at $12.5 million, were reissued under new case numbers for fiscal year 2001 because the type of violation was incorrectly identified when the CFP was initially assessed. For each of the 5 cases, Customs closed the initial CFP claim and established a new claim that reflected the correct type of violation. The remaining 56 of the 108 CFP claims were closed for the following reasons after collections were made by Customs: Fifty of the 56 CFP debts were reclassified from violations for not filing entry summaries to violations for filing late entry summaries, once the importer subsequently filed the entry summary. In addition, according to a Customs official, the debts involved duty-free merchandise and each claim was subsequently reduced to a nominal amount in accordance with Customs’ mitigation guidance. Each claim was closed after the violator paid a postmitigation amount of $50, resulting in a total amount collected of $2,500. Before mitigation, the total assessed amount on these 50 CFP debts was $125 million. Six of the 56 CFP debts, initially assessed at a total of about $128,000, reflected a receivable amount of $700 of which a total of $450 was collected. Customs can strengthen its CFP debt collection and might improve its collection efforts by enhancing and better adhering to existing policies and procedures. We found several CFP policies and procedures that can be strengthened through enhancements. These enhancements represent good management practices and could enable Customs to collect more CFP amounts. The needed enhancements relate to (1) using promissory notes to collect CFP debt when the debtor has significant assets, (2) obtaining evidence that CFP claims related to carnets were received by the guaranteeing association, (3) determining the adequacy of surety bond coverage for CFP debts, and (4) obtaining evidence of CFP debtors’ inability to pay CFP debt. 
A debtor may indicate to Customs that it is financially unable to pay a CFP debt in a lump sum. Customs’ policy in cases where the debtor is unable to pay the full amount is to use a promissory note to collect the debt. However, the policy does not require that debtor assets secure the promissory note. Obtaining secured promissory notes in certain situations, such as when the debtor has significant assets, is a good business practice and increases the likelihood of collecting the amounts promised by the debtor because Customs would have a claim against the secured assets in the event of debtor default. We reviewed three CFP claims (one high-dollar claim and two nonstatistically selected alleged fraud claims) in which unsecured promissory notes were used. In one of these alleged fraud claims, Customs accepted a $140,000 unsecured promissory note from a debtor even though Customs’ Regulatory Audit Division determined that sufficient assets were available to cover the debt when the note was executed. At the time of our review, the note was in default and the debtor had paid only about $43,000 of the $140,000 owed. If Customs had obtained a secured promissory note, it would have been in a better position to collect the remaining unpaid CFP amount. Customs has 1 year from the expiration date of a carnet to issue a CFP claim. Approved associations issue carnets, which are valid for 1 year, and guarantee any Customs claims associated with the merchandise covered. Customs has designated the U.S. Council for International Business as the issuing and guaranteeing association in the United States for carnets. If the 1-year period covered by a carnet expires and the covered merchandise has not been exported or destroyed, a CFP claim arises. Establishing a CFP claim is time critical because, under Customs guidance, Customs may not make a CFP claim against the council more than 1 year after the expiration of a carnet. 
For properly established claims, the council must pay the claim unless it furnishes Customs with proof within 6 months of the date of the claim that the merchandise was returned, exported, or destroyed. During our review, we found that Customs does not always obtain documentary evidence that the council received CFP claims related to carnets within the 1-year period. As stated in Standards for Internal Control in the Federal Government, all transactions and other significant events need to be clearly documented, and documentation should be readily available. For the eight CFP claims we reviewed involving carnets, we found that Customs did not maintain the necessary documentation. Customs had records indicating that the CFP claims were issued within the required 1-year period. However, Customs could not prove that the council received the CFP claims within the required 1-year period and did not have evidence that it contacted the council as a follow-up to the issuance of the CFP claim within the 1-year period. Such contact would help ensure that the council received CFP claims or enable Customs to provide a copy of a claim before the 1-year period expires. An FP&F paralegal asserted that Customs made frequent telephone contacts with the council to verify receipt of notices of carnet expirations but did not document the contacts. Subsequent to discussing the 8 carnet claims that were from one FP&F office, Customs stated that a total of approximately 300 carnet-related CFP claims issued by that office from 1996 through 2000, totaling about $1.8 million, were still outstanding. Customs stated that the council alleged it did not receive violation notices for these claims within the 1-year period and therefore has declined to pay the claims. Customs also stated that since it could not prove that the council received the notices within the required 1-year time frame, collections would be minimal on these claims.
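The carnet timing rule described above lends itself to a simple date check. The sketch below is illustrative only: the function name is hypothetical, and it approximates "1 year" as 365 days rather than applying Customs’ exact counting rules.

```python
from datetime import date, timedelta

# Illustrative check of the carnet timing rule described above: Customs may
# not make a CFP claim against the guaranteeing association more than 1 year
# after the carnet expires. "1 year" is approximated here as 365 days.

def claim_is_timely(carnet_expiration: date, claim_receipt: date) -> bool:
    """True if the association received the claim within 1 year of expiration."""
    return claim_receipt <= carnet_expiration + timedelta(days=365)

expired = date(1999, 3, 1)
print(claim_is_timely(expired, date(2000, 2, 15)))  # True: within the window
print(claim_is_timely(expired, date(2000, 4, 1)))   # False: claim is time-barred
```

A check of this kind only works if the receipt date is documented, which is why Customs’ move to registered mail, with its proof of receipt, closes the gap the paralegal’s undocumented telephone contacts left open.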
Customs officials instituted a process in fiscal year 2000 requiring that notices of carnet violations be sent by registered mail so that Customs would have proof of the council’s receipt of the notices. In February 2002, Customs stated that the outstanding CFP claims involving carnet violations were canceled without payment. Without documentation to prove that the council received these claims, Customs determined there was a reasonable likelihood that the port office did not issue them timely. Customs also stated that its Office of Regulations and Rulings has drafted claim issuance and mitigation guidelines for carnet violations. On April 19, 2002, Treasury Directive 02-20, setting forth claim issuance and mitigation guidelines for carnet violations, was published in the Federal Register. Customs did not have adequate surety bond amounts to cover all CFP entries we reviewed. Customs regulations require that importers maintain bonds as insurance against losses to Customs from unpaid duties, taxes, charges, and CFP amounts for liquidated damages claims and certain penalty claims associated with violations of the international carrier bonds. Single-entry bonds cover merchandise listed on a single-entry summary and are attached to entry summaries filed with Customs. Continuous bonds cover multiple entries for a specified period and are generally maintained on file at the port of entry. Out of 83 statistically selected open CFP claims, we found six continuous entry bonds involving liquidated damages that were not adequate to protect Customs from losses resulting from unpaid duties, taxes, charges, and CFP amounts. Based on our analysis of the selected open CFP claims, we estimate that about 7.2 percent of the 615 open CFP claims managed by the four selected FP&F offices did not have sufficient amounts to cover duties and CFP. 
Specifically, we found that 2 of the statistically selected CFP claims involved insufficient continuous-entry bond amounts to cover, in total, about $101,000 out of about $201,000 of assessed antidumping fees and charges. At the time of our review, both of the claims had been referred to Customs’ Office of Chief Counsel to determine potential for litigation. We also found four instances in which the importer’s continuous-entry bond was sufficient to cover the claim we reviewed but was not sufficient to cover other Customs claims against that importer. For the four instances, the bond insufficiencies included about $668,000 out of about $768,000 of fees and duties and about $686,000 out of about $1.2 million of CFP. At the time of our review, the claims for one of the instances had been referred to Customs’ Assistant Chief Counsel; one of the instances had been resolved in favor of the importer; one of the instances had been referred to the Department of Justice for litigation since the surety stated that the related continuous bond had already been exhausted on another claim; and one of the instances had been settled against the bond. In the instance in which the claims were settled, Customs did not collect about $200,000 of the CFP that was in excess of the bond coverage. Customs implemented new procedures for use with the dedicated bond liability module of the ACS to address the recommendations that we previously reported. However, Customs officials stated that additional changes to ACS were postponed so that the changes can be incorporated into the new tracking system, the Automated Commercial Environment (ACE), which will replace ACS. These officials said that Customs is proceeding with the requirements development task for ACE as part of the agency’s automated systems modernization project.
Upon completion of this task, Customs will know more about the wide range of business requirements that the new system must address, which will include surety bond tracking capability. Until the task is completed, however, Customs cannot determine when the development and implementation of the ACE system will be completed and lacks a reliable way to determine on a real-time basis whether coverage on continuous bonds is sufficient for a given entry. As a result, Customs’ system capability problems will continue to undermine FP&F offices’ ability to track the sufficiency of bonds and Customs officials’ ability to administer Customs laws. Customs officials stated that the bond information in the various systems that process entries and penalties is not real time, does not aggregate potential debts against bonds, and does not show reductions in the bond amounts to reflect actual amounts paid. Specifically, an FP&F official stated the following:

- Customs’ bond sufficiency report compares the current bond amount with 10 percent of the importer’s dutiable imports for the prior year. Since only prior-year detail is used, this historical comparison does not take into consideration the potential debt for entries such as temporary importation bonds or for increased import activity in the current year.

- Customs’ ACS does not provide adequate information to determine whether the current single- or continuous-entry bond is sufficient to cover a current entry, without significant research of the system. The system does not show the potential duties, fees, and CFP of an entry against a bond amount or actual payouts against that bond.

- Customs’ SEACATS provides a notice if an individual entry’s total duties, fees, and CFP exceeds a bond amount, but it does not accumulate information on the duties, fees, and CFP from other entries at a particular port that have already been applied to the current bond. It is also unlikely that Customs staff at one port would be aware of other duties, fees, and CFP charged against the bond if an importer enters merchandise at other ports that are also covered by that bond, since SEACATS does not accumulate this information.

- Customs’ monthly bond liability report provides information that is necessary to alert importers of the need to increase their current bonds. However, Customs is still susceptible to surety bond amounts that are insufficient to cover duties and CFP on entries made prior to when an importer actually increases a bond amount.

During the period of our review, Customs regulations required debtors who claimed they were unable to pay CFP debts to present documentary evidence to support their claims. Examples of documentary evidence that were to be provided by the debtor included copies of income tax returns, current financial statements, and independent audit reports. However, Customs was not required to obtain and review independent audit reports, which include audited financial statements, to determine whether a debtor was not able to pay CFP debt. We reviewed six CFP claims (two high-dollar claims and four nonstatistically selected fraud claims) in which the debtor’s representation of inability to pay was a factor in Customs’ petition or offer-in-compromise process, consistent with Customs policy. For five of these CFP claims (two high-dollar claims and three fraud claims), each of which involved fraud or counterfeiting, Customs obtained tax returns and/or current financial statements from the debtors. Customs records indicated that through mitigation and the use of offers in compromise, the originally assessed CFP amounts totaling about $28.7 million were reduced to a total CFP receivable amount of about $1.5 million, of which only about $108,000 had been collected through March 2002. We asked for, but Customs could not provide, documentation of the debtor’s inability to pay the sixth CFP claim.
Customs was not required to obtain and review independent audit reports, such as audited financial statements, as part of the documentary evidence it used to determine these debtors’ inability to pay. However, in June 2000, Customs regulations were revised, requiring that both income tax returns for the past 3 years and recent audited financial statements be provided by parties claiming they were unable to pay CFP debts. In addition to the enhancements to its collection capacity discussed above, Customs needs to adhere more closely to certain of its existing policies and procedures. We found instances in which Customs did not always follow its policies and procedures related to (1) requesting waivers of the statute of limitations for CFP debts, (2) issuing Notices of Penalty or Liquidated Damages Incurred and Demand for Payment (penalty and payment notices), (3) responding to violators that filed petitions for relief, and (4) issuing notices of redelivery. Customs does not always timely request waivers of the statute of limitations for CFP debts to enable it to have sufficient time to continue collection actions. In order to obtain a waiver, which extends the statute of limitations by the amount of time agreed to in the waiver, Customs guidance requires the appropriate FP&F office to request a waiver of the statute of limitations from the violator when less than 2 years remain before the expiration of the statute of limitations. In instances in which Customs determines that a waiver is necessary, its policy is to request that the violator agree to a 2-year waiver of the statute of limitations. If a waiver is not obtained, Customs is to refer the CFP debt to the Department of Justice no later than 6 months before the expiration date of the statute of limitations to allow Justice sufficient time to file the CFP claim with the Court of International Trade. 
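The timing guidance above implies two computed deadlines for each claim: a waiver should be requested once less than 2 years remain before the statute of limitations expires, and, absent a waiver, the debt should be referred to Justice no later than 6 months before expiration. A minimal sketch of those calculations, using a hypothetical expiration date (the helper function and its clamping of month-end days are our own construction, not Customs'):

```python
from datetime import date

def months_before(d, months):
    """Calendar date `months` months before d (clamping the day if needed)."""
    y, m = divmod((d.year * 12 + d.month - 1) - months, 12)
    m += 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    day = min(d.day, [31, 29 if leap else 28, 31, 30, 31, 30,
                      31, 31, 30, 31, 30, 31][m - 1])
    return date(y, m, day)

statute_expires = date(2004, 9, 30)  # hypothetical expiration date
# Request a waiver once less than 2 years remain before expiration.
waiver_request_by = months_before(statute_expires, 24)
# Absent a waiver, refer to Justice no later than 6 months before expiration.
refer_to_justice_by = months_before(statute_expires, 6)
print(waiver_request_by, refer_to_justice_by)  # → 2002-09-30 2004-03-30
```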
We identified two claims (one high-dollar claim and one nonstatistically selected fraud claim) for which Customs requested waivers for some of the entries only shortly before the statute of limitations was to expire for those entries. An alleged violator and two of the three sureties for a second alleged violator did not agree to the waivers, and the statute of limitations subsequently expired on various entries. In one case, Customs originally stated that it attempted to obtain waivers of the statute of limitations from three sureties after the dissolution of the importer and the completion of its investigation, which was 4 years later. Only one surety granted a waiver, and the statute of limitations expired without any collections from the other two sureties. In the other case, Customs stated that it had filed a complaint with the Court of International Trade to prevent the expiration of the statute of limitations. However, the complaint applied to only 53 of 104 entries, since the statute of limitations had expired on the other 51 entries prior to Customs’ filing the complaint. These cases illustrate the risks of not timely attempting to avoid expiration of the statute of limitations. For these cases, Customs was unable to collect original duties that totaled about $74,000 and CFP that totaled about $136,000. In contrast, it was able to collect about $97,000 of unpaid duties for entries on which the statute of limitations had not expired or for which waivers had been obtained. In a third case that we identified during our walk-through, we found a CFP claim involving, among other factors, a failure to request a timely waiver of the statute of limitations. The statute of limitations for the entries filed in the first 3 of the 7 years under this CFP claim had expired. In this nonstatistically selected CFP fraud case, Customs assessed $21 million of CFP against an importer for allegedly using false invoices to undervalue entries. 
While engaged in settlement talks with Customs, the importer made distributions totaling about $6 million to its two principal stockholders. Before the expiration of the statute of limitations for the remaining 4 years of entries, Customs was granted a waiver and subsequently accepted an offer in compromise to settle the CFP claim for $700,000, consisting of $688,025 to cover the amount of the duties owed to Customs and $11,975 of the $21 million assessed CFP. Other factors that contributed to Customs’ acceptance of this offer in compromise were its determination that (1) the corporation was unable to pay the CFP debt after $6 million was distributed to stockholders and (2) Customs might not be able to hold the principals personally liable for the debt. In general, Customs paralegals cited both the lack of a tracking system and human error as reasons for the poor tracking of the statute of limitations’ expiration dates. Even though a tracking system for statute of limitations expiration dates could improve Customs’ ability to track expiration dates, our review found that monthly reports from SEACATS currently provide sufficient statute of limitations information to paralegals at each office for tracking expiration dates. During our review of the eight high-dollar CFP claims that were written off during fiscal years 1998 through 2000, we found four cases in which the expiration of the statute of limitations and Customs’ decision to terminate collection activity were the primary reasons for the write-off of these CFP debts. Customs wrote off about $27 million for these four CFP cases, which represented about 39 percent of the CFP amounts written off during the 3-year period.
Even though we identified lengthy collection efforts (investigations, petition processes, and information gathering and review prior to decisions on petitions by Customs’ Office of Regulations and Rulings or Office of General Counsel, or decisions on whether to refer them to Justice) that took several years, Customs ultimately deemed the debts uncollectible and legally without merit. The reasons for these determinations were that the violators either (1) filed for bankruptcy after being assessed by Customs or (2) went out of business after being assessed by Customs. We also noted an instance where the approaching expiration of the statute of limitations was a contributing factor in Customs accepting a lower mitigation amount for the CFP claim. Specifically, in one case we reviewed, Customs accepted an offer in compromise for $25,000 in April 2000, after the statute of limitations had expired for 252 of 257 entries relating to a CFP claim. These entries involved fraud violations where the importer allegedly undervalued the entries in an effort to avoid paying duties. Customs issued the penalty notice on April 6, 2000, at the value of the merchandise, which was about $20.1 million. The investigation relating to this claim occurred from 1995 through 1998, but the proceedings did not commence until March 2000. The offer in compromise was accepted because Customs could not determine the actual unpaid duties since its files did not include the amount of lost revenue relating to all of the entries. Customs did not always issue penalty and payment notices to importers within 10 days of opening a case file in its tracking system, in accordance with Customs guidance. Customs guidance states that a penalty and payment notice is to be issued to importers within 10 days of when Customs opens a case in SEACATS. We found that Customs did not comply with this requirement for 16 (2 open and 14 closed CFP claims) of the 179 statistically selected CFP claims we reviewed.
For example, a penalty and payment notice for 1 of these claims in the Los Angeles FP&F Office was 192 days late. Twelve of the cases were identified at the New Orleans FP&F Office, and 2 were identified at the Los Angeles FP&F Office. Based on our evaluation of the open and closed CFP claims, we estimate that 2.4 percent of open and 14.5 percent of closed CFP claims managed by the four selected FP&F offices did not have penalty and payment notices issued within the 10-day period. Customs paralegals responsible for managing these CFP claims generally attributed the delays to limited staff resources. Such delays may have resulted in reduced collections of CFP at the four selected FP&F offices. Customs did not consistently respond timely to violators that filed petitions for relief. After receiving a notice from Customs, the alleged violator has 60 days to file a petition for relief and Customs has 90 days after receipt of the petition to respond. We found that 100 (40 open and 60 closed CFP claims) of the 179 statistically selected CFP debts involved petitions. For 25 of these 100 claims, Customs did not comply with the 90-day requirement. For example, 1 claim in the John F. Kennedy Airport FP&F Office was 364 days late. Twelve instances of noncompliance were identified at the New Orleans FP&F Office, and 11 instances at the Los Angeles FP&F Office. Based on our evaluation of the 40 open and the 60 closed CFP claims that had petitions, we estimate that for 22.3 percent of open and 26.6 percent of closed CFP claims managed by the four selected FP&F offices, Customs did not respond to the petitions within the 90-day period. Customs paralegals responsible for managing these CFP claims generally attributed the delays to limited staff resources. Such delays may have resulted in reduced CFP collections at the four selected FP&F offices. 
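Both requirements discussed here are simple date arithmetic — a penalty and payment notice within 10 days of case opening, and a petition response within 90 days of receipt — so compliance can be checked mechanically. The sketch below is illustrative only; the case records are invented, not drawn from the sampled claims.

```python
from datetime import date

def days_late(start, action, limit):
    """Days by which `action` missed the `limit`-day deadline (0 if on time)."""
    return max(0, (action - start).days - limit)

cases = [  # hypothetical case records
    {"opened": date(2000, 1, 3), "notice": date(2000, 1, 10)},
    {"opened": date(2000, 2, 1), "notice": date(2000, 8, 21)},
]
# Penalty and payment notice is due within 10 days of case opening.
print([days_late(c["opened"], c["notice"], 10) for c in cases])  # → [0, 192]
# The same helper applies to the 90-day petition-response requirement.
print(days_late(date(2000, 3, 1), date(2000, 5, 1), 90))  # → 0
```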
The timing of Customs’ issuance of Notices of Redelivery, which are sent to importers when the Department of Health and Human Services’ Food and Drug Administration (FDA) deems goods unsafe for importation, raised legal issues that affected settlement determinations. The Food, Drug, and Cosmetic Act authorizes the Secretary of Health and Human Services to refuse admission of food, drugs, devices, and cosmetics for a number of reasons, including that the items were packed under unsanitary conditions or that the articles are adulterated or misbranded. When FDA makes a determination to refuse admission of goods, it issues a Notice of Refusal of Admission to the importer. While awaiting an admission decision from FDA, Customs may authorize delivery of the article to the owner or consignee upon the execution of a bond sufficient to pay liquidated damages in the event of default. Customs regulations establish conditions for importation and entry bonds. One of the conditions is that the importer must timely redeliver released merchandise on demand to Customs after receiving a redelivery notice. Customs must issue the redelivery notice no later than 30 days after the date of release of the merchandise or 30 days after the end of the conditional release period, whichever is later. Failure to redeliver could result in the importer’s having to pay liquidated damages under the bond. Disputes over when Customs has to issue redelivery notices for articles subject to FDA approval have affected Customs’ collection of assessments. For example, in United States v. Likas International, Inc. and Washington International Insurance Company, a surety denied liability under a bond because Customs issued the redelivery notice more than 30 days after FDA issued a refusal notice. 
The government asserted that it had 120 days to issue the redelivery notice because the importer retained custody of the article for 90 days after the FDA refusal notice, which ended a conditional release period, and then Customs had an additional 30 days to issue a redelivery notice. During the Likas proceedings, another significant issue arose: Has Customs defined a conditional release period for the FDA context? The government essentially argued that the conditional release period automatically began when Customs delivered articles subject to FDA approval to the importer. The surety argued that Customs had never by regulation defined a conditional release period in the FDA context. The surety further argued that Customs’ assertion of a conditional release period amounted to an indefinite period because the period would run from delivery of the goods until 90 days after FDA issued its notice, whenever that occurred. As a result of the issues raised in Likas, Customs and the Department of Justice in the summer of 1999 settled the Likas case with a number of sureties. As part of the Likas settlement, Customs agreed to implement a nationwide policy for issuing a redelivery notice following FDA’s issuance of a refusal notice. The policy provides that Customs must issue a redelivery notice no later than 30 days after FDA issues a refusal notice. In return, the affected sureties agreed to settle all outstanding claims for liquidated damages in which the Customs redelivery notice was issued more than 30 days but less than 120 days after issuance of the FDA refusal notice and all cases where Customs’ redelivery notice was issued more than 30 days after the release of the merchandise. The sureties agreed to pay 30 percent of the value of the merchandise or the amount of the bond, whichever was less. 
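The settlement terms just described reduce to two computations: an eligibility window on the redelivery notice (issued more than 30 but less than 120 days after the FDA refusal notice) and a payment of 30 percent of the merchandise value or the bond amount, whichever is less. The sketch below covers only that first condition, omitting the separate release-date cases, and uses hypothetical figures.

```python
from datetime import date

def likas_settlement(refusal, redelivery, merch_value, bond_amount):
    """Return the settlement amount if the claim fits the Likas terms, else None.

    Eligible when the redelivery notice issued more than 30 but less than
    120 days after the FDA refusal notice; payment is 30 percent of the
    merchandise value or the bond amount, whichever is less.
    """
    gap = (redelivery - refusal).days
    if 30 < gap < 120:
        return min(0.30 * merch_value, bond_amount)
    return None

# Hypothetical claim: redelivery notice issued 45 days after the refusal.
print(likas_settlement(date(1998, 3, 1), date(1998, 4, 15),
                       merch_value=200_000, bond_amount=50_000))  # → 50000
```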
We reviewed eight CFP claims with redelivery notices issued before fiscal year 1999 (one statistically selected and seven nonstatistically selected at Customs’ Los Angeles and San Francisco ports, respectively) and found that the redelivery notices were issued after 30 days but within 120 days of the issuance of the refusal notices. The total assessed amount of these claims was about $686,000, and the total amount collected was about $138,000. The discussion above indicates how a lack of certainty and clarity concerning a conditional release period may affect the enforcement of importation and entry bonds. Customs has advised us that it has prepared a Notice of Proposed Rulemaking to amend its regulations to provide a specific conditional release period in all cases involving products regulated under the Food, Drug, and Cosmetic Act. The Notice of Proposed Rulemaking is currently awaiting departmental approval prior to its publication in the Federal Register. Customs’ reporting of CFP receivables and its referral of CFP debt to the Financial Management Service for collection provide OMB and Treasury’s Financial Management Service with information useful in performing their debt oversight roles. Beginning with financial statements for fiscal year 1997, Customs has disclosed CFP receivable information in the notes to its audited financial statements, which are submitted annually to OMB. In addition, in accordance with the requirements of the Debt Collection Improvement Act of 1996, Customs annually reports receivable information, which includes CFP receivable information, to the Financial Management Service as part of the Report on Receivables Due from the Public. In discussions, OMB officials emphasized that their oversight responsibility is broad and consists of monitoring and evaluating governmentwide credit management, debt collection activities, and federal agency performance.
OMB also stated that it is the specific responsibility of agency chief financial officers and program managers to manage and be accountable for their agency’s credit portfolios, including debt collection, in accordance with applicable federal debt statutes, regulations, and guidance. OMB further added that it is the role of each agency to specifically monitor and collect its civil penalty debt regardless of dollar magnitude and that it is the responsibility of each agency’s office of inspector general to provide oversight through audit of the agency’s debt collection activities. The Debt Collection Improvement Act of 1996 requires that federal agencies transfer eligible nontax debt or claims delinquent more than 180 days to Treasury for collection action. Treasury officials stated that they rely on agencies to determine what debt should be referred to the Financial Management Service for collection and offset as required by the Debt Collection Improvement Act of 1996. A Customs representative stated that certain CFP debts are referred to the Treasury Offset Program for collection via the Tax Refund Offset Program. The growth in Customs’ uncollected CFP debt resulted primarily from assessments to importers that were caused by a broker going out of business. However, a substantial portion of Customs’ recorded CFP receivables will continue to be deemed uncollectible and eventually reduced, since it represents amounts that are required to be assessed in accordance with Customs’ guidance rather than the smaller portion that is typically pursued for collection from importers after mitigation or settlement of a claim. Even though Customs’ assessment process will continue to result in significant adjustments, there are several areas where Customs’ CFP debt collection policies and procedures can be strengthened and its collection efforts might improve through enhancements or increased adherence.
These areas include Customs’ ability to track the sufficiency of surety bond coverage, a concern we originally raised in 1993, which will not be addressed until Customs completes the implementation of the new ACE system. We are making several recommendations to the Commissioner of the U.S. Customs Service to strengthen Customs’ CFP debt collection policies and procedures, improve the collection of CFP debt, and decrease the amount of CFP receivables that are reduced or written off. We recommend that the Commissioner of the U.S. Customs Service direct the Assistant Commissioner, Office of Finance, to develop and implement detailed CFP debt collection policies and procedures to obtain secured promissory notes from CFP debtors when evidence shows that they have significant assets to secure their CFP debts. We recommend that the Commissioner of the U.S. Customs Service direct the Assistant Commissioner, Office of Regulations and Rulings, to expeditiously establish conditional release periods for products regulated under the Food, Drug, and Cosmetic Act. We recommend that the Commissioner of the U.S. Customs Service direct the Assistant Commissioner, Office of Information and Technology, to help ensure that the development and implementation of Customs’ new ACE system addresses bond sufficiency concerns cited in this report and in our 1993 report. We recommend that the Commissioner of the U.S.
Customs Service direct the Assistant Commissioner, Office of Field Operations, to reinforce and monitor the four selected Fines, Penalties, and Forfeitures offices’ compliance with certain existing CFP debt collection policies and procedures, where applicable, to help ensure that

- statute of limitations waivers are requested when less than 2 years remain before the expiration date and waivers are obtained before the statute of limitations expires to allow adequate time for actions to be taken against violators by Customs, the Department of Justice, and the Court of International Trade;

- Notices of Penalty or Liquidated Damages Incurred and Demand for Payment are issued to importers within 10 days of Customs’ opening a case in the CFP tracking system; and

- responses to petitions for relief are made to violators within 90 days of Customs’ receipt of a petition from a violator.

In commenting on a draft of our report, Customs and the Financial Management Service agreed with our recommendations. Customs described actions being taken to address each recommendation. We have removed our recommendation for Customs to finalize its claim issuance and mitigation guidelines for carnet violations since these were published in the Federal Register on April 19, 2002. Customs also provided general comments, which are reprinted in appendix I and followed by our evaluative comments. Customs provided a number of technical comments that are incorporated in the report as appropriate. OMB stated that it had no comments. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Chairman of your subcommittee and to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs. We will also provide copies to the Secretary of the Treasury, the Commissioner of the U.S.
Customs Service, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3406. The GAO contact and staff acknowledgments are listed in appendix III. The following are GAO’s comments on the U.S. Customs Service’s letter dated May 15, 2002. 1. See “Agency Comments and Our Evaluation” section. 2. We have inserted a footnote in the report to explain liquidated damages. However, our report does not focus on revenue, but rather on Customs’ collection of CFP receivables in accordance with its policies and procedures. 3. We have revised the report to clarify Customs’ policy regarding when to request a waiver. 4. We fully considered the previous comments provided. We revised the discussion of the two CFP claims to focus on the issue of timeliness for requesting waivers of the statute of limitations. We augmented our discussion in the report related to the two cases for which Customs did not timely request waivers of the statute of limitations and lost opportunities for further collections. 5. Our discussion regarding eight high-dollar CFP claims that were written off during fiscal years 1998 through 2000 focused on four cases in which the expiration of the statute of limitations and Customs’ decision to terminate collection activity were the primary reasons for the write-off of the CFP debts. We did not discuss the three cases that involved penalties related to seizures of foreign-owned conveyances used to smuggle narcotics that were seized, forfeited, and sold. For the four cases discussed in this report, the expiration of the statute of limitations was one of Customs’ cited reasons for deeming the debts uncollectible and legally without merit, which led to writing off the debts.
Even though we cited Customs’ reason for these determinations as the violators either subsequently filing for bankruptcy or going out of business, we also identified lengthy collection efforts where Customs’ investigations, petition process, and information gathering and review prior to decisions on the petitions or referral took several years. 6. We agree that the expiration of the statute of limitations was not in jeopardy as it related to the entries for the last 4 of the 7 years for this case. However, our report focused on the expiration of the statute of limitations on the entries filed in the first 3 of the 7 years under the CFP claim. We also believe it was important to note that Customs only collected $11,975 of the $21 million CFP assessment when it accepted the offer in compromise and that $6 million was distributed to two principal stockholders during Customs’ collection efforts on this CFP claim. We moved this case from the section that discusses instances where the approaching expiration of the statute of limitations was a contributing factor in Customs’ acceptance of lower mitigation amounts for CFP claims to the section that discusses instances in which the statute of limitations expired. 7. We reviewed and considered the information that Customs provided to us regarding our preliminary findings. For the two cases for which Customs asserts the penalty notices were issued on the same day as the case records were input into SEACATS, we were not provided adequate documentation to support the assertion. 8. We have revised the report to reduce the number of exceptions from 17 to 16. Since penalty notices should not be issued for cases involving penalties related to the seizure of counterfeit trademark infringing merchandise, we removed this case from the reported exceptions. 
It should be noted that during our fieldwork and subsequent follow-up after our exit meeting, we had several discussions and were provided additional explanations and documentation on this issue. However, until Customs’ written response to the draft report, Customs had not indicated that the Los Angeles case that was reported 456 days late should not have been included. 9. As our report states, we found that in 25 of 100 claims involving petitions, Customs did not comply with its requirement to respond within 90 days of receipt of the petition. We estimated that for 22.3 percent of open and 26.6 percent of closed CFP claims managed by the four selected FP&F offices, Customs did not respond to the petitions within the 90-day period. We do not believe such results demonstrate only inadvertent noncompliance with the 90-day requirement. However, we have revised the report to clarify that Customs did not consistently respond to petitions within 90 days of receipt. 10. While Customs’ records indicated that it did not respond to petitions by importers until after 90 days for all 29 cases, Customs stated it had an informal process to extend the 90-day period for the number of days the petitions were outside of the FP&F office. In its response to a draft of this report, Customs informed us that the Commissioner formalized this process on January 21, 2002, in Customs’ Seized Asset Management and Enforcement Procedures Handbook. Customs subsequently provided us the four case numbers and documentation to support (1) the number of days the cases were out of the FP&F offices and (2) that Customs responded within the 90-day period, in accordance with its informal process that was finalized in January 2002. The four cases included the Los Angeles case that involved an FDA refusal of admission. As a result, we revised the report to reflect that Customs did not respond to petitions by importers until after 90 days for 25 cases. 11. 
We have modified the report to focus on the reduction in CFP collections that may have resulted from delays in responding to violators that filed petitions for relief. As industry statistics show, the likelihood of recovering amounts owed decreases dramatically as the age of the delinquency increases. 12. We clearly stated that Customs did not consistently issue redelivery notices to importers and that each of the selected cases reviewed occurred prior to when Customs established its new guidance in fiscal year 1999. We also pointed out that Customs is currently addressing the one outstanding legal issue that resulted from the lawsuits and settlement. In fact, our recommendation only addresses the need for Customs to expeditiously establish a conditional release period. Mario Artesiano, Rathi Bose, Sharon Byrd, Richard Cambosos, Perry Datwyler, Mickie Gray, David Grindstaff, Marshall Hamlett, Fred Jimenez, Eric John, Laurie King, Victoria Lin, Jon Ling, John Lord, Mel Mench, Suzanne Murphy, and Maria Stortz made key contributions to this report.
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading. | GAO reviewed the Customs Service's management of and practices for collecting civil fines and penalties (CFP) debt. GAO found that Customs' gross CFP debt more than tripled from the start of fiscal year 1997 to the end of fiscal year 2000, rising from $218.1 million as of October 1, 1996, to $773.6 million as of September 30, 2000. During the same period, Customs annually reserved from 75 to 87 percent of its reported CFP receivables in an allowance for uncollectible accounts. The primary reason for the growth in Customs' reported uncollected CFP debt from fiscal year 1997 through fiscal year 2000 was the bankruptcy of a Customs broker in fiscal year 2000. The broker's bankruptcy resulted in Customs assessing 422 claims for $566 million and recording CFP receivables totaling $484 million during fiscal years 1999 and 2000. The remaining $82 million of assessed amounts was eliminated through the CFP mitigation process, and accordingly these amounts were not recorded as receivables. Customs can strengthen some of its CFP debt collection policies and procedures both by enhancing them and better adhering to them. The Office of Management and Budget stated that it had broad oversight responsibility for monitoring and evaluating governmentwide debt collection activities, but that it is the specific responsibility of the agency's office of inspector general to provide oversight through audits of the agency's debt collection activities. 
In addition, Financial Management Service (FMS) officials stated that they rely on agencies to determine what debt should be referred to FMS for collection and offset, as required by the Debt Collection Improvement Act of 1996, and Customs refers certain delinquent CFP debts to FMS for collection action. |
According to recent U.S. government reports, the U.S. heroin addict population, which had remained stable at about 500,000 persons for nearly two decades, has risen and is now about 600,000 or higher. The Office of National Drug Control Policy (ONDCP) estimates that Americans now consume 10 to 15 metric tons of heroin annually, an increase from the estimated 5 tons consumed during the mid-1980s. In comparison with the 1980s, heroin now has an added appeal to users because it is more potent—containing higher purity levels than in the past. For example, average purity for retail heroin in 1995 was about 40 percent compared to about 7 percent a decade ago. As a result of increased purity, heroin can now be snorted or smoked and the user is freed from the added threat of contracting AIDS through a contaminated needle. In addition, there is a reported increase in the number of multiple-drug users who are using both heroin and crack cocaine. Opium poppies, from which heroin is derived, are grown primarily in three regions of the world—Southeast Asia, Southwest Asia, and Mexico and South America. According to the Department of State, worldwide opium production has nearly doubled since 1987—increasing from about 2,200 to nearly 4,200 metric tons in 1995. In 1995, the Southeast Asia region was the source of approximately 75 percent of the world’s opium poppy cultivation and 62 percent of the world’s estimated opium production. The bulk of the remaining cultivation and production occurred in the Southwest Asia region (primarily Afghanistan), accounting for about 20 percent of worldwide opium poppy cultivation and over 35 percent of opium production. Cultivation in the region comprised of Mexico and South America accounted for only about 5 percent of worldwide opium poppy cultivation and 3 percent of opium production. Nevertheless, DEA reported on September 3, 1996, that South America became the predominant source area for heroin seized in the United States during 1995. 
Southeast Asian opium production has increased by about 2-1/2 times—from just under 1,100 metric tons in 1987 to nearly 2,600 metric tons in 1995. About 87 percent of the opium poppy cultivation and 91 percent of the opium production in Southeast Asia occurred in Burma—primarily in Burma’s eastern Shan State. (See app. I.) In addition, the State Department reported that, in 1995, Burma was a major supplier of heroin to the United States. From its estimated yield of 2,340 metric tons of opium gum, Burma had the potential to produce an estimated 230 metric tons of heroin—enough to meet U.S. demand many times over. U.S. funding of heroin control efforts accounts for a small portion of the overall international drug control budget. ONDCP estimated that, during fiscal year 1994, the United States spent $47.5 million on international heroin control activities, or about 14 percent of its international narcotics control budget. In Burma, Hong Kong, and Thailand, as of June 30, 1996, DEA had a total of 43 permanent staff, while the State Department has 7 staff assigned to its Narcotics Affairs Section in Thailand and none in Burma or Hong Kong. In Burma and China—two key countries involved in heroin cultivation, production, and trafficking—the State Department has no Narcotics Affairs Sections, while DEA has only three staff—all in Burma. Other U.S. efforts in the region include intelligence analysis support for U.S. law enforcement agencies, and equipment and training for host nation counternarcotics forces provided by the Joint Interagency Task Force-West, based in California, and the Department of Defense’s Pacific Command. The U.S. international heroin strategy addresses the worldwide threat but focuses on Southeast Asia because this region is the primary source and includes major trafficking routes for heroin imported into the United States. 
The strategy places special emphasis on reducing Burmese opium production as a key to decreasing the regional flow of heroin into the United States. However, the United States faces the following significant obstacles in implementing this approach: Since 1988, the United States has not provided direct counternarcotics assistance to Burma because of its record of human rights abuses and its refusal to yield control of the country to a democratically elected government. Much of Burma’s opium-producing region is not under the effective control of the Burmese government. Due to unique trafficking patterns, law enforcement efforts against the criminal organizations responsible for moving heroin from Southeast Asia into the United States have not been effective. The lack of law enforcement cooperation between the United States and China continues to impede interdiction of key heroin-trafficking routes. Although the U.S. international heroin strategy was signed by the President in November 1995, guidelines to U.S. counternarcotics agencies for implementing the strategy are still under review. The United States does not have a significant counternarcotics program in Burma because of U.S. concerns over human rights violations by the Burmese government and the unwillingness of the Burmese government to yield control of the country to a democratically elected government. In 1988, the United States discontinued foreign aid to Burma, including direct counternarcotics funding support, because Burmese military forces violently suppressed antigovernment demonstrations for economic and political reform and began establishing a record of human rights abuses. Furthermore, the military regime refused to recognize the results of national elections held in 1990 and, for decades, has engaged in fighting with insurgent armies who represent ethnic minority groups seeking autonomous control of territory within Burma. 
Some of these minority groups control major opium production and heroin-trafficking areas. Currently, the United States provides only limited low-level law enforcement cooperation, such as information sharing. U.S. policy restricts direct counternarcotics assistance until the Burmese government improves its human rights stance and recognizes the democratic process. In addition, the President has denied certification for counternarcotics cooperation since 1989. According to State Department officials, there has been no improvement in the political and human rights situation, and U.S. policy toward Burma is unlikely to change under current conditions. The Burmese government’s commitment to controlling opium production and trafficking within its borders is questionable. After decades of conflict with ethnic minority insurgent groups, the government has signed a number of cease-fire agreements with them that, according to the State Department, have prevented the implementation of any meaningful drug enforcement operations in areas under the control of ethnic armies, thus furthering opium production and heroin trafficking. For example, in 1989, the government concluded a cease-fire agreement with the United Wa State Army (UWSA) in which the UWSA agreed to end its armed insurgency and the government permitted the Wa people to have autonomous control of their territory. Since the government ended its attempt to establish its authority over Wa territory, the Wa have gained control of 80 percent of the opium cultivation areas in Burma and UWSA has become one of the world’s leading trafficking organizations. Other minority groups in opium poppy cultivation areas have reached similar agreements with the Burmese government. Also, in January 1996, the Shan United Army (SUA), headed by Khun Sa, a well-known drug lord, ended its armed conflict with the Burmese army. 
Despite the potential for the government to undertake meaningful counternarcotics efforts in former SUA-controlled territory, there has been little substantive impact on the flow of Burmese heroin. Furthermore, according to U.S. officials, while Khun Sa is under indictment in the United States for heroin-trafficking offenses, the Burmese government has granted him immunity from prosecution from drug-trafficking offenses and has refused U.S. extradition requests. Based on these limitations, U.S. officials told us that they are not optimistic that meaningful changes will take place under the current Burmese military regime. Difficulties in stemming Burmese opium production are compounded by challenges in providing a regional approach to interdicting heroin-trafficking routes. The impact of U.S. regional interdiction efforts to date has been limited by the ability of traffickers to shift their routes into countries with inadequate law enforcement capability and by poor law enforcement cooperation between the United States and China. Although some U.S. programs in countries such as Thailand and Hong Kong that possess the political will and capability to engage in counternarcotics activities have achieved positive results, the problems in Burma have limited the progress in the region. According to DEA, each heroin producing region has separate and distinct distribution methods that are highly dependent on ethnic groups, transportation modes, and surrounding transit countries. From Southeast Asia, heroin is transported to the United States primarily by ethnic Chinese and West African drug-trafficking organizations. These organizations consist of separate producers and a number of independent intermediaries including financiers, brokers, exporters, importers, and distributors. 
Heroin-trafficking organizations are not vertically integrated, and heroin shipments rarely remain under the control of a single individual or organization as they move from the overseas refinery to U.S. streets. Since responsibility and ownership of a particular drug shipment shift each time the product changes hands, direct evidence of the relationship among producer, transporter, and wholesale distributor is extremely difficult to obtain. According to DEA officials, these factors combine to make the detection, monitoring, and interdiction of heroin extremely difficult. The impact of U.S. efforts to interdict regional drug-trafficking routes has been limited by the ability of traffickers to shift their routes into countries with inadequate law enforcement capability. (See app. II.) For example, Thailand’s well-developed transportation system formerly made it the traditional transit route for about 80 percent of the heroin moving out of Southeast Asia. However, in response to increased Thai counternarcotics capability and stricter border controls, this amount has declined to an estimated 50 percent in recent years as new drug-trafficking routes have emerged through the southern provinces of China to Taiwan and Hong Kong or through Laos, Cambodia, and Vietnam. Similarly, cooperation between the United States and Hong Kong has helped reduce the use of Hong Kong as a transshipment point for Southeast Asian heroin, but law enforcement weaknesses in China and Taiwan have encouraged drug traffickers to shift supply routes into these countries. Limited Chinese counternarcotics cooperation with U.S. law enforcement has compounded difficulties in interdicting heroin-trafficking routes in the region. Chinese cooperation has become increasingly important because, as counternarcotics efforts in other countries have achieved positive results, DEA has noted an increase in the use of drug-trafficking routes through China. 
However, the Chinese government has been reluctant to cooperate with U.S. efforts. For example, cumbersome Chinese requirements have delayed dissemination of counternarcotics intelligence information from DEA to Chinese law enforcement authorities. DEA faces difficulties in undertaking joint investigations with Chinese law enforcement officials and assisting the Chinese in making timely seizures and arrests in China. Further, the Chinese have been unresponsive in providing counternarcotics information that could possibly assist DEA investigations. Furthermore, it is possible that the 1997 transition of Hong Kong from British to Chinese control will further complicate U.S. regional counternarcotics activities. The small DEA presence in Hong Kong is currently responsible for covering counternarcotics activities in Hong Kong, China, Taiwan, and Macau. According to DEA officials, DEA is planning to continue its Hong Kong activities from there but the Chinese government is unlikely to approve regional coverage of Taiwan. In March 1996, we reported that DEA had planned to open a one-agent office in Beijing to expand its regional coverage. Even though DEA officials remain optimistic that an office will eventually be established, to date the Chinese government has refused DEA requests for opening a Beijing office. As a result, DEA’s ability to assist other countries in the region in interdicting heroin-trafficking routes opened through southern China and Taiwan is constrained. In Thailand, we found that sustained U.S. support since the early 1970s and good relations with the Thai government have contributed to abatement of opium production and heroin trafficking. Since 1978, the State Department has provided $16.5 million of counternarcotics support that assisted the Thai government in reducing opium production levels from an estimated 150 to 200 metric tons in the 1970s to 25 metric tons in 1995. 
As a result, Thai traffickers no longer produce significant amounts of heroin for export. Also, law enforcement training programs funded by the State Department and support for Thai counternarcotics institutions provided primarily by DEA have enhanced Thailand’s law enforcement capability. For example, using U.S. assistance, the Thai police captured 10 key members of Burma’s SUA heroin-trafficking organization in November 1994. The United States also provided support to establish a task force in northern Thailand that could foster intelligence analysis and information sharing among Thai counternarcotics police organizations. The United States has also obtained successful counternarcotics cooperation with Hong Kong. For example, the sharing of DEA intelligence with Hong Kong law enforcement authorities has resulted in the seizure of heroin shipments destined for the United States and the capture of major drug traffickers. The U.S. and Hong Kong governments also have worked closely to arrange extraditions of drug traffickers to the United States for trial. Also, a bilateral agreement permits assets seized by the Hong Kong authorities from convicted drug offenders to be shared between Hong Kong and the United States. As of August 1995, Hong Kong had frozen or confiscated approximately $54 million worth of drug traffickers’ assets under a bilateral agreement. Of this amount, the seizure of at least $26 million in assets was based on information that U.S. law enforcement agencies provided. A key element of U.S. heroin control strategy is the increasing reliance on international organizations, such as the United Nations, in countries where the United States faces significant obstacles in providing traditional bilateral counternarcotics assistance. In Burma, the United States has been a major donor for UNDCP drug control projects, providing about $2.5 million from fiscal years 1992 through 1994. 
However, we found that the projects have not significantly reduced opium production because (1) the scope of the projects has been too small, (2) the Burmese government has not provided sufficient support to ensure project success, and (3) inadequate planning has reduced project effectiveness. For example, UNDCP created “opium-free zones” in specific parts of Wa territory where poppy cultivation was prohibited. However, U.S. officials told us that some farmers simply moved their planting sites to remote sites outside project areas. Also, the Burmese government failed to provide in-kind resources to support UNDCP activities such as civil engineering personnel and basic commodities such as fuel and did not routinely cooperate in granting UNDCP worker access to the project areas. Finally, aerial surveys of project areas designated for crop reduction were not conducted until 18 months after the projects began. As a result, UNDCP had no way to evaluate accurately the effectiveness of supply reduction projects because no baseline data were established at the outset. In our March 1996 report, we stated that, despite these problems, U.S. counternarcotics officials believed that UNDCP projects offered the only alternatives to U.S.-funded opium poppy crop eradication and alternative development programs in Burma. UNDCP had planned to expand its efforts with a new $22 million, 5-year project but, according to State Department officials, the project now has been suspended because of difficulties in obtaining Burmese government support and cooperation, such as refusing UNDCP personnel access and limiting UNDCP communications in some project areas. 
| GAO discussed U.S. efforts to control heroin trafficking from Southeast Asia to the United States. GAO noted that: (1) U.S. efforts have achieved some positive results in Thailand and Hong Kong, but not in Burma; (2) the United States has supported United Nations (UN) drug control projects in Burma, but these efforts have met with limited success because the projects' scope has been small, planning has been inadequate, and Burma has not provided sufficient support; (3) the United States suspended direct counternarcotics assistance to Burma because of human rights violations; (4) much of Burma's heroin-producing region is not under government control because of insurgencies headed by drug traffickers; (5) law enforcement efforts against heroin traffickers are impeded by the traffickers' ability to shift transportation routes to countries with inadequate law enforcement capabilities; and (6) U.S. heroin control efforts are also impeded by a lack of cooperation with China on counternarcotics activities. 
DOD has acknowledged that process and system weaknesses impair its ability to account for the full cost of military equipment and that these weaknesses impede its ability to achieve financial statement auditability. DOD is required by various statutes to improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the information needs of agency management and oversight bodies, and to produce annual audited financial statements prepared in accordance with generally accepted accounting principles (GAAP) on the results of its operations and its financial position. Federal accounting standards, which are GAAP for federal government entities, require that the full cost of outputs (e.g., military equipment assets acquired) be reflected on agencies’ financial statements. As stated earlier, full cost is the sum of direct and indirect costs to produce the output. The standards require that the cost of property, plant, and equipment, which includes military equipment, shall include all costs incurred to bring the asset to a form and location suitable for its intended use. Examples of these costs include amounts paid to vendors; labor and other direct or indirect production costs; and direct costs of inspection, supervision, and administration of construction contracts and construction work. Federal accounting standards allow reporting entities to use reasonable estimates of historical cost to value their property, plant, and equipment while encouraging them to establish adequate controls and systems to reliably capture asset costs in the future. DOD is also required by law to provide, at least annually, Selected Acquisition Reports (SARs) to congressional defense committees on the status of its MDAPs. SARs are the primary means by which DOD reports the status of these programs to Congress. 
These reports are intended to provide Congress the information needed to perform its oversight functions. In general, SARs contain information on the cost estimates, schedule, and performance of a major acquisition program in comparison with baseline values established at program start. Specific information contained in the SARs includes: program description, including the reasons for any significant changes in the total program cost for development and procurement reported in the previous SAR; schedule milestones; quantity of items to be purchased; procurement unit cost; contractor costs (initial contract price, the current price, and the price at completion); and technical and schedule variances. Congressional reporting through the SAR ceases after 90 percent of the items related to a particular MDAP have been delivered to the government, or after 90 percent of the planned expenditures under the program or subprogram have been made. After the program reaches the 90 percent threshold, the items are no longer categorized as MDAPs and enter what is referred to as the sustainment period in which the cost of the units is categorized as Operations and Support. A program can be redesignated as an MDAP if planned modifications or upgrades to an asset meet the criteria for MDAP designation. 
Our review of prior reports, studies, and analyses to identify weaknesses in DOD’s operations identified the following seven categories of weaknesses that impaired the department’s ability to account for the cost of military equipment: (1) support for the existence, completeness, and cost of recorded assets is needed; (2) more detail is needed in DOD contracts to allocate costs to contract deliverables; (3) additional guidance is needed to help ensure consistency for asset accounting; (4) monitoring is needed to help ensure compliance with department policies; (5) departmentwide cost accounting requirements need to be defined; (6) departmentwide cost accounting capabilities need to be developed; and (7) systems integration is needed. DOD has begun actions to address these previously reported weaknesses; however, it acknowledges that additional actions are needed before these weaknesses are fully addressed. DOD officials—including the Deputy Director, Financial Improvement and Audit Readiness (FIAR) Directorate, Office of the Under Secretary of Defense (Comptroller), and the Deputy Director of Property and Equipment Policy within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L)— stated that the size and complexity of the department’s operations make it difficult to reach consensus on how best to address the weaknesses. They acknowledged that the department is currently focused on verifying the reliability of information, other than cost, recorded in its property accountability systems. These officials told us that until the department fully addresses the weaknesses that prevent it from accurately and completely accounting for the cost of its military equipment, it will continue to rely on a methodology to estimate the cost of its military equipment assets for financial reporting purposes. 
The availability of timely, reliable, and useful financial information on the full costs associated with acquiring assets is an essential tool that assists both management and Congress in effective decision making such as determining how to allocate resources to programs. It also provides an important monitoring mechanism for evaluating program performance that can help strengthen oversight and accountability. The seven categories of weaknesses and DOD’s actions to address them are as follows. Support for the existence, completeness, and cost of recorded assets is needed. DOD has not maintained the documentation needed to support the existence, completeness, and full cost of its military equipment assets. There were instances in which the department could not (1) trace assets recorded in its property accountability systems to actual physical assets, or (2) locate the records supporting the actual physical assets. Further, for assets included in the accounting system, DOD could not substantiate that all costs (e.g., acquisition, freight, inspection, and modification) had been captured and reported because of the lack of documentation (e.g., invoices). Standards for internal control call for transactions and other significant events to be accurately and timely recorded, as well as clearly documented, with the documentation being readily available for examination. In addition, DOD policy requires that the components maintain all financial records documenting the acquisition of property, plant, and equipment in support of the department’s Records Management Program. The components are also required to establish and maintain the Records Management Program, as well as periodically evaluate compliance. DOD stated that it has three ongoing initiatives to address this weakness—the military equipment valuation (MEV), the Proper Financial Accounting Treatment for Military Equipment (PFAT4ME), and the Wide Area Work Flow (WAWF). 
As allowed by federal accounting standards, DOD is using its MEV methodology to estimate the historical cost of its military equipment assets. The MEV methodology uses a combination of available data (budgetary and expenditure) to estimate the historical cost of military equipment assets. These estimated values were reported on the department’s fiscal year 2006 through 2009 financial statements. However, the results of several DOD Inspector General (IG) audits and an evaluation by the Under Secretary for AT&L identified implementation issues that impaired the reliability of the derived cost estimates, in part because DOD was unable to provide documentation to substantiate the universe of assets subject to its valuation methodology. For example, both reported that, in some cases, assets were included in the valuation that no longer existed, and assets that existed were improperly excluded from the valuation. To address these concerns, in 2009 DOD initiated efforts—primarily physical inventories—to verify the reliability of information recorded in its property accountability systems. In May 2010, the DOD Comptroller issued guidance for the performance of the physical inventories and internal control testing. This guidance states that the components should verify critical information, such as individual item identifier, category/asset type, location, condition, utilization rate, and user organization. It also identifies the need to perform internal control testing. However, it does not specifically require verification that a unique identifier has been assigned to the asset and recorded in the Item Unique Identification (IUID) registry as required by DOD policy. The guidance also does not provide specific guidance to perform tests of internal controls (e.g., does not identify which controls to test or how to do so). 
DOD officials, including the Deputy Director, Financial Improvement and Audit Readiness (FIAR) Directorate, Office of the Under Secretary of Defense (Comptroller), agreed with our assessment. The FIAR Deputy Director further stated that it is difficult to provide specifics on the internal control testing to be performed in the above guidance, so the department intends to establish a 2-day training course by the summer of 2010 that will provide instruction on how to identify and test controls. DOD plans to complete the verification of the existence and completeness of its military equipment property accountability records in fiscal year 2015. Previously estimated military equipment values reported on its financial statements will be reassessed upon completion of verification efforts at each military department. In addition, the department issued its PFAT4ME policy in June 2006 that requires all contracts be structured at the level of detail needed to provide supporting documentation regarding the cost of individual items delivered. The contract-related documentation (e.g., invoices and receipt and acceptance documents) received electronically that results from performance of a contract is then to be input into a central repository within the WAWF, which became operational in fiscal year 1999, where it is maintained and available to help support full cost determinations. However, these efforts do not adequately address this weakness because they do not address the lack of supporting documentation for noncontract-related costs such as program management costs incurred. As stated earlier, DOD policy requires components to maintain supporting documentation for the full cost of acquired military equipment assets; however, DOD has not enforced components’ compliance with its record management policy. Because it does not have the needed supporting documentation, the department has to rely on an estimation methodology to derive these assets’ values. 
More detail is needed in DOD contracts to allocate costs to contract deliverables. DOD had not structured contracts at the level of detail needed to identify and assign costs to individual military equipment assets. Specifically, the contracts were not structured in a manner that facilitated application of the appropriate accounting treatment for costs, including the identification of those costs that should be captured as part of the full cost of a deliverable. Standards for internal control require that the agency identify, capture, and distribute information at a sufficient level of detail that permits management to carry out its roles and responsibilities. DOD stated that the PFAT4ME and the Item Unique Identification (IUID) initiatives will address this weakness. PFAT4ME requires program managers to structure all contracts entered into after October 2006 in a manner to facilitate the appropriate accounting treatment of contract costs. To implement this initiative, DOD developed a training course on how to comply with the requirements outlined in its PFAT4ME policy. However, it is not a core or required course, and DOD has not established a process to ensure that acquisition personnel affected by this policy, including program managers and business/financial management analysts, complete the course. In 2009, AT&L began to perform oversight activities to verify that the components were properly structuring the contracts; however, AT&L officials stated that they were not verifying whether program management offices were appropriately accounting for the cost of each deliverable. In addition, we found that DOD has not developed guidance for these oversight activities, including how often these reviews are to be performed, roles and responsibilities for this oversight, the steps to be performed, and the basis for selecting contracts for review. 
In addition, DOD policy requires that contract deliverables, including military equipment, that meet predefined criteria be assigned a unique item identifier. According to DOD officials responsible for the IUID initiative, the purpose of the unique item identifier is to facilitate asset accountability and tracking, including the identification and aggregation of related costs to derive the full cost of a contract deliverable. The department expected to fully implement IUID by fiscal year 2015; however, according to DOD officials, the department is not on target for achieving its timeline. These officials told us that the department has encountered difficulty in obtaining consensus from the components in implementing this initiative, primarily due to the applicability of the IUID requirement to controlled inventory items. The Deputy for Program Development and Implementation, Defense Procurement and Acquisition Policy within AT&L explained that controlled inventory items—which encompass items such as ammunition and threaded fasteners and number in the hundreds of millions—were never intended to be assigned individual unique item identifiers. The department is currently in the process of clarifying this requirement. DOD has determined that if it does not modify the IUID policy to eliminate this requirement, it will not be able to fully implement IUID until fiscal year 2023. If the IUID requirements are revised to exclude these items, DOD expects to fully implement IUID by 2017. DOD officials acknowledged that they have not yet developed policies and procedures that define how IUID will be used to identify and aggregate asset costs. Additional guidance is needed to help ensure consistency for asset accounting. DOD had not developed a policy and procedures requiring the components to account for the full costs of military equipment assets.
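The cost-aggregation role envisioned for unique item identifiers can be illustrated with a minimal sketch. The identifiers, cost categories, and amounts below are hypothetical and do not reflect DOD's actual IUID registry schema:

```python
from collections import defaultdict

# Hypothetical cost records, each tagged with a unique item identifier (UII).
cost_records = [
    {"uii": "UII-0001", "category": "contract",     "amount": 1_200_000.00},
    {"uii": "UII-0001", "category": "modification", "amount": 150_000.00},
    {"uii": "UII-0002", "category": "contract",     "amount": 980_000.00},
]

def aggregate_full_cost(records):
    """Sum all cost records tagged with the same identifier to derive
    a per-item full cost."""
    totals = defaultdict(float)
    for record in records:
        totals[record["uii"]] += record["amount"]
    return dict(totals)

print(aggregate_full_cost(cost_records))
# {'UII-0001': 1350000.0, 'UII-0002': 980000.0}
```

The sketch assumes only that each cost transaction can be tagged with the deliverable's identifier at the time it is recorded; without that tagging, costs cannot be rolled up to individual assets after the fact.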
Standards for internal control call for agencies to develop and implement appropriate policies, procedures, techniques, and mechanisms to ensure that management's directives are consistently carried out. DOD stated that the PFAT4ME, IUID, and the MEV methodology will address this weakness. AT&L officials, including the Deputy Director of Property and Equipment Policy, told us that they are working with the Federal Accounting Standards Advisory Board's Accounting and Auditing Policy Committee (AAPC) to develop full cost guidance. They also noted that AT&L has drafted guidance intended to supplement its PFAT4ME policy memorandum to assist managers in identifying the types of contract costs that should be included in determining the full cost of an asset, such as military equipment. According to these officials, this policy has not been finalized because the department has had difficulty reaching consensus regarding its cost accounting requirements. These officials stated that this draft guidance does not yet address noncontract-related costs, such as program management costs incurred directly by the military services and indirect costs. They did not provide a time frame for completing these efforts. As stated earlier, the department is currently relying on an estimation methodology referred to as MEV to report the cost of its military equipment. In order for management and auditors to rely upon the results of the methodology, it is important that the methodology be implemented consistently. To help ensure consistency in the application of its estimating methodology, DOD developed business rules in 2005. In addition to the MEV implementation issues identified by the DOD IG, we identified inconsistencies in the business rules for estimating the cost of military equipment, which further impact the reliability of reported estimates.
For example, the MEV full cost business rule states that all costs incurred to acquire and bring military equipment to a form and location for its intended use should be capitalized, including the direct costs of maintaining the program management office. However, the MEV program management office business rule states that program management office costs are immaterial and should be expensed. DOD officials agreed that there are inconsistencies in the business rules and acknowledged the need to revisit them. Monitoring is needed to help ensure compliance with department policies. DOD has not established adequate monitoring controls to assess compliance with applicable policies or the extent to which actions taken are achieving their intended objectives. For example, although DOD property accountability policies and regulations require DOD components to (1) perform periodic physical inventories and reconcile the results to the associated property accountability records and (2) track and maintain records for all government-furnished property in the possession of contractors, DOD management has not established the monitoring controls needed to help ensure compliance. Standards for internal control require agencies to develop and implement ongoing monitoring activities over the internal control system to ensure adherence with policies and procedures. DOD financial management and AT&L officials, including the Deputy Director of Property and Equipment Policy within AT&L, stated that weaknesses in the department's ability to ensure compliance with property accountability requirements have impaired its ability to substantiate reported military equipment costs. As a result of the breakdowns in compliance with policies and regulations for recording and tracking property, property records used by the components for valuing their military equipment included assets that no longer existed and did not include other assets that did exist.
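The practical effect of the two conflicting business rules can be shown with hypothetical figures: capitalizing program management office costs (per the full cost rule) and expensing them (per the program management office rule) produce different reported asset values for the same asset. The amounts below are illustrative only, not actual DOD program costs:

```python
# Hypothetical figures; actual DOD program costs would differ.
contract_cost = 10_000_000.00   # direct contract costs for one asset
pm_office_cost = 400_000.00     # program management office costs

# MEV full cost rule: capitalize all costs incurred to bring the asset
# to a form and location for its intended use, including PM office costs.
value_full_cost_rule = contract_cost + pm_office_cost

# MEV program management office rule: treat PM office costs as
# immaterial and expense them in the period incurred.
value_pm_office_rule = contract_cost

# The same asset carries two different values depending on which rule applies.
print(value_full_cost_rule, value_pm_office_rule)
# 10400000.0 10000000.0
```

Applied across an estimating methodology, an inconsistency of this kind means reported values depend on which rule a preparer happened to follow, which is why the report treats it as a reliability issue.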
To address this concern, DOD is in the process of verifying its property accountability records by conducting physical inventories and internal control testing. As stated earlier, DOD has issued guidance, but it does not provide specifics as to the internal control testing to be performed. The DOD Comptroller told us that the department plans to complete this effort in fiscal year 2015. After completing this effort, effective ongoing monitoring activities are needed to ensure departmentwide compliance with policies designed to help maintain reliable property accountability records. Departmentwide cost accounting requirements need to be defined. DOD has not defined its requirements for the identification and aggregation of cost information, which will be the foundation for its development of departmentwide cost accounting and management capabilities. Federal accounting standards require that the full cost of resources that directly or indirectly contribute to the production of outputs (e.g., military equipment acquired) be reflected on an agency's financial statements. To ensure that costs are identified and accumulated in a consistent and comparable manner, entities should define their requirements and procedures for identifying, measuring, analyzing, and reporting costs. Since DOD has stated that it intends to support the identification, aggregation, accounting, and reporting of cost information through the implementation of Enterprise Resource Planning systems (ERPs), it is important that DOD define its cost accounting requirements to ensure that these systems provide these capabilities. The Institute of Electrical and Electronics Engineers (IEEE) and the Software Engineering Institute at Carnegie Mellon recommend that organizations define their requirements, which are the specifications that system developers and program managers use to develop or acquire, implement, and test a system.
This process should identify user requirements, as well as those needed for the definition of the system. It is critical that requirements be carefully defined and that they reflect how the organization's day-to-day operations are or will be carried out to meet mission needs. Improperly defined or incomplete requirements have been commonly identified as a root cause of system failures and of systems that do not meet their cost, schedule, or performance goals. DOD Comptroller and Business Transformation Agency officials stated that the implementation of the ERPs and the Standard Financial Information Structure (SFIS) is intended to address this weakness. Comptroller, Business Transformation Agency, and military department financial management and comptroller officials stated that most of the ERPs under development within the military departments have cost accounting management capabilities inherent in their design, as required by DOD policy. Although agencies should first define their requirements, which are then used to evaluate a system's capabilities to determine whether it will meet users' needs before it is developed or acquired, the department has not yet defined its cost accounting requirements at the major component level, including how SFIS will be used to support cost accounting in the existing and ERP system environments. These officials stated that the department has been unable to reach consensus on how to implement SFIS in support of cost accounting and management. SFIS is intended to be a comprehensive "common business language" that will standardize the financial reporting of information and data for budgeting, financial accounting, and cost/performance management. DOD has not yet determined how the SFIS data elements will be used to identify and aggregate cost information, nor has it established time frames for developing the cost accounting requirements and completing SFIS. Departmentwide cost accounting capabilities need to be developed.
DOD had not developed departmentwide cost accounting capabilities to capture military equipment asset costs. Federal accounting standards require agencies to develop and implement cost accounting systems that provide the capability to collect cost information by responsibility segments, measure the full cost of outputs, provide information for performance measurement, integrate cost accounting and general financial accounting, provide appropriate and precise information, and accommodate special cost-management needs. DOD's legacy financial management and related business systems were not designed to meet current financial reporting requirements and do not provide adequate evidence for supporting material amounts on the financial statements or acquisition management decision making. These systems were designed to record and report information on the status of appropriations and support funds management, and were not designed to collect and record financial information in compliance with federal accounting standards. DOD acknowledged that it does not yet have the capability to identify, aggregate, and capture the full costs of its military equipment and has stated that the ERPs are intended to provide this capability. We have previously reported on problems that DOD has encountered in its efforts to implement ERPs. In 2007, we reported that the Army lacked an integrated approach for implementing its ERPs, which could result in interoperability problems. In September 2008, the Army reported a similar finding. Specifically, the Army reported that interoperability problems were likely to occur due to the lack of common data definitions and structures between the Army's ERPs—General Fund Enterprise Business System (GFEBS), Global Combat Support System-Army (GCSS-Army), and Logistics Modernization Program (LMP)—thus resulting in the need for manual reconciliations and reduced efficiencies.
The report concluded that the planned configuration of these systems may prevent the Army from receiving the intended benefits of an ERP, including financial transparency and cost accounting. Army officials stated that they are addressing these deficiencies, but did not provide a time frame for completion. In July 2009, the Navy reported that its ERP did not yet provide the capability to aggregate cost information to derive the full cost of its military equipment and to segregate military equipment from other general property, plant, and equipment. The Navy Financial Management Officer stated that these deficiencies have not yet been addressed because of other priorities. DOD stated that ERPs are critical to transforming business operations within the military departments. Systems integration is needed. DOD had not fully integrated its property and logistics systems with acquisition and financial systems. DOD policy requires that its financial management systems be planned for and managed together, operated in an integrated fashion, and linked together electronically in an efficient and effective manner to provide reliable, timely, and accurate financial management information. The department's property and logistics systems were not designed to capture acquisition costs and the cost of modifications and upgrades, or to calculate depreciation. Many of the financial management systems in use are not fully integrated with other systems within the military components or departmentwide. The number of system interfaces and subsidiary and feeder systems, and the lack of standard data elements employed by each DOD component, make it difficult to cross-walk data between systems, share data, and ensure consistency and comparability of data. In March 2009, DOD reported that its legacy system environment does not facilitate the identification and aggregation of the full cost of its assets.
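As one example of a calculation the legacy property and logistics systems were not designed to perform, straight-line depreciation allocates a capitalized asset's cost, less salvage value, evenly over its useful life. The figures below are hypothetical:

```python
def straight_line_depreciation(acquisition_cost, salvage_value, useful_life_years):
    """Annual depreciation expense under the straight-line method."""
    return (acquisition_cost - salvage_value) / useful_life_years

# Hypothetical asset: $12 million acquisition cost, $2 million salvage
# value, 10-year useful life.
annual_expense = straight_line_depreciation(12_000_000.0, 2_000_000.0, 10)
print(annual_expense)
# 1000000.0 per year
```

Even this simple calculation depends on a reliable capitalized acquisition cost as its input, which is why the inability to capture acquisition and modification costs cascades into the depreciation figures reported on the financial statements.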
DOD officials, including the Deputy Director of Property and Equipment Policy, AT&L, stated that the implementation of the ERPs and SFIS is intended to address this weakness. To facilitate information sharing for financial reporting purposes, in August 2005 DOD issued a policy requiring systems, including ERPs, that contain financial information to provide the ability to capture and transmit information following the SFIS data structure or, if not, to demonstrate that this capability will be achieved through a crosswalk to the SFIS data structure. DOD components and agencies are required to report to the Business Transformation Agency (BTA) the extent to which SFIS requirements, as defined in the department's business enterprise architecture, are met. BTA officials, including the official responsible for the SFIS initiative, stated that the department is developing a process to validate the information included in the SFIS compliance reports submitted by the components and agencies but did not provide a time frame for completion. However, if certain SFIS requirements, such as cost accounting, are not clearly defined, including a determination of how cost information should be identified, aggregated, and managed within and across acquisition programs, the department's intent to achieve standardization and comparability of cost information will be at risk. Further, as stated above, the Army's ERPs—GFEBS, GCSS-Army, and LMP—may experience interoperability problems because of the lack of common data definitions and structures. In addition, DOD stated that it has not yet determined whether or how WAWF and the IUID will be integrated into the emerging ERP environment to facilitate the identification and aggregation of cost to address the agency's requirements.
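The crosswalk option permitted by the August 2005 policy can be sketched as a mapping from a legacy system's field names and account codes to a standardized data structure. The field names and codes below are hypothetical and are not drawn from the actual SFIS data dictionary:

```python
# Hypothetical mappings from legacy field names and account codes to
# standardized (SFIS-like) equivalents. Real SFIS elements differ.
FIELD_MAP = {"acct_cd": "object_class", "prog": "program_element", "amt": "amount"}
CODE_MAP = {"A1": "25.2", "B7": "31.0"}

def crosswalk(legacy_record):
    """Rename legacy fields and translate account codes to standard values."""
    standard = {FIELD_MAP[k]: v for k, v in legacy_record.items() if k in FIELD_MAP}
    standard["object_class"] = CODE_MAP.get(standard["object_class"],
                                            standard["object_class"])
    return standard

print(crosswalk({"acct_cd": "A1", "prog": "Program A", "amt": 5000.0}))
# {'object_class': '25.2', 'program_element': 'Program A', 'amount': 5000.0}
```

A crosswalk of this kind works only when both the field mapping and the code translation are defined completely and consistently across components, which is the standardization gap the report identifies.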
While DOD is relying on a methodology to estimate the cost of its military equipment, the department has various actions underway to begin laying a foundation for addressing weaknesses that currently impair its ability to identify, aggregate, and account for the full cost of its military equipment assets. For example, DOD has taken important steps such as requiring greater detail in contract-related documentation, such as invoices, and the assignment of unique identifiers to individual items to aid its ability to identify, aggregate, and account for the cost of acquired assets. An additional challenge that DOD faces is establishing the universe of assets subject to valuation and cost accounting. Previous audits and evaluations have shown that some assets that no longer existed were included in DOD's property accountability records, while other existing assets were improperly excluded. This situation exists due to a combination of issues, including gaps in DOD's guidance and policies related to asset accountability, as well as a lack of compliance with existing policies and guidance. These examples illustrate the interconnection between the various asset accounting issues the department is facing and its related actions to improve its cost accounting and financial management for military equipment. DOD has acknowledged that additional actions are needed before the department achieves cost accounting and management capabilities, but stated that its improvement efforts are not yet focused on achieving these capabilities.
Additional guidance is needed on how to identify the full cost of an asset to supplement the PFAT4ME guidance, and departmentwide cost accounting requirements need to be identified and defined at the major component level, including what information is needed to manage cost within and across acquisition programs and support asset valuation and life-cycle management, and how implementation of SFIS and the ERPs will support these requirements. Moreover, DOD needs to determine the extent to which certain actions currently underway, such as WAWF and IUID, will be utilized in the emerging ERP environment. Without additional actions and guidance, the department's current efforts are at risk of not meeting the intended objectives of providing the cost accounting capabilities needed to reliably account for and report the full cost of its military equipment. In order to enhance corrective actions underway within DOD to address previously reported weaknesses and improve DOD's ability to provide reliable information on the full cost of military equipment acquired through MDAPs, we recommend that the Secretary of Defense direct the DOD Chief Management Officer to work jointly with the Under Secretary of Defense (Comptroller); the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the military department Chief Management Officers, as appropriate, to take the following nine actions:

Enforce compliance with the department's records management policy by periodically evaluating the extent to which the components are maintaining documentation in support of the full cost of military equipment.

Establish and implement ongoing monitoring activities to enforce compliance with the department's existing policies and procedures requiring the components to (1) perform periodic physical inventories and reconcile the results to property accountability records after completion of existing efforts to verify the reliability of the property accountability records and (2) track and maintain records for government-furnished property in the possession of contractors.

Update the department's guidance regarding verification of information in component property accountability records to include verification that all assets recorded in the accountability records that are required by DOD to have a Unique Item Identifier are included in its IUID registry.

Develop and implement guidance on how the IUID will be used to identify, aggregate, and report asset cost information.

Classify the PFAT4ME training as a core course for the department's affected acquisition personnel, including program managers, and track attendance to ensure that such personnel take the training.

Develop and implement guidance to help ensure compliance with the oversight activities for the PFAT4ME initiative, including how often these reviews are to be performed, roles and responsibilities for oversight, the steps to be performed, and the basis for selecting contracts for review.

Complete efforts to develop and implement a policy requiring the components to account for the full cost of military equipment, including guidance for what types of contract and other costs should be included and for determining the appropriate accounting treatment of these costs.

Review the MEV methodology business rules to identify inconsistencies and revise the rules as needed.

Assess the WAWF and IUID initiatives and determine the extent to which they will be utilized in the emerging ERP business systems environment.
Additionally, we recommend that the Secretary of Defense direct the military department Chief Management Officers, in consultation with the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense for Acquisition, Technology, and Logistics, as appropriate, to take the following two actions:

Define the cost accounting requirements at the major component level, including how SFIS data elements will be used to identify, aggregate, account for, and report cost information.

After defining the cost accounting requirements, utilize the requirements as input to the ERPs to help ensure that the ERPs will provide the capability to identify and aggregate cost information for the department's assets in accordance with DOD's defined requirements.

We received written comments on a draft of this report from the Under Secretary of Defense (Comptroller), which are reprinted in appendix II. In commenting on the report, the Under Secretary stated that the department agreed with the need to establish a framework that provides improved cost and management information that will support better management of Major Defense Acquisition Programs (MDAPs). The department concurred with the 11 recommendations and cited actions taken, under way, or planned to address them. In its response, the department emphasized that it is sensitive to the cost of obtaining information solely for the purpose of proprietary financial reporting or audit compliance where this information is not otherwise used by management. It further stated that DOD has concluded that it is not cost-effective to gather auditable data on the historical cost of military equipment systems for proprietary financial reporting and audit because the information is not used to manage. DOD has indicated that it will propose changes in department policies and instructions to accommodate this decision.
These pending policy changes will likely impact DOD's implementation of our recommendations, and so at some point we may need to assess DOD's corrective actions under the changed policies to determine whether the actions meet the intent of our recommendations. DOD acknowledges that there may be requirements for cost information related to acquisition-program life-cycle management, which the department will accommodate as appropriate. DOD also stated that it is working with federal standard setters to develop full-cost guidance that would guide its cost accounting efforts. The department will integrate this guidance into the ERPs and will develop, coordinate, and issue policy and guidance on accounting for the full cost of military equipment consistent with our recommendations. We welcome DOD's decision to accommodate such requirements and to contribute to revised guidance that cost-effectively serves management's information needs and supports reliable reporting on the cost of acquisition programs and assets acquired. It is also important to note that while federal accounting standards do not require agencies to collect historical, transaction-based cost data, they encourage agencies that estimate asset value, such as DOD, to establish the internal control practices and systems needed to capture and sustain such data for future acquisitions. We believe that this guidance reflects the importance of actual costs in providing reliable historical information for accountability to the American taxpayer and for management decision making as well. It is important to emphasize that our recommendations are focused not on gathering costs retrospectively but are intended to assist DOD in its efforts to develop the processes and systems needed to produce reliable information going forward.
We believe that providing reliable information is likely to include capturing transaction-based costs as historical information for future management decisions and accountability reporting. The availability of timely, reliable, and useful financial information on the costs associated with acquiring assets is an essential tool that assists both management and Congress in effective decision making such as determining how to allocate resources to programs. It also provides an important monitoring mechanism for evaluating program performance that can help strengthen oversight and accountability. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Deputy Secretary of Defense/Chief Management Officer; the Under Secretary of Defense (Comptroller)/Chief Financial Officer; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of the Army/Chief Management Officer; the Under Secretary of the Navy/Chief Management Officer; the Under Secretary of the Air Force/Chief Management Officer; and the Office of Management and Budget’s Office of Federal Financial Management. This report is available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9095 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objective was to identify previously reported weaknesses that impair the Department of Defense’s (DOD) ability to provide reliable cost information for military equipment acquired through major defense acquisition programs (MDAPs) and determine what actions DOD has taken to address them. 
To address this objective, we obtained an understanding of MDAPs, including the military equipment (i.e., weapon systems) assets acquired through such programs, by reviewing DOD guidance and interviewing officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. We identified and reviewed applicable federal financial accounting standards, and interviewed officials of the Federal Accounting Standards Advisory Board to obtain clarification on the changes made to Statement of Federal Financial Accounting Standards (SFFAS) 35. We searched databases of audit reports issued during calendar years 2005 through 2009 using key terms (e.g., military equipment; general property, plant, and equipment; financial management; weapons systems acquisition; and major defense acquisition programs). We reviewed the results of our search (e.g., reports, studies, and analyses) to identify weaknesses in business operations that, based on relevant federal financial accounting standards, impair DOD's ability to account for the cost of military equipment. We grouped these weaknesses into categories. To identify additional reports or relevant DOD studies and analyses and to obtain clarification, as needed, on reported weaknesses, we interviewed key department officials, including the following: Deputy Director, Financial Improvement and Audit Readiness Directorate, Office of the Under Secretary of Defense (Comptroller); Acting Deputy Director, Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; representatives from the DOD Inspector General's office; representatives from the military services' offices of the Assistant Secretary, Financial Management and Comptroller, Financial Management Operations; and Chief Management Office representatives within DOD and the military services as required by section 304(b). See appendix III for the reports, studies, and analyses reviewed to identify the relevant weaknesses.
We discussed with DOD officials the categories of weaknesses we identified as a result of our search of prior reports, studies, and analyses, and obtained supporting documentation—such as memorandums, directives, an independent validation and verification report for the military equipment valuation initiative, and gap analyses related to the Navy Enterprise Resource Planning effort—from DOD on its actions to address them. Using applicable criteria, we assessed whether the actions taken adequately addressed the identified weaknesses. We interviewed the DOD officials referred to above to obtain clarification and explanation of the actions taken to address the weaknesses, including mechanisms and metrics used to monitor progress. We conducted this performance audit from October 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Financial Management: Achieving Financial Statement Auditability in the Department of Defense. GAO-09-373. Washington, D.C.: May 6, 2009. DOD's High-Risk Areas: Actions Needed to Reduce Vulnerabilities and Improve Business Outcomes. GAO-09-460T. Washington, D.C.: March 12, 2009. Defense Business Transformation: Status of Department of Defense Efforts to Develop a Management Approach to Guide Business Transformation. GAO-09-272R. Washington, D.C.: January 9, 2009. DOD Business Transformation: Air Force's Current Approach Increases Risk That Asset Visibility Goals and Transformation Priorities Will Not Be Achieved. GAO-08-866. Washington, D.C.: August 8, 2008. Fiscal Year 2007 U.S.
Government Financial Statements: Sustained Improvement in Financial Management Is Crucial to Improving Accountability and Addressing the Long-Term Fiscal Challenge. GAO-08-926T. Washington, D.C.: June 26, 2008. Defense Business Transformation: Sustaining Progress Requires Continuity of Leadership and an Integrated Approach. GAO-08-462T. Washington, D.C.: February 7, 2008. Defense Business Transformation: A Full-time Chief Management Officer with a Term Appointment Is Needed at DOD to Maintain Continuity of Effort and Achieve Sustainable Success. GAO-08-132T. Washington, D.C.: October 16, 2007. Defense Business Transformation: Achieving Success Requires a Chief Management Officer to Provide Focus and Sustained Leadership. GAO-07-1072. Washington, D.C.: September 5, 2007. Financial Management: Long-standing Financial Systems Weaknesses Present a Formidable Challenge. GAO-07-914. Washington, D.C.: August 3, 2007. DOD's High-Risk Areas: Efforts to Improve Supply Chain Can Be Enhanced by Linkage to Outcomes, Progress in Transforming Business Operations, and Reexamination of Logistics Governance and Strategy. GAO-07-1064T. Washington, D.C.: July 10, 2007. Defense Business Transformation: A Comprehensive Plan, Integrated Efforts, and Sustained Leadership Are Needed to Assure Success. GAO-07-229T. Washington, D.C.: November 16, 2006. Department of Defense: Sustained Leadership Is Critical to Effective Financial and Business Management Transformation. GAO-06-1006T. Washington, D.C.: August 3, 2006. Department of Defense, Office of Inspector General. Independent Auditor's Report on the DOD Agency-Wide FY 2009 and FY 2008 Basic Financial Statements. D-2010-016. Arlington, Va.: November 12, 2009. Independent Auditor's Report on the Department of the Navy General Fund FY 2009 and FY 2008 Basic Financial Statements. D-2010-014. Arlington, Va.: November 8, 2009. Independent Auditor's Report on the Department of the Navy Working Capital Fund FY 2009 and FY 2008 Basic Financial Statements. D-2010-012.
Arlington, Va.: November 8, 2009. Independent Auditor's Report on the Army General Fund FY 2009 and FY 2008 Basic Financial Statements. D-2010-010. Arlington, Va.: November 8, 2009. Independent Auditor's Report on the Army Working Capital Fund FY 2009 and FY 2008 Basic Financial Statements. D-2010-009. Arlington, Va.: November 8, 2009. Independent Auditor's Report on the Air Force Working Capital Fund FY 2009 and FY 2008 Basic Financial Statements. D-2010-008. Arlington, Va.: November 8, 2009. Independent Auditor's Report on the Air Force General Fund FY 2009 and FY 2008 Basic Financial Statements. D-2010-006. Arlington, Va.: November 8, 2009. Internal Controls over Government Property in the Possession of Contractors at Two Army Locations. D-2009-089. Arlington, Va.: June 18, 2009. Independent Auditor's Report on the Department of Defense FY 2008 and FY 2007 Basic Financial Statements. D-2009-021. Arlington, Va.: November 12, 2008. Independent Auditor's Report on the Army Working Capital Fund FY 2008 and FY 2007 Basic Financial Statements. D-2009-020. Arlington, Va.: November 8, 2008. Independent Auditor's Report on the Army General Fund FY 2008 and FY 2007 Basic Financial Statements. D-2009-018. Arlington, Va.: November 8, 2008. Report of Marine Corps Funds. D-2007-122. Arlington, Va.: September 11, 2007. Internal Controls Over Military Equipment Vendor Pay Disbursement Cycle, Air Force General Fund: Financial Accounting. D-2007-059. Arlington, Va.: February 8, 2007. Financial Management: Contracts Classified as Unreconcilable by the Defense Finance and Accounting Service Columbus. D-2005-040. Arlington, Va.: March 14, 2005. Implementation of a Cost-Accounting System for Visibility of Weapon Systems Life-Cycle Costs. D-2001-164. Arlington, Va.: August 1, 2001. Management of Special Tooling and Special Test Equipment at Naval Air Systems Command. N2009-0026. Washington, D.C.: April 24, 2009.
Logistics Modernization Program System Federal Financial Management Improvement Act of 1996 Compliance – First Deployment Functionality. A-2007-0205-FFM. Alexandria, Va.: September 7, 2007. Management of Government-Furnished Property, Fort Hood, Texas. A- 2005-0126-FFE. Alexandria, Va: March 4, 2005. Comprehensive Engine Management System Controls. F2005-0004-FB200 Washington, D.C.: April 27, 2005. 0. Agency Financial Report for Fiscal Year 2009. Washington, D.C.: November 16, 2009. Department of Defense Fiscal Year 2009 Report on Reliability. Washington, D.C.: September 30, 2009. Financial Improvement and Audit Readiness Plan (FIAR Plan). March 30, 2009. Department of Defense, Office of the Secretary of Defense. Linking Financial Data to Contract Documents. Washington, D.C.: March 18, 2009. Fiscal Year 2008 Agency Financial Report. Washington, D.C.: Novembe 17, 2008. U.S. Department of Defense 2008 Enterprise Transition Plan (ETP). Washington, D.C.: September 30, 2008. Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Annual Statement Required under the Federal Managers’ Financial Integrity Act (FMFIA) of 1982. Washington, D.C.: June 27, 2008. Office of the Under Secretary of Defense (Comptroller). Summary of Business Rule for Group or Composite Depreciation. October 24, 2006. Office of the Under Secretary of Defense (Comptroller). Summary of Business Rule with Corresponding Analysis on the Department of Defense Military Equipment Capitalization Threshold. October 24, 2006. Office of the Under Secretary of Defense (Comptroller). Summary of Business Rule for Recording Program Management Office (PMO) Costs October 24, 2006. . Office of the Under Secreta Logistics, Property and Equipment Policy. Internal Validation and Verification Project: Military Equipment Valuation Property and Equipment Policy. June 13, 2006. 
ry of Defense for Acquisition, Technology, and Office of the Under Secretary o Business Rules for Accounting for and Reporting of Military Equipment: Componentization. June 8, 2005. f Defense (Comptroller). Summary of Transforming Department of Defense Financial Management: A Strategy for Change. Washington, D.C.: April 13, 2001. U.S. Army Statement. Washington, D.C.: November 2009. . Fiscal Year 2009 United States Army Annual Financial Secretary of the Army to the Secretary of Defense. Fiscal Year (FY) 20 Statement of Assurance on Internal Controls as Required Under the Federal Managers’ Financial Integrity Act of 1982. Washington, D.C.: August 17, 2009. U.S. Army. Fiscal Year 2008 United States Army Annual Financial Statement. Washington, D.C.: November 2008. U.S. Army Program Executive Office Enterprise Information Systems. U.S. Army ERP Phase III Analysis Brief. Fort Belvoir, Va.: September 26, 2008. Secretary of the Army to the Secretary of Defense. Fiscal Year (FY) 20 Statement of Assurance on Internal Controls as Required Under the Federal Managers’ Financial Integrity Act of 1982. Washington, D.C.: August 13, 2008. Fiscal Year 2009 Department of the Navy Annual Financial Report Washington D.C.: October 2009. . Under Secretary of the Navy to the Secretary of Defense. Annual Statement Required Under the Federal Managers’ Financial Integrity Act (FMFIA). Washington, D.C.: August 25, 2009. Science Applications International Corporation (SAIC)/Eagan, McAllister, Associates, Inc. (EMA) for the Department of the Navy Financial Improvement Program, Naval Air Systems Command. Gap Analysis peration Materials and Supplies, Study: NAVAIR Management of O General Equipment, and Military Equipment in Navy ERP, Lexington Park, MD: July 31, 2009. Department of the Navy Annual F Washington D.C.: October 2008. inancial Report Fiscal Year 2008. Secretary of the Navy to the Secretary of Defense. Annu Required Under the Federal Managers’ Financial Integrity Act (FMFIA). 
In addition to the contact person named above, key contributors to this report were Evelyn Logue, Assistant Director; Vanessa Estevez; Maxine Hattery; John Lopez; Chris Martin; Heather Rasmussen; Darby Smith; and Omar Torres.

Major defense acquisition programs (MDAP) are used to acquire, modernize, or extend the service life of the Department of Defense's (DOD) most expensive assets, primarily military equipment. The Weapon Systems Acquisition Reform Act of 2009 (P.L. 111-23), section 304(b), directed GAO to review weaknesses in DOD's operations that affect the reliability of financial information for assets acquired through MDAPs. To do so, GAO identified and reviewed previously reported weaknesses that impair DOD's ability to provide reliable cost information for military equipment acquired through MDAPs and determined what actions DOD has taken to address them. GAO searched databases of audit reports issued during calendar years 2005 through 2009 to identify previously reported weaknesses. Using applicable criteria, GAO assessed whether the actions taken by DOD adequately addressed these weaknesses.
GAO found that the weaknesses that impaired the department's ability to identify, aggregate, and account for the full cost of the military equipment it acquires fell into seven major categories. Specifically, DOD had not (1) maintained support for the existence, completeness, and cost of recorded assets; (2) structured its contracts at the level of detail needed to allocate costs to contract deliverables; (3) provided guidance to help ensure consistency for asset accounting; (4) implemented monitoring controls to help ensure compliance with department policies; (5) defined departmentwide cost accounting requirements; (6) developed departmentwide cost accounting capabilities; and (7) integrated its systems. Although the department has acknowledged that it is primarily focused on verifying the reliability of information, other than cost, recorded in its property accountability systems, DOD has begun actions to address these weaknesses and improve its capability to identify, aggregate, and account for the full cost of its military equipment. For example, DOD is requiring that acquisition contracts be structured in a manner that facilitates application of the appropriate accounting treatment for contract costs, including the identification of costs that should be captured as part of the full cost of a deliverable. It has also begun to require that all contract deliverables that meet defined criteria be assigned a unique item identifier to facilitate asset tracking and aggregation of costs, and that electronic contract-related documentation, such as invoice and receipt/acceptance documents, be maintained in a central data repository to ensure the availability of supporting documentation. Moreover, the department has begun to identify cost accounting data elements within its Standard Financial Information Structure (SFIS) and requires that its business-related Enterprise Resource Planning (ERP) systems support this structure.
These efforts are intended to improve data sharing and integration between business areas. DOD acknowledged that the actions taken to date do not yet provide the department with the capabilities it needs to identify, aggregate, and account for the full cost of its military equipment. For example, DOD has begun to develop ERPs but has not yet defined the cost accounting requirements to be used to evaluate whether these ERPs will provide the functionality needed to support cost accounting and management. DOD stated that additional actions, sustained management focus, and the involvement of many functional groups across DOD are needed before the weaknesses that impair its ability to account for the full cost of the military equipment it acquires are addressed. Until DOD defines its cost accounting requirements and completes the other actions it has begun (e.g., defining data elements in SFIS) to support cost accounting and management, DOD is at risk of not meeting its financial management objective to report the full cost of its military equipment. DOD has stated that until these actions are completed, it will continue to rely on its military equipment valuation (MEV) methodology to estimate the cost of its military equipment for financial reporting purposes. GAO is making 11 recommendations intended to strengthen the actions DOD has taken to begin improving its ability to identify, aggregate, and account for the cost of military equipment acquired through MDAPs. Specifically, the recommendations focus on the need to define departmentwide cost accounting requirements and develop the process and system capabilities needed to support cost accounting and management. DOD concurred with the recommendations.
After the terrorist attacks of September 11, 2001, the President signed the Aviation and Transportation Security Act (ATSA) into law on November 19, 2001, with the primary goal of strengthening the security of the nation’s aviation system. ATSA created TSA as the agency responsible for securing all modes of transportation, including aviation. The President also issued the National Strategy for Homeland Security in July 2002. The strategy sets forth a plan to strengthen homeland security through the cooperation of federal, state, local, and private-sector organizations in various areas, and it aligns and focuses homeland security functions into six critical mission areas: (1) intelligence and warning, (2) border and transportation security, (3) domestic counterterrorism, (4) protecting critical infrastructures and key assets, (5) defending against catastrophic threats, and (6) emergency preparedness and response. A theme of the national strategy is that homeland security is a shared responsibility among these stakeholders, not solely the responsibility of the federal government. In the case of flight and cabin crew member security training, air carriers and TSA both play an important role. Air carriers are responsible for developing and delivering security training programs for their crew members. TSA (and previously FAA) is responsible for developing the guidance and standards that air carriers are to use to design and deliver their security training and for monitoring air carriers’ flight and cabin crew member security training programs for compliance with the guidance and standards. If TSA finds that an air carrier has not developed and conducted the required flight and cabin crew member security training, TSA has a range of actions it can take, including imposing fines and, in extreme circumstances, forcing the air carrier to shut down its operations.
The Bureau of Transportation Statistics reported that 105 domestic passenger air carriers were operating in the United States in 2004. Of the 105 air carriers, 12 (11 percent) are major air carriers that carried over 76 percent of the passengers in 2004. With a few exceptions for small aircraft, every commercial flight in the United States has at least two flight crew members and one cabin crew member onboard. These crew members are viewed as the last line of defense in what TSA describes as its layered security system, which includes perimeter security (e.g., airport security fencing), 100 percent passenger and checked baggage screening, hardened flight deck doors, armed federal air marshals, and armed pilots. Figure 1 provides the number of domestic air carriers by carrier group (major, national, and regional) and the percentage of passengers flown domestically by carrier group during fiscal year 2004. Federal guidance for air carriers to use in developing their flight and cabin crew security training programs has been in place for over 20 years. FAA developed the crew member security training guidance, referred to as Common Strategy I, in the early 1980s in response to numerous hijacking incidents in the late 1970s. Common Strategy I generally instructed air carriers to develop training programs that called for flight and cabin crew members to cooperate with threatening passengers or hijackers and to delay compliance with their demands. Based on this guidance, FAA also developed corresponding security training standards that set forth the requirements for flight and cabin crew member security training. Air carriers were required to incorporate the guidance and standards into their security training programs. FAA principal security inspectors and principal operations inspectors were responsible for monitoring air carriers’ compliance with the security training standards.
The nature of the terrorist attacks on September 11, 2001, however, demonstrated that the philosophy of Common Strategy I—to cooperate with hijackers—was flawed because it presumed that hijackers would not use aircraft as weapons of mass destruction. Following the events of September 11, 2001, section 107 of ATSA required FAA, in consultation with TSA and other stakeholders, to develop detailed guidance for flight and cabin crew security training programs within 60 days after the enactment of the act. FAA developed and issued security training guidance, in accordance with the requirements of ATSA, on January 19, 2002. In February 2002, TSA assumed responsibility for monitoring United States passenger air carriers’ security training, and the air carrier security inspection function was transferred from FAA to TSA. Following the enactment of ATSA, the President signed into law two acts that amended the flight and cabin crew training requirements codified at title 49 of the U.S. Code, section 44918—the Homeland Security Act of 2002 and Vision 100. The Homeland Security Act, enacted on November 25, 2002, amended the law by, among other things, mandating that, if TSA updated training guidance, it must issue a rule to include elements of self-defense in the training programs.
Vision 100, subsequently enacted on December 12, 2003, amended the flight and cabin crew security training law in its entirety to require that (1) air carriers providing scheduled passenger air transportation carry out a training program that addresses the 10 elements listed in table 1; (2) TSA approve the air carriers’ training programs; (3) TSA, in consultation with FAA, monitor air carrier training programs and periodically review an air carrier’s training program to ensure the program is adequately preparing crew members for potential threat conditions; (4) TSA, in consultation with FAA, order air carriers to modify training programs to reflect new or different security threats; and (5) TSA develop and provide an advanced voluntary self-defense training program that provides both classroom and effective hands-on training in, at least, the six training elements listed in table 2. Table 1 lists the minimum training elements required by law, as enacted by ATSA and as amended by Vision 100, for basic crew member security training. Table 2 lists the training elements that TSA must include in an advanced voluntary self-defense training program for flight and cabin crew members under the law, as amended by Vision 100. Over the years, our work on best practices in training has found that high-performing organizations generally follow certain key steps in developing and measuring the effectiveness of training programs.
These steps include (1) planning—developing a strategic approach that establishes priorities and leverages investments in training to achieve agency results, identifying the competencies—commonly referred to as knowledge, skills, abilities, and behaviors—needed to achieve organizational missions and goals, and measuring the extent to which employees possess these competencies; (2) design and development—identifying specific training initiatives that the agency will use, along with other strategies, to improve individual and organizational performance; (3) implementation—ensuring effective and efficient delivery of training opportunities in an environment that supports learning and change; and (4) evaluation—assessing the extent to which training efforts contribute to improved performance and results. Building on the legislatively mandated guidance developed by FAA and the corresponding standards, TSA enhanced crew member security training guidance and standards with input from stakeholders in accordance with the law, as amended by Vision 100. TSA policy and training officials stated that they revised the guidance and standards for two main reasons. First, the law required that air carriers include additional training elements in their basic crew member security training programs to prepare flight and cabin crew members for potential threat conditions. Second, TSA determined that the guidance and standards needed to be better organized and to more clearly define security training elements, in part due to feedback from air carriers, flight and cabin crew member labor organizations, and associations representing air carriers. For example, stakeholders we interviewed and our own review found that the organization of the previous security training standards was difficult to follow in that several requirements were addressed in multiple sections of the document rather than consolidated in a single section.
During the summer of 2003 and in May 2004, TSA established two internal working groups composed of representatives of its policy, training, regulatory, and/or legal offices. One working group was responsible for revising the security training guidance, and the other working group was responsible for revising the corresponding security training standards—the standards from which air carriers must train their flight and cabin crew members. TSA officials stated that these working groups determined the reasonableness and appropriateness of the security training elements contained in the guidance and standards in place at that time and what additional training elements were needed. During the development of the revised guidance and standards, TSA provided external stakeholders with two opportunities to provide comments. In July 2004, during the first comment period, TSA convened a meeting of external stakeholders to present an overview of the draft revised guidance and standards and to provide copies of the documents for their review and comment. TSA initially requested that stakeholders provide comments on the draft revised guidance and standards within 2 weeks. However, in response to stakeholder concerns about the short comment period, TSA extended the comment period for an additional 2 weeks. After consolidating all stakeholder comments, TSA’s internal working group reviewed the comments to determine which to incorporate in the guidance and standards. In August 2004, during the second comment period, TSA convened additional meetings with external stakeholders—one meeting with air carrier associations and another with crew member labor organizations—to review each of the stakeholders’ comments and to discuss changes made to the revised guidance and standards in response to these comments. In September 2004, TSA provided the stakeholders with a 30-day comment period on the revised guidance and standards.
After receiving comments and determining which of the suggested changes to include in the revised guidance and standards, TSA issued the finalized guidance and standards to air carriers on January 3, 2005. Stakeholders we interviewed and our own analysis of revisions made to the guidance and standards generally found the revised guidance and standards to be better organized and to provide some additional clarity on security training requirements for crew members. For example, we found that the previous standards only implicitly addressed the requirement for training on the psychology of terrorists and addressed it in multiple sections in the document. In contrast, the revised standards organized information on this requirement in a single section and clearly identified the requirement as “psychology of terrorists.” Additionally, the previous guidance did not define what constitutes life-threatening behavior, whereas the revised guidance provides both a definition of this behavior and examples. Although TSA made these enhancements, stakeholders we interviewed and stakeholders identified by TSA raised concerns about the reasonableness of applying parts of the guidance and standards to both flight and cabin crew members, the difficulty in implementing some of the standards without additional information or training tools from TSA, and the vagueness of some of the guidance and standards. Our interviews with officials from 19 air carriers and 8 representatives from aviation associations and crew member labor organizations, conducted after the revised guidance and standards were finalized in January 2005, also identified similar concerns. Regarding the applicability of the standards to flight and cabin crew members, officials from 9 of the 19 air carriers that we interviewed stated that some of the training standards remained generalized to both pilots and flight attendants, rather than targeted to their specific job functions in responding to a security threat.
For example, TSA requires both pilots and flight attendants to have annual hands-on training on how to use restraining devices. However, officials from 2 of the 19 air carriers we interviewed stated that training pilots annually on how to use restraining devices is not necessary because pilots are trained to stay inside the flight deck at all times, even when an incident occurs in the aircraft cabin. TSA officials stated that all crew members need annual hands-on training on how to use restraining devices because off-duty flight crew members frequently fly, and if an incident occurs in the aircraft cabin, they will know how to use the devices. One crew member labor organization agreed with TSA’s position, stating that incidents could occur in which pilots may need to apply the restraints. Additionally, the crew member labor organization official stated that since pilots in command are the security coordinators on flights, they must be familiar with the strategies, tactics, and techniques that flight attendants may use in defense of themselves, the passengers, and the aircraft. Some stakeholders also expressed concerns about the difficulty in implementing some of the standards without additional information or training tools from TSA. For example, officials from 12 of the 19 air carriers we interviewed stated that TSA had not provided sufficient training materials or tools to enable them to deliver certain elements of the security training. These air carriers stated that although they requested the additional information or tools, TSA responded that air carriers were responsible for identifying and providing the required tools needed to deliver the security training.
A labor union organization official stated that relying on training organizations and air carriers to develop the training materials “perpetuates the disparate quality and breadth of training available throughout the industry, which does little to assure a common strategy approach to securing United States skies.” Additionally, officials from 4 of the 19 air carriers we interviewed expressed concerns that TSA did not take into consideration that some air carriers do not have the expertise and personnel to conduct the annual basic self-defense training. TSA responded that basic self-defense training is legislatively required and the Federal Air Marshal Service, FBI, and other agencies are willing to work with the air carriers on their overall flight and cabin crew security training. TSA officials further stated that the air carriers should have an established line of communication with these agencies, but if the air carriers are seeking a point of contact, TSA would provide agency contact information. According to a Federal Air Marshal Service official, a Federal Air Marshal Service liaison meets periodically with the air carriers and aviation industry associations representing the air carriers and crew members to discuss overall communications including flight and cabin crew training issues. Furthermore, 9 of 27 stakeholders (air carriers, associations representing air carriers, and crew member labor organizations) we interviewed were concerned about the lack of definition, guidance, and clarity for parts of the revised security training guidance and standards. For example, the crew member security training standards require that crew members demonstrate proficiency in various security training elements, such as the use of protective and restraining devices and proper conduct of a cabin search. However, the standards do not define proficiency. 
Officials from a crew member labor organization stated that without clear, measurable training objectives that individual air carrier training departments can use to determine crew member proficiency, the likelihood that training quality and content will vary from air carrier to air carrier increases. TSA training officials stated that air carriers, in conjunction with their training departments, are required to develop a method for determining crew member proficiency in the required training elements. TSA officials stated that the air carriers developed the training program, not TSA, and are therefore in the best position to define proficiency. TSA officials stated that their training staff’s review of the training materials includes verifying that there are opportunities built into the training for flight and cabin crew members to demonstrate proficiency in the required elements. TSA officials further stated that air carriers should have the latitude to tailor their desired level of proficiency for the various standards to their individual operations. We found, however, that without standards for proficiency, which commonly serve as criteria for success in training programs, TSA will only be able to document training activity, and not the results of the training, i.e., whether the intended knowledge was in fact transferred to the training participants at a level acceptable to TSA. TSA has not established strategic goals or performance measures for flight and cabin crew member security training, nor required air carriers to do so. GPRA requires that agencies use outcome-oriented goals and measures that assess results, effects, or impacts of a program or activity compared to its intended purpose. GPRA also requires federal agencies to consult with key stakeholders—those with a direct interest in the success of the program—in developing goals and measures.
Strategic goals explain the results that are expected from a program and when to expect those results. These goals should be expressed in a manner that could be used to gauge success in the future. Performance measures (indicators used to gauge performance) are meant to cover key facets of performance and help decision makers assess program accomplishments and improve program performance. With respect to flight and cabin crew security training, strategic goals would represent the key outcomes that TSA expects air carriers to achieve in providing flight and cabin crew member security training, and performance measures would gauge to what extent air carriers are achieving these outcomes. TSA training officials stated that they decided not to develop strategic goals or performance measures because they view their role in the crew member security training program as purely regulatory—that is, monitoring air carriers’ compliance with the training guidance and standards established by TSA. In this regard, TSA is the regulatory agency responsible for determining whether the security training program is adequately preparing flight and cabin crew members for potential threat conditions. TSA training officials also stated that due to the varying nature of the air carriers’ training programs, TSA believes that it is the individual air carriers that are responsible for establishing goals and performance measures specific to their security training programs and for using the results to make program improvements. However, without overall strategic goals established by TSA in collaboration with air carriers, air carriers do not have a framework from which to develop their individual performance goals and measures. Furthermore, TSA has not explicitly required air carriers to develop performance goals and measures or provided them with guidance and standards for doing so. 
Without guidance and standards, the 84 individual air carriers may establish inconsistent performance goals and measures. Additionally, the absence of performance goals and measures for flight and cabin crew security training limits the ability of TSA and air carriers to fully assess the accomplishments of the flight and cabin crew member security training and to target program improvements. TSA has recently taken steps to improve its oversight of air carriers’ crew member security training. One step includes adding staff with training expertise to review air carriers’ crew member security training curriculums to determine whether there is evidence that each applicable training standard is being met. When we began our review, TSA’s review of air carriers’ crew member security training programs was solely the responsibility of the principal security inspectors. These TSA inspectors were responsible for conducting a regulatory review to determine whether air carriers’ crew member security training curriculums met the requirements set forth in the standards. Beginning in January 2005, TSA began using training staff with expertise in designing training programs to review the overall design of the air carriers’ crew member security training curriculum, how the information is to be conveyed, the expected setting of the practice environment, and the way in which the information is to be presented—and to ensure that the security training curriculum satisfies the required security training standards. TSA inspectors are responsible for identifying which standards apply to each of the air carriers, based on their knowledge of the air carrier’s flight operations, size of aircraft, and presence or absence of international routes. TSA officials stated that between January 2005, when the revised guidance and standards were issued, and August 2, 2005, the training staff were involved in the review of the 71 security training curriculums that had been submitted to TSA. 
In January 2005, TSA took another step to strengthen its review of air carriers’ flight and cabin crew member security training by developing a standard form for TSA inspectors and training staff to use to conduct and document their reviews of air carriers’ security training curriculums. Also, TSA developed an internal memorandum, dated January 5, 2005, that generally describes the review process TSA inspectors and training staff are to use when reviewing air carriers’ crew member security training curriculums. The standard form, which lists the required training elements, is used by TSA inspectors to document the requirements stated in the revised security training standards that apply to a particular air carrier, and by the training staff to verify that air carrier’s initial and recurrent training plans include the applicable requirements and to document their comments. Prior to the development of this form, there were no documented procedures for how the inspections were to be conducted or a standard form for TSA inspectors to use to document their reviews of air carriers’ crew member security training curriculums. Additionally, TSA lacked complete documentation of its reviews of air carriers’ security training. Specifically, although TSA officials stated that TSA inspectors reviewed all 84 air carriers’ revised security training curriculums in response to January 2002 guidance and the corresponding standards, TSA was only able to provide us documentation related to 11 reviews. The Comptroller General’s Standards for Internal Control in the Federal Government states that agencies should document all transactions and other significant events and should be able to make this documentation readily available for examination. 
With the development of a standard form for reviewing air carriers’ security training curriculums in January 2005, TSA was able to provide us with documentation for all 18 of the reviews of air carriers’ security training curriculums that TSA inspectors and training staff had conducted between January 2005 and April 20, 2005. Additionally, in January 2005, TSA began requiring air carriers to obtain participant feedback at the end of crew member security training. According to our human capital work, participant feedback can be useful in providing the agency with varied perspectives on the effect of the training. However, TSA training officials stated that they are not certain how, if at all, they will use the participant feedback in conducting oversight of air carriers’ crew member security training programs. TSA officials stated that it is the responsibility of the individual air carriers to assess the results of participant feedback and to make changes to improve the security training as necessary. In May 2005, TSA training officials acknowledged that it would be useful for TSA inspectors to review participant feedback on an annual basis to assess flight and cabin crew members’ views of their air carriers’ security training programs and to identify trends within and across air carriers. These officials noted that the results could provide TSA inspectors with information they could use to prioritize their reviews of air carriers’ crew member security training. However, they stated that reviewing the participant feedback is a resource-intensive process that also requires a certain level of expertise and is not feasible for TSA to undertake at this time. Without plans for reviewing participant feedback, TSA is not making use of available information on possible deficiencies in the quality of air carriers’ security training programs or identifying best practices that could be shared. 
Furthermore, TSA is taking steps to address a staffing shortage among its TSA inspector workforce to enable greater monitoring of air carriers’ flight and cabin crew member security training. Specifically, on April 1, 2005, TSA reorganized its inspection staff into a newly created Office of Compliance. TSA officials stated that this reorganization should help address the staffing shortfalls that previously existed. TSA also issued position announcements in an effort to fill vacant inspector positions. TSA officials stated that they had about 23 TSA inspectors on board when the inspection function transferred from FAA to TSA in February 2002. As of February 2005, TSA had 15 inspectors on board, 5 of whom had been in the position for less than 5 months. Between January 2004 and September 2004, the TSA inspector workforce ranged from about 7 to 14 inspectors. TSA officials stated that a number of these staff subsequently left TSA because of advancement opportunities within the Department of Homeland Security and personal reasons. As part of TSA’s monitoring efforts, TSA inspectors periodically visit air carriers to observe classroom delivery of flight and cabin crew member security training and to review air carrier records documenting flight and cabin crew member completion of required security training. TSA officials stated that with the existing inspector workforce, they were able to observe the classroom delivery of flight and cabin crew member security training at only about 25 air carriers during fiscal year 2004. Although TSA is not required to observe the classroom delivery of all air carriers’ flight and cabin crew member security training on an annual basis, TSA officials stated that these observations allow them to determine whether security training is being delivered consistent with air carriers’ approved security training curriculums and to identify potential problems with the training delivery. 
While TSA has taken steps to strengthen its oversight of air carriers’ crew member security training, TSA has not fully developed procedures for monitoring this training. TSA is required by law to monitor and periodically review air carriers’ security training to ensure that the training is adequately preparing crew members for potential threat conditions. The Comptroller General’s Standards for Internal Control in the Federal Government calls for controls generally to be designed to assure that ongoing monitoring occurs during the course of normal operations and that transactions and other significant events be documented clearly and the documentation be readily available for examination. We identified weaknesses in TSA’s controls in these areas with regard to monitoring and reviewing air carriers’ flight and cabin crew security training. First, although TSA recently developed a standard form for its inspectors and training staff to use in reviewing air carriers’ flight and cabin crew member security training, TSA has not developed procedures for completing this form. TSA officials acknowledged that there are no documented procedures or criteria for staff to use to complete the standard form or for determining which standards apply to individual air carriers and whether or not to approve an air carrier’s security training curriculum. The lack of written procedures may result in inconsistent assessments of the air carriers’ security training curriculums and inconsistent application of the standards to air carriers. Formal procedures for reviewing air carriers’ flight and cabin crew security training could provide standardization when TSA inspectors and training staff assess the air carriers’ security training curriculum. Second, TSA does not have documented procedures for conducting and documenting observations of air carriers’ classroom delivery of flight and cabin crew member security training. 
During fiscal year 2004, according to TSA officials, TSA inspectors visited about 25 air carriers to observe crew member security training and review files, such as records documenting crew members’ completion of required security training. TSA officials stated that they did not have sufficient resources to visit all 84 air carriers to observe their security training. We requested records documenting TSA inspectors’ visits to air carriers to assess the completeness and consistency of these reviews. However, TSA officials stated that they were unable to provide us with the requested documentation. Without written procedures to guide TSA inspectors in observing security training and assessing the results of their observations, its inspectors may not conduct comprehensive and consistent reviews. Additionally, without a mechanism for documenting and maintaining TSA inspectors’ reviews of air carriers’ security training delivery in a standard format, TSA lacks the ability to track the results of these reviews and identify patterns, including strengths and weaknesses, in training delivery within and across air carriers. In June 2005, a TSA official stated that TSA inspectors will monitor at least one flight and cabin crew member training class per year to ensure the curriculum is being followed. TSA inspectors are to provide the results of the monitoring to the principal operations inspector via memo or email. TSA officials stated that the results of this monitoring of crew member security training will be maintained in a database, but TSA has not established a time frame for completing this database or documented procedures for this process. Additionally, although the law requires TSA to consider complaints from crew members in determining when to review air carriers’ flight and cabin crew member security training programs, TSA does not have procedures for considering such complaints. 
TSA inspection officials stated that they were not aware of any instances in which crew members had complained to TSA about security. However, in the event that TSA does receive complaints from crew members in the future, it is important that TSA have established and documented procedures to inform its inspectors of how to consider the complaints in reviewing air carriers’ security training programs. TSA officials stated that complaints from flight and cabin crew members will be directed to their Office of Transportation Security Policy for review and all decisions regarding flight and cabin crew member security training program modifications or policy changes will be evaluated and disseminated by this office. The officials also stated that if the complaints involve the training delivery process, the TSA inspectors may be required to increase the frequency of on-site inspections based upon an evaluation of the seriousness of the complaints that are received. TSA officials stated that they plan to develop a handbook for its inspectors and guidance for its training staff to use in monitoring and reviewing air carriers’ flight and cabin crew member security training to help provide assurance that standardized monitoring occurs. However, TSA has not established a time frame for completing these efforts. In December 2004, as required by law, TSA implemented an advanced voluntary crew member self-defense training program for flight and cabin crew members after obtaining stakeholder input. Participation in the voluntary training course has been relatively low, with only 474 flight and cabin crew members (39 percent of total capacity) attending the training during the first 7 months of the program. TSA training officials attributed the low participation to crew members having a difficult time in obtaining 3 consecutive days of leave to enable them to participate in the training. 
Additionally, although TSA incorporated some stakeholder concerns into the course design, some stakeholders, including individuals TSA identified as experts, as well as our own analysis, identified concerns regarding the training design and delivery, including the training’s voluntary nature, the setting’s lack of realism, the training’s lack of recurrence, and the instructor’s lack of knowledge of crew members’ actual work environment. TSA has not developed performance measures for the program or established a time frame for evaluating the program’s overall effectiveness, including the effectiveness of the training design and delivery. TSA developed and implemented an advanced voluntary self-defense training program for flight and cabin crew members in consultation with key stakeholders by December 12, 2004, as required by law. TSA consulted with law enforcement personnel, security experts with expertise in self-defense training, representatives of air carriers, flight attendants, labor organizations representing flight attendants, terrorism experts, Federal Air Marshal Service officials, and educational institutions offering law enforcement training programs, in developing the self-defense training program and determining how to apply the training elements specified by law. According to TSA officials, in 2002, in anticipation of having to develop a mandatory self-defense training program as required by the Homeland Security Act, TSA established a working group composed of law enforcement experts, Federal Air Marshals, and other subject matter experts, such as aviation security experts and self-defense/martial arts training experts, to assess what elements should be included in the training. This working group collaborated on the overall program design and delivery, including the program goals and objectives and the course content and delivery method. 
The working group’s efforts were placed on hold in 2003 when TSA was advised that legislation would be enacted to make the training a voluntary program to be provided by TSA, rather than a mandatory training program to be delivered by individual air carriers. After the enactment of Vision 100 in December 2003, TSA continued its efforts to develop an advanced voluntary self-defense training program until the program’s official implementation in December 2004, building on the input of the initial working group. The overall goal of the advanced voluntary crew member self-defense training, as defined by TSA, is to enable crew members to develop a higher level of competency in self-defense tactics to prevent or reduce the possibility of injury or death to one’s person or the takeover of an aircraft. TSA also established several objectives for the training: recognize potential threats before an act of violence occurs; interpret behaviors that lead to potential hostile acts; conclude appropriate courses of action crew members must take to avert hostile actions intended to injure crew members or passengers or to take over an aircraft; and apply appropriate individual self-protection measures and self-defense tactics to prevent or reduce the possibility of injury or death to one’s person or the takeover of an aircraft. Prior to implementing the voluntary training in December 2004, TSA piloted the prototype advanced voluntary self-defense training in August and September 2004 in five cities with major airline hubs and refined the training based on comments from participants. 
The participants provided positive feedback in four areas: (1) the repetitive moves taught throughout the course made the self-defense tactics easy to learn; (2) the training prepared them mentally and physically to defend themselves and provided a good foundation in self-defense; (3) the small class size and instructor-to-student ratio of 1 to 8 were conducive to a productive learning environment; and (4) the location of the training facility and lodging was well received. TSA also received feedback on changes that could be made to enhance the training. Table 3 provides a summary of the stakeholders’ concerns on TSA’s prototype advanced voluntary self-defense training and actions taken by TSA in response to the concerns. As of June 2005, a total of 474 crew members had participated in the training in 51 classes. During the initial deployment of the advanced voluntary crew member self-defense training in December 2004, only about 14 percent of total capacity was utilized, and only about 38 percent of enrolled participants actually attended the training course. Participation increased in January through March 2005, but declined in April through June 2005, with only 23 percent of total capacity utilized in June 2005. TSA plans to offer 46 additional advanced voluntary self-defense training courses during the remainder of fiscal year 2005 in 10 cities. According to TSA officials, TSA estimated that approximately 21,700 crew members will participate in the training, based on information obtained from air carrier associations and crew member labor organizations. Table 4 provides information on flight and cabin crew member participation in the advanced voluntary self-defense training from December 2004 through June 2005. TSA officials stated that the low participation rate in December 2004 was largely due to the short advance notice they provided stakeholders regarding the training. 
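As a rough consistency check on the figures above, the reported attendance (474 crew members, or 39 percent of total capacity over the first 7 months) and the 51 classes held imply a total capacity of roughly 1,200 seats, or about two dozen seats per class. This is a back-of-the-envelope sketch; the implied capacity and class size are inferred from the report’s percentages, not stated directly.

```python
# Back-of-the-envelope check of the participation figures cited above.
# The report states that 474 crew members attended, representing
# 39 percent of total capacity, across 51 classes; the implied capacity
# and average class size are inferred, not reported figures.
attended = 474
utilization = 0.39          # 39 percent of total capacity
classes_offered = 51        # classes held, December 2004 through June 2005

implied_capacity = attended / utilization
seats_per_class = implied_capacity / classes_offered

print(round(implied_capacity))   # roughly 1,215 total seats
print(round(seats_per_class))    # roughly 24 seats per class
```

The implied class size of about 24 seats is consistent with the small classes and 1-to-8 instructor-to-student ratio that pilot participants described.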
TSA announced the availability of the December 2004 training courses 6 days before the training was to take place. This short notice significantly restricted the ability of flight and cabin crew members to participate in the training because they generally submit their scheduling bids 30 days prior to their work schedule. TSA attributed the short notice to the length of time it took to reallocate funds from other TSA programs to enable implementation of the advanced voluntary self-defense training program. TSA provided crew members with more than 30 days’ notice for the remaining sessions. However, TSA training officials stated that participation continued to remain low in January 2005 due to crew members calling in sick, crew members being called for flight duty at the last moment, and inclement weather. TSA training officials stated that although they projected that crew member participation would increase over time, crew member participation decreased in June 2005 due to crew members having a difficult time obtaining 3 consecutive days of leave to attend training. TSA training officials also stated that based on their experience with the Federal Flight Deck Officer training program, crew members’ ability to obtain leave for the purpose of attending training may be associated with seasonal variances, with low participation usually occurring during the spring and summer months. Stakeholders, including air carriers and crew member labor organizations, attributed the low participation to crew members having to attend the training on their own time and pay the cost of travel, lodging, and meals. TSA training officials stated that they were continuing to gather information from flight and cabin crew members through the training contractor in an effort to identify the causes for the low participation and, ultimately, to try to address these causes. 
Stakeholders, including individuals that TSA identified as subject matter experts, and our own analysis identified concerns with the design and delivery of the advanced voluntary crew member self-defense training. These concerns include the training’s voluntary nature, the setting’s lack of realism, the training’s lack of recurrence, and the instructor’s lack of knowledge of crew members’ actual work environment. These same concerns were identified by stakeholders in response to the prototype self-defense training. As previously stated, our prior human capital work has found that in implementing a training program, an agency should ensure that implementation involves effective and efficient delivery of training—that is, the training should be conducted in a setting that approximates the participants’ working conditions and taught by individuals who are knowledgeable about the subject matter and work environment. In the case of the advanced voluntary self-defense training program, 13 of the 33 stakeholders expressed concerns about the voluntary nature of the training and stated that the training should be mandatory. Six of these 13 stakeholders stated that the program’s voluntary nature is inconsistent with TSA’s revised security training guidance that seeks to establish a common strategy that would enable individuals involved in an incident onboard an aircraft to know what others involved will be thinking and doing. These same stakeholders stated that because the training is not mandatory, if some crew members have had the self-defense training while others have not, a breakdown in communication could occur. TSA training officials stated that because the security training standards require all crew members to receive training on how to communicate and coordinate during a disturbance, they are not concerned about the voluntary nature of the self-defense training program. 
Additionally, 14 of the 33 stakeholders expressed concerns about the lack of a realistic training setting during the delivery of advanced voluntary crew member self-defense training. The self-defense techniques are taught in an open-space setting, unlike the narrow aisles crew members have to work within on an actual aircraft. In the two training sessions we observed in two different cities, participants had to be constantly reminded by the instructors of the restricted training space because participants repeatedly made defensive moves, such as spins and wide kicks, which could not be performed inside an aircraft cabin. Our prior human capital work has found that the training environment—the training facility and equipment—should be conducive to successful learning. TSA officials stated they examined the possibility of purchasing aircraft simulators for the self-defense training and found that it would cost TSA about $100,000 per simulator. Officials stated that they have advised the instructors to try to create a setting, using chairs, tape, or other means, to simulate the narrow aisles on an aircraft. We informed TSA that in the two training sessions we attended, instructors did not use these techniques. TSA officials stated that they would follow up with the instructors to ensure they use these techniques. Fifteen of the 33 stakeholders also expressed concerns about the lack of recurrent self-defense training given that self-defense skills are difficult to sustain if not consistently practiced over time. Stakeholders stated that a 3-day, one-time self-defense training course would not enable crew members to develop proficiency in self-defense. TSA officials responsible for developing the advanced voluntary training program stated that the self-defense training is not intended to make participants proficient in self-defense. 
Rather, the training is intended to enable crew members to develop a higher level of competency in self-defense tactics by extending their knowledge and skills in the use of self-defense techniques and improvised weapons. TSA officials also stated that the key benefit of the training is a change in the mindset of participants that enables a greater awareness of threat conditions onboard an aircraft and in their daily lives. Additionally, they stated that it is the responsibility of the individual participants to practice the various self-defense techniques they were taught. Furthermore, they stated that although TSA is not currently offering a recurrent training program, it has not ruled out the possibility of recurrent training in the future. Eleven of the 33 stakeholders also expressed concerns that the self-defense training could give participants a false sense of security. For example, two stakeholders stated that this false sense of security stems from participants who take the course once and expect to be skilled and proficient in the self-defense techniques, without realizing that they may not be capable of following through when an incident occurs. Finally, 6 of the 33 stakeholders, including subject matter experts and crew member labor organizations, expressed concerns about instructors’ lack of knowledge of crew members’ actual work environment. While some stakeholders commended the instructors for their technical knowledge of this environment, others expressed concerns that some instructors lacked technical knowledge and expertise of the aviation industry. For example, a training participant we interviewed stated that the instructor did not understand how safety devices onboard an aircraft operate. The instructor suggested inflating an emergency raft while in flight to protect the flight deck. 
However, according to the training participant, inflating an emergency raft in flight could injure or kill passengers and crew members. Our prior human capital work found that using instructors who are knowledgeable about the subject matter and experienced in aviation industry issues can help provide assurance that they will effectively transfer these skills and knowledge to others. TSA officials stated that they advised the training contractor to hire instructors with law enforcement, martial arts/self-defense, and aviation backgrounds. Additionally, TSA provided the instructors with training on aviation terminology so that instructors could better communicate with the students throughout the course. TSA officials stated that they were aware of the stakeholder concerns regarding the self-defense training course. The officials stated that their ability to address these concerns is limited by funding constraints and competing priorities. TSA officials further stated that they will continue to work with the contractor that is delivering the training to obtain any information that would be beneficial to the design and implementation of the training program. TSA has not yet developed performance measures for the advanced voluntary crew member self-defense training program or established a time frame for evaluating the program’s overall effectiveness. Our prior human capital work on best practices in training has found that, generally, high-performing organizations evaluate the effectiveness of their training programs and use the results to target performance improvements. In February 2005, TSA began conducting end-of-course evaluations—participant feedback—of the training and is planning to assess these evaluations to ensure the training is consistently achieving results over time. Additionally, TSA will use the results to modify the training, if appropriate. 
Although these evaluations should enable TSA to assess participants’ views on the training facilities, materials, and instructors, they will not enable TSA to determine whether the training increased the participants’ knowledge and skills. TSA officials stated that they recognize the importance of developing performance measures and evaluating the effectiveness of the program to ensure that it is consistently achieving its goals and to target performance improvements. Although TSA plans to undertake these efforts, it has not established time frames for doing so. TSA officials stated that the numerous internal process improvements currently under way in TSA that compete for time and resources will affect how soon the agency can establish performance measures and conduct an evaluation of the training program. Without performance measures and an evaluation of the program’s overall effectiveness, TSA will not have meaningful information with which to determine whether the training program is actually enabling crew members to develop a higher level of competency in self-defense tactics—the intended goal of the training program. It has been less than 4 years since TSA assumed responsibility for aviation security. During this period, TSA implemented numerous initiatives to strengthen the various layers of security in commercial aviation. These efforts have largely focused on passenger and checked-baggage screening—among the first lines of defense in preventing terrorist attacks on commercial aircraft. TSA has recently taken steps to ensure that flight and cabin crew members—the last line of defense—are prepared to handle potential threat conditions onboard commercial aircraft. The revised guidance and standards TSA developed for air carriers to follow in developing and delivering their flight and cabin crew member security training are a positive step forward in strengthening security onboard commercial aircraft. 
However, guidance and standards alone do not provide assurance that the training delivered by air carriers is achieving TSA’s intended results. TSA views its role in flight and cabin crew member security training as regulatory and maintains that air carriers are responsible for measuring the success of their individual training programs. We agree that air carriers have responsibility for assessing the effectiveness of their training programs. However, we believe that overall responsibility for ensuring that flight and cabin crew members are prepared to respond to terrorist threats must be shared between the air carriers and TSA. In supporting this partnership, TSA should establish strategic goals for the flight and cabin crew security training program so that air carriers can develop their security programs, and measure the effectiveness of these programs, based on desired results, or goals, clearly defined by TSA. Without strategic goals to inform air carriers of what is expected from their training programs, and in the absence of guidance and standards to help ensure that air carriers establish consistent performance goals and measures, it will be difficult for TSA and the air carriers to gauge the success of training programs over time and to determine how to direct improvement efforts most effectively. Additionally, while we are encouraged by the recent steps TSA has taken to improve its monitoring and review of air carriers’ security training programs, without enhanced controls, such as written procedures for TSA staff to follow in conducting and documenting their reviews, TSA lacks reasonable assurance that its monitoring and review efforts will be conducted in a consistent and complete manner. Furthermore, a key source of information on the effectiveness of air carriers’ security training is participant feedback on the training. 
TSA’s recent requirement that air carriers obtain written feedback from flight and cabin crew members at the end of security training is a step in the right direction. However, without a process in place for considering this information during its oversight efforts, TSA is not effectively utilizing available information that could assist it in prioritizing and focusing its monitoring and review activities. Through developing and implementing the advanced voluntary self-defense training program, TSA took another step forward in its efforts to prepare flight and cabin crew members to handle potential threat conditions onboard commercial aircraft. However, TSA has not yet established performance measures or a time frame for evaluating the effectiveness of the training program, including the training design and delivery. Congress enacted the Government Performance and Results Act of 1993 to focus the federal government on achieving results and providing objective, results-oriented information to improve congressional decision making. Without performance measures or a method for evaluating the effectiveness of the training, TSA may not have information with which to systematically assess the program’s strengths, weaknesses, and performance. Performance measures and an evaluation of the program’s effectiveness can assist TSA in focusing its improvement efforts and provide Congress with information to assess the impact of an advanced voluntary self-defense training program. 
To help provide TSA management with reasonable assurance that its security training guidance and standards for flight and cabin crew members are preparing crew members for potential threat conditions, and to enable TSA and air carriers to assess the accomplishments of the security training and target program improvements, we recommend that the Secretary of the Department of Homeland Security direct the Assistant Secretary, Transportation Security Administration, to take the following three actions: establish strategic goals for the flight and cabin crew member security training program, in collaboration with air carriers, and communicate these goals to air carriers to explain the results that are expected from the training; develop guidance and standards for air carriers to use in establishing performance goals and measures for their individual flight and cabin crew member security training programs to help ensure consistency in the development of goals and measures; and review air carriers’ goals and measures as part of its monitoring efforts to help ensure that they are linked to strategic goals established by TSA and to assess whether the training programs are achieving their intended results. To strengthen TSA’s internal controls and help ensure that air carriers are complying with TSA’s guidance and standards, we also recommend that the Assistant Secretary, Transportation Security Administration, establish a time frame for finalizing written procedures for monitoring and reviewing air carriers’ flight and cabin crew security training. 
These procedures should address the process for completing flight and cabin crew member curriculum review forms, determining which standards apply to individual air carriers and whether or not to approve an air carrier’s training curriculum, conducting and documenting observations of air carriers’ classroom delivery of security training, reviewing air carriers’ security training goals and measures, and considering security-related complaints from flight and cabin crew members. As part of its efforts to develop written procedures, TSA should examine ways to incorporate participant feedback into its monitoring and review efforts. In addition, to help ensure that the advanced voluntary crew member self-defense training is achieving its intended results, we recommend that the Assistant Secretary, Transportation Security Administration, establish performance measures for the advanced voluntary crew member self-defense training program and a time frame for evaluating the effectiveness of the training, including the effectiveness of the training design and delivery. We provided a draft of this report to DHS for review and comment. On August 29, 2005, we received written comments on the draft report, which are reproduced in full in appendix III. DHS generally concurred with the findings and recommendations in the report, and agreed that efforts to implement our recommendations are critical to a successful flight and cabin crew member security training program. 
With regard to our recommendations that TSA establish strategic goals for the flight and cabin crew member security training program and develop guidance and standards for air carriers to use in establishing performance goals and measures for their individual flight and cabin crew member security training programs, DHS stated that TSA has begun efforts to establish strategic goals for the program and that air carriers would benefit from additional guidance—that is, guidance in addition to the flight and cabin crew security training standards—to use in establishing performance goals and measures for their individual flight and cabin crew security training programs. While TSA has established standards for air carriers to use in developing their flight and cabin crew security training, these standards do not include strategic goals for the training, nor do they provide guidance for establishing performance goals and measures. In addition, at the time of our review, TSA had not begun developing strategic goals for flight and cabin crew security training. Therefore, we cannot assess the extent to which the goals TSA is currently developing satisfy our recommendation. With respect to our recommendation that TSA establish a time frame for finalizing written procedures for monitoring and reviewing air carriers’ flight and cabin crew security training, DHS stated that TSA is in the process of developing a monitoring plan, to the extent that resources permit, and a handbook for reviewing air carriers’ flight and cabin crew member security training programs. DHS further stated that the handbook is currently under development and will be completed and ready for implementation in fiscal year 2006.
Finally, regarding our recommendation that TSA establish performance measures for the advanced voluntary crew member self-defense training program and a time frame for evaluating the effectiveness of the training, DHS stated that TSA is working with OMB to establish performance measures for use in OMB’s Performance Assessment Rating Tool for flight security training and will finalize these measures using fiscal year 2005 data as the baseline. According to DHS, these measures will provide TSA with information that can be used in evaluating the effectiveness of the advanced voluntary crew member self-defense training. DHS also stated that TSA has begun to reach out to stakeholders to obtain feedback on this training. TSA’s successful completion of these ongoing and planned activities should address the concerns we raised in this report. We also provided relevant sections of this report to FAA, FBI, and DOD for their review, and incorporated their technical comments into the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 2 days from the date of this report. At that time, we will send copies of this report to the Secretary of the Department of Homeland Security and the Administrator of the Transportation Security Administration and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or berrickc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
To determine the progress TSA has made in developing and monitoring flight and cabin crew security training, we examined TSA’s efforts to develop guidance and standards for air carriers’ flight and cabin crew security training, monitor air carriers’ compliance with the guidance and standards, and develop and deliver advanced voluntary self-defense training for crew members. Specifically, this report addresses the following questions: (1) What actions has TSA taken to develop guidance and standards for flight and cabin crew security training and to measure the effectiveness of the training? (2) How does TSA ensure domestic air carriers comply with required training guidance and standards? (3) What efforts has TSA taken to develop, implement, and measure the effectiveness of advanced voluntary self-defense training for flight and cabin crew members? To determine the actions TSA has taken to develop guidance and standards for flight and cabin crew security training and to measure the effectiveness of the training, as well as how TSA ensures domestic air carriers comply with required training guidance and standards, we obtained and analyzed relevant legislation, guidance and standards developed by TSA and FAA, and TSA records documenting its reviews of air carriers’ security training programs. We reviewed the security training guidance and standards to determine whether they contained the statutory requirements for flight and cabin crew security training of 49 U.S.C. § 44918, as established by the Aviation and Transportation Security Act, and as amended by the Homeland Security Act of 2002 and Vision 100—Century of Aviation Reauthorization Act. We also interviewed TSA training, policy, and inspections officials to identify their reasons for revising the guidance and standards and the process they used to revise these documents and ensure air carriers’ compliance with the guidance and standards.
Additionally, we compared TSA’s process for monitoring and reviewing air carrier compliance with flight and cabin crew security training guidance and standards to standards for internal control in the federal government. To assess stakeholder involvement in the development of the guidance and standards and identify any stakeholder concerns, we interviewed officials from the FAA, FBI, FAMS, DOD, crew member labor organizations, and associations representing air carriers. At eight domestic air carriers we visited, we interviewed air carrier officials to obtain their views on the security training guidance and standards and on TSA’s efforts to ensure air carriers’ compliance with the guidance and standards, and we observed flight and cabin crew initial or recurrent (refresher) security training. We selected these domestic air carriers based on whether they were currently offering initial and/or recurrent security training and on the size of the air carrier, in an effort to include a mixture of domestic air carriers of varying sizes. The size of an air carrier is based on its annual operating revenues and number of revenue passenger boardings. Finally, we conducted phone interviews with representatives of 11 additional domestic air carriers—which we selected using the same criteria we used to select the 8 air carriers we visited—to obtain their views on the flight and cabin crew member guidance and standards and TSA’s monitoring of air carriers’ compliance with these standards. Because we selected a nonprobability sample of domestic air carriers, the information we obtained from these interviews and visits cannot be generalized to all domestic air carriers.
To determine the efforts TSA has taken to develop, implement, and measure the effectiveness of advanced voluntary self-defense training for flight and cabin crew members, we obtained and analyzed relevant legislation, TSA’s course training manual for the self-defense training, and feedback provided by flight and cabin crew members who participated in the prototype training. We also interviewed TSA training officials responsible for designing and implementing the voluntary advanced crew member self-defense training program. Additionally, we observed the final training in two cities. Furthermore, we interviewed relevant stakeholders, including representatives of air carriers; labor organizations representing flight attendants and pilots as well as individual flight attendants and pilots; aviation industry associations representing air carriers; individuals identified as subject matter experts or self-defense training experts; and federal officials at the FBI, FAMS, and FAA to determine whether TSA consulted them when developing the crew member self-defense training. We identified subject matter experts or self-defense training experts based on recommendations from TSA and crew member labor organizations. We also interviewed representatives of the 19 domestic air carriers mentioned above to obtain their views on the design and delivery of the advanced voluntary crew member self-defense training. We assessed the extent to which TSA incorporated stakeholder input into the training program, and the basis for TSA’s decisions on which stakeholder input to incorporate into the training. Finally, we assessed TSA’s efforts to develop the training programs relative to our guidance for assessing training and development efforts in the federal government. We conducted our work from June 2004 through August 2005 in accordance with generally accepted government auditing standards. 
In addition to the contact named above, Katherine Davis, Kimberly Gianopoulos, Sally Gilley, Stan Kostyla, Tom Lombardi, Gary Malavenda, Maria Strudwick, Carol Willett, and Su Jin Yon made key contributions to this report.

Training flight and cabin crew members to handle potential threats against domestic aircraft is an important element in securing our nation's aviation system. Ensuring that crew members are prepared to handle these threats is a responsibility shared by the private sector--air carriers--and the federal government, primarily the Transportation Security Administration (TSA). This report addresses (1) actions TSA has taken to develop guidance and standards for flight and cabin crew member security training and to measure the effectiveness of the training, (2) how TSA ensures domestic air carriers comply with the training guidance and standards, and (3) efforts TSA has taken to develop and assess the effectiveness of its voluntary self-defense training program. Since the terrorist attacks of September 11, 2001, TSA enhanced guidance and standards for flight and cabin crew member security training with input from stakeholders. Specifically, TSA revised the guidance and standards to include additional training elements required by law and to improve the organization and clarity of the guidance and standards. Some stakeholders we interviewed and our own review generally found that the revised guidance and standards improved upon previous versions in terms of organization and clarity of the information provided. However, some stakeholders identified concerns about, for example, the reasonableness of applying parts of the guidance and standards to both flight and cabin crew members and the difficulty of implementing some of the standards without additional information or training tools from TSA.
Additionally, TSA has not established strategic goals and performance measures for assessing the effectiveness of the training because it considers its role in the training program to be regulatory. In this regard, TSA views the individual air carriers as responsible for establishing performance goals and measures for their training programs, but has not required them to do so. Without goals and measures, TSA and air carriers will be limited in their ability to fully assess accomplishments and target associated improvements. TSA recently took steps to strengthen its efforts to oversee air carriers' flight and cabin crew security training to ensure they are complying with the required guidance and standards. For example, in January 2005, TSA added staff with expertise in designing training programs to review air carriers' crew member security training curriculums and developed a standard form for staff to use to conduct their reviews. However, TSA lacks adequate controls for monitoring and reviewing air carriers' crew member security training, including written procedures for conducting and documenting these reviews. TSA plans to develop written procedures, but has not established a time frame for completing this effort. TSA has developed an advanced voluntary self-defense training program with input from stakeholders and implemented the program in December 2004, as required by law. However, stakeholders and our own analysis identified concerns about the training design and delivery, such as the lack of recurrent training and the lack of a realistic training environment. Also, TSA has not yet established performance measures for the program or established a time frame for evaluating the program's effectiveness.
TSA’s airport passenger checkpoint screening system comprises three elements: the (1) personnel, or screeners, responsible for operating the checkpoint, including the screening of airline passengers and their carry-on items; (2) standard operating procedures that screeners are to follow to conduct screening; and (3) technology used during the screening process. Collectively, these elements determine the effectiveness and efficiency of passenger checkpoint screening. In strengthening one or more elements of its checkpoint screening system, TSA aims to balance its security goals with the need to efficiently process passengers. We previously reported that TSA had made progress in enhancing its passenger checkpoint screening system by strengthening screener training, measuring the performance of screeners and the screening system, and modifying screening procedures to address terrorist threats and efficiency concerns. We made recommendations to DHS designed to strengthen TSA’s efforts to train screeners, modify screening standard operating procedures, and measure the performance of the checkpoint screening system. DHS generally agreed with our recommendations and TSA has taken steps to implement them. Passenger screening is a process by which screeners inspect individuals and their property to deter and prevent an act of violence or air piracy, such as the carriage of any unauthorized explosive, incendiary, weapon, or other prohibited item onboard an aircraft or into a sterile area. Screeners inspect individuals for prohibited items at designated screening locations. TSA developed standard operating procedures and the process for screening passengers at airport checkpoints. Figure 1 illustrates the screening functions at a typical passenger checkpoint.
Primary screening is conducted on all airline passengers prior to entering the sterile area of an airport and involves passengers walking through a metal detector and carry-on items being subjected to X-ray screening. Passengers who alarm the walk-through metal detector or are designated as selectees—that is, passengers selected for additional screening—as well as passengers whose carry-on items the X-ray machine has identified as potentially containing a prohibited item, must then undergo secondary screening. Secondary screening involves additional means for screening passengers, such as by hand-wand, physical pat-down or, at certain airport locations, an ETP, which is used to detect traces of explosives on passengers by using puffs of air to dislodge particles from their body and clothing into an analyzer. Selectees’ carry-on items are also physically searched or screened for explosives traces by Explosives Trace Detection (ETD) machines. In addition, DHS S&T and TSA have deployed and are pursuing additional technologies to provide improved imaging or anomaly detection capacities to better identify explosives and other threat objects. DHS and TSA share responsibility for the screening of passengers and the research, development, and deployment of passenger checkpoint screening technologies. Enacted in November 2001, the Aviation and Transportation Security Act (ATSA) created TSA and charged it with the responsibility of securing civil aviation, which includes the screening of all passengers and their baggage. ATSA also authorized funding to accelerate the RDT&E of new checkpoint screening technologies. The Homeland Security Act of 2002, enacted in November 2002, established DHS, transferred TSA from the Department of Transportation to DHS and, within DHS, established S&T to have primary responsibility for DHS’s RDT&E activities, and for coordinating and integrating all these activities.
The Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act), enacted in December 2004, directed the Secretary of Homeland Security to give high priority to developing, testing, improving, and deploying checkpoint screening equipment that detects nonmetallic, chemical, biological, and radiological weapons and explosives, in all forms, on individuals and in their personal property. Until fiscal year 2006, TSA had primary responsibility for investing in the research and development of new checkpoint screening technologies, and was responsible for developmental and operational test and evaluation of new technologies. However, during fiscal year 2006, research and development functions within DHS were consolidated, for the most part, within S&T. After this consolidation, S&T assumed primary responsibility for funding the research, development, and developmental test and evaluation of airport checkpoint screening technologies. S&T also assumed responsibility from TSA for the Transportation Security Laboratory (TSL), which, among other things, tests and evaluates technologies under development. TSA, through the PSP that was transferred from the Federal Aviation Administration (FAA) to TSA, continues to be responsible for identifying the requirements for new checkpoint technologies; operationally testing and evaluating technologies in airports; and procuring, deploying, and maintaining technologies. This transfer of responsibility from TSA to S&T did not limit TSA’s authority to acquire commercially available technologies for use at the checkpoint. S&T and TSA’s RDT&E, procurement, and deployment efforts are made up of seven components: basic research, applied research, advanced development, operational testing, procurement, operational integration, and deployment. S&T is responsible for conducting basic and applied research, and advanced development, including developmental test and evaluation.
TSA is responsible for conducting operational test and evaluation, operational integration, procurement, and deployment of new technologies, including checkpoint screening technologies. These seven components are described below.

- Basic research includes scientific efforts and experimentation directed toward increasing knowledge and understanding in the fields of physical, engineering, environmental, social, and life sciences related to long-term national needs.
- Applied research includes efforts directed toward solving specific problems with a view toward developing and evaluating the feasibility of proposed solutions.
- Advanced development includes efforts directed toward projects that have moved into the development of hardware and software for field experiments and tests, such as acceptance testing.
- Operational test and evaluation verifies that new systems are operationally effective, supportable, and suitable before deployment.
- Operational integration is the process employed to enable successful transition of viable technologies and systems to the field environment.
- Procurement includes the efforts to obtain a product or service.
- Deployment is a series of actions following the determination that the product meets its requirements and is accepted by the program manager and integrated product team; designated locations are configured for product integration into the screening operating system and the installed product passes site acceptance tests; and logistics support is in place and all users are trained to use the product.

Over $795 million has been invested by DHS and TSA during fiscal years 2002 through 2008 for the RDT&E, procurement, and deployment of checkpoint screening technologies. During this time, over $91 million was invested in the RDT&E of checkpoint technologies and about $704 million was invested in the procurement and deployment of these technologies.
From fiscal years 2002 through 2005, TSA was responsible for the RDT&E of checkpoint technologies; however, TSA officials could not identify the amount of funding the agency invested for these purposes during those years. After fiscal year 2005, TSA invested $14.5 million for test and evaluation of checkpoint technologies, but did not fund the research and development of these technologies because responsibility in general for research and development funding was transferred from TSA to S&T beginning in fiscal year 2006. Therefore, during fiscal years 2006 through 2008, S&T invested $77.0 million in the RDT&E of checkpoint screening technologies. All of the approximately $704 million for the procurement and deployment of checkpoint screening technologies from fiscal years 2002 through 2008 was invested by TSA because the agency has been responsible for procurement and deployment of these technologies since it was created. Risk management is a tool that policy makers can use to help ensure that strategies to develop protective programs and allocate resources target the highest priority security needs. This information helps officials determine which security programs are most important to develop and fund, given that it is not possible to protect the country against all threats because of limited resources. Law and related policy, including the Intelligence Reform Act, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act), and Homeland Security Presidential Directive 7, provide that federal agencies with homeland security responsibilities are to apply risk-informed principles to prioritize security needs and allocate resources. Consistent with these provisions, DHS issued the National Strategy for Transportation Security in 2005 that, among other things, describes the policies that DHS is to apply when managing risks to the security of the U.S. transportation system. 
Further, in June 2006, DHS issued the NIPP, which provides a risk management framework to guide strategies to develop homeland security programs and allocate resources to them. According to the NIPP, its risk management framework consists of six phases that help to identify and assess risks and prioritize investments in programs, as illustrated in figure 2. The NIPP designated TSA as the primary federal agency responsible for coordinating critical infrastructure protection efforts within the transportation sector. A risk-informed strategy to develop and invest in critical infrastructure protection, according to the NIPP, begins with setting security goals. Setting security goals involves defining specific outcomes, conditions, end points, or performance targets that collectively constitute an effective protective posture. Once security goals are established, decisionmakers are to identify what assets or systems to protect and identify and assess the greatest risks to them, that is, the type of terrorist attack that is most likely to occur and that would result in the most severe consequences. Risk of a terrorist attack, according to the NIPP, is to be assessed by analyzing consequences of an attack; the threat—that is, the likelihood of an attack; and the extent to which an asset or a system, in this case the transportation system, is vulnerable to this type of attack. The potential consequence of any incident, including terrorist attacks and natural or manmade disasters, is the first factor to be considered in a risk assessment. In the context of the NIPP, consequence is measured as the range of loss or damage that can be expected in the event a terrorist attack succeeds. A consequence assessment looks at the expected worst case or reasonable worst case impact of a successful attack.
A threat assessment is the identification and evaluation of adverse events that can harm or damage an asset and takes into account certain factors, such as whether the intent and capability to carry out the attack exist. A vulnerability assessment identifies weaknesses or characteristics of an asset or system, such as its design and location, that make it susceptible to a terrorist attack and that may be exploited. This analysis should also take into consideration factors such as protective measures already in place that may reduce the risk of an attack and the system’s resiliency—that is, its ability to recover from an attack. Once the three components of risk—threat, vulnerability, and consequence—have been assessed for a given asset or system, they are used to provide an estimate of the expected loss considering the likelihood of an attack or other incident. According to the NIPP, calculating a numerical risk score using comparable, credible methodologies provides a systematic and comparable estimate of risk that can help inform national and sector-level risk management decisions. To be considered credible, the NIPP states that a methodology must have a sound basis; be complete; be based on assumptions and produce results that are defensible; and specifically address the three variables of the risk calculus: threat, vulnerability, and consequence. The methodology should also be comparable with other methodologies to support a comparative sector or national risk assessment. To be comparable, a methodology must be documented, transparent, reproducible, accurate, and provide clear and sufficient documentation of the analysis process and the products that result from its use. The next steps in the DHS risk management framework involve establishing priorities for program development based on risk assessments; implementing these protective programs; and measuring their effectiveness by developing and using performance measures.
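The three-variable risk calculus described above is often operationalized as a product of its components. The sketch below is one illustrative formulation only; the NIPP does not prescribe a single formula, and the function name, scales, and input values here are hypothetical.

```python
def risk_score(threat, vulnerability, consequence):
    """Illustrative risk score: the likelihood of an attack (threat),
    times the chance the attack succeeds against the asset
    (vulnerability), times the expected loss if it does (consequence).
    All inputs are analyst-supplied estimates, not measured values."""
    return threat * vulnerability * consequence

# Hypothetical estimates for a single asset: threat and vulnerability
# expressed as probabilities, consequence as loss in millions of dollars.
score = risk_score(threat=0.25, vulnerability=0.5, consequence=400.0)
print(score)  # 50.0 -> an expected loss of $50 million
```

Scoring each asset or system on the same scales is what makes the resulting numbers comparable across a sector, which is the NIPP's stated purpose for a numerical score.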
Identifying and assessing risks helps decisionmakers to identify those assets or systems that are exposed to the greatest risk of attack and, based on this information, prioritize the development and funding of protective programs that provide the greatest mitigation of risk given available resources. The NIPP notes that because resources are limited, risk analysis must be completed before sound priorities can be established. To determine which protective measures provide the greatest mitigation of risk for the resources that are available, the NIPP directs policy makers to evaluate how different options reduce or mitigate threat, vulnerability, or consequence of a terrorist attack. To do so, the NIPP states that cost estimates are combined with risk-mitigation estimates in a cost–benefit analysis to choose between the different options. The last step in the NIPP, measuring the effectiveness of security programs by developing and using performance measures, provides feedback to DHS on its efforts to attain its security goals. Performance metrics are to be developed and used to affirm that specific goals and objectives are being met or to articulate gaps in the national effort or supporting sector efforts. Performance measures enable the identification of corrective actions and provide decisionmakers with a feedback mechanism to help them make appropriate adjustments in their strategies for protecting critical infrastructure. While TSA completed a strategic plan for the PSP in August 2008 that identifies a strategy for researching, developing, and deploying checkpoint screening technologies, the plan and the strategy were not developed based upon all of the key risk management principles outlined in DHS’s NIPP. 
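The cost-benefit step the NIPP describes—combining cost estimates with risk-mitigation estimates to choose between options—can be illustrated with a simple comparison. The options, dollar figures, and risk-reduction estimates below are entirely hypothetical; the point is only the mechanics of ranking options by mitigation achieved per dollar spent.

```python
# Hypothetical protective options: cost in millions of dollars and
# estimated reduction in the numerical risk score each would achieve.
options = {
    "new detection technology": {"cost": 50.0, "risk_reduction": 20.0},
    "added screener staffing": {"cost": 10.0, "risk_reduction": 6.0},
    "procedural changes only": {"cost": 2.0, "risk_reduction": 0.8},
}

def mitigation_per_dollar(option):
    """Risk reduction bought per million dollars spent."""
    return option["risk_reduction"] / option["cost"]

# Favor the option with the greatest mitigation per dollar.
best = max(options, key=lambda name: mitigation_per_dollar(options[name]))
print(best)  # added screener staffing (0.6 vs. 0.4 for the others)
```

In practice a decisionmaker would also weigh budget caps and minimum-mitigation thresholds, but the ratio above captures the core trade-off the NIPP asks agencies to analyze.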
For instance, TSA has not conducted a complete risk assessment for the PSP, conducted a cost–benefit analysis to prioritize investments, or developed performance measures to assess the extent to which the risk of attack has been reduced or mitigated by investments in technologies. While the agency is currently reviewing a draft of the Aviation Domain Risk Assessment (ADRA), as of September 2009, the ADRA had not been finalized, and TSA officials could not provide an expected completion date. Therefore, we could not determine when TSA will complete it or to what extent it will be consistent with DHS’s risk management framework. TSA officials acknowledged the importance of a cost–benefit analysis and performance measures to guide technology investments, and stated that they intend to develop them, but could not identify when they would be completed. Until TSA completes these activities, the agency lacks assurances that the PSP strategy addresses the highest priority needs and mitigates the risk of an attack. Further, TSA lacks information to adjust its strategy, if needed. TSA completed a strategic plan in August 2008 that identifies a strategy and establishes goals and objectives for the PSP, and submitted the plan to congressional committees in September 2008. However, TSA officials stated that the NIPP was not used as guidance in developing the plan. Instead, the officials stated that the specific requirements for a strategic plan, as outlined in the Intelligence Reform Act and 9/11 Commission Act, were used as guidance to construct the plan. The strategic plan identifies three broad trends that have been observed in the types of threats that TSA faces. First, interest in catastrophic destruction of aircraft and facilities has increased, in contrast to hijacking and hostage-taking that characterized the majority of earlier attacks.
Second, the range of weapons encountered has expanded, many of which were not previously recognized as threats or detected by the technologies that were deployed. Third, terrorists have attacked “soft” airport targets, including airport lobbies, in other countries. To address these challenges, TSA’s strategic plan identifies that the agency’s strategy is to utilize intelligence; partner with law enforcement, industry partners, and the public; and implement security measures that are flexible, widely deployable, mobile, and layered to address the nation’s complex open transportation network. According to the plan, TSA is in the process of implementing and evaluating a fundamental shift in strategy for the security checkpoint that encompasses the critical elements of people, process, and technology. In addition, the plan states that implementing a new security approach called Checkpoint Evolution, which started in spring 2008, will bring the most significant changes that have occurred in passenger screening since the airport security checkpoint was first established in the 1970s. TSA’s strategic plan identifies that the key component of TSA’s strategy related to security checkpoints is to improve security effectiveness and resource utilization at the checkpoints. Also, the PSP manager stated that a goal of the PSP strategy is to achieve full operating capability by the dates discussed for each checkpoint screening technology listed in the strategic plan. To meet these goals, the PSP strategic plan identifies three strategic objectives: (1) improve explosive detection capability, (2) improve the behavior detection capability of Transportation Security Officers (TSO), and (3) extend the layers of security throughout the passenger journey.
The first objective, improving explosive detection capability, involves combining new technology with procedures that emphasize an element of unpredictability to improve explosive detection capability and prevent would-be attackers from knowing the TSA security process. The second objective, improving the behavior detection capability of TSOs, involves shaping the checkpoint environment to better support and enhance behavior detection capabilities by enabling TSOs to engage a larger number of passengers more frequently throughout the checkpoint queue using informal interviews and SPOT; improving the observation conditions for TSOs trained in SPOT by enhancing the contrast between passengers exhibiting signs of travel stress and those intending to do harm to other passengers, aircraft, or the airport; and providing communications tools for enhanced coordination between TSOs trained in SPOT. The third objective, extending the layers of security throughout the passenger journey, involves enabling additional layers of non-intrusive security beyond the checkpoint and into public spaces; increasing the interaction between TSOs and passengers to provide more opportunities to identify irregular behaviors far ahead of the potential threat reaching the checkpoint; and partnering with airlines, airports, and the private sector to reduce vulnerabilities in soft target areas. TSA had been directed on multiple occasions to provide strategic plans for explosives detection checkpoint technologies to congressional committees. 
The Intelligence Reform Act mandated that TSA provide a strategic plan that included, at a minimum, a description of the current efforts to detect explosives on individuals and in their personal property; operational applications of explosive detection equipment at airport checkpoints; quantities of equipment needed to implement the plan and a deployment schedule; funding needed to implement the plan; measures taken and anticipated to be taken to provide explosives detection screening for all passengers identified for additional screening; and recommended legislative actions, if any. The Intelligence Reform Act mandated that such a strategic plan be submitted to congressional committees during the second quarter of fiscal year 2005. According to TSA officials, a strategic plan was developed and delivered to congressional committees on August 9, 2005, in satisfaction of the statutory mandate. However, the 9/11 Commission Act, enacted August 3, 2007, reiterated a requirement for a strategic plan that TSA was mandated to submit in accordance with the Intelligence Reform Act. Specifically, the 9/11 Commission Act required that the Secretary of Homeland Security issue a strategic plan addressing its checkpoint technology program not later than 30 days after enactment of the 9/11 Commission Act (that is, by September 3, 2007) and required implementation of the plan to begin within 1 year of the act’s enactment. In response to the 9/11 Commission Act, TSA provided to Congress the Aviation Security Report: Development of a Passenger Checkpoint Strategic Plan, September 2007. 
Finally, Division E of the Consolidated Appropriations Act, 2008, enacted on December 26, 2007, required that the Secretary of Homeland Security submit a strategic plan for checkpoint technologies no later than 60 days after enactment of the Act (that is, by February 25, 2008), and further restricted the use of $10,000,000 appropriated to TSA for Transportation Security Support until the Secretary submitted the plan to the Committees on Appropriations of the Senate and House of Representatives. TSA officials told us that they interpreted the mandate for a strategic plan and the funding restriction in the 2008 Consolidated Appropriations Act to mean that congressional committees considered TSA’s September 2007 aviation security report incomplete and insufficient. Approximately 12 months after the 9/11 Commission Act mandated a strategic plan, TSA completed its revised strategic plan in August 2008 and delivered it to the committees in September 2008; TSA officials stated that this plan meets the mandate for a strategic plan in the 9/11 Commission Act, as well as the mandate for a strategic plan in the appropriations act. As previously discussed, the Intelligence Reform Act included requirements for a deployment schedule, and descriptions of the quantities of equipment and funding needed to implement the plan. However, our analysis of TSA’s August 2008 strategic plan indicates that the strategic plan could include more complete information about these requirements. 
For example, although TSA provided some deployment information for each emerging checkpoint technology listed in the strategic plan—such as the total quantity to be deployed, expected full operating capability date, and types or categories of airports where the equipment is to be deployed—it does not include a year-by-year schedule showing the number of units for each emerging technology that is expected to be deployed to each specific airport. Regarding the funding needed to implement the strategic plan, the plan includes a funding profile for each fiscal year from 2007 through 2009. However, a number of the emerging technologies are not expected to reach full operating capability until fiscal year 2014. TSA officials stated that they have derived notional (that is, unofficial) quantities to be deployed on an annual basis for each technology through its respective full operating capability date, but the officials stated that the funding profile in the strategic plan does not reflect the funding needed for these future quantities because the funding that will be appropriated for them after fiscal year 2009 is unknown. According to the officials, to implement the strategic plan in the years beyond fiscal year 2009, the agency intends to use a year-by-year approach whereby the quantities to be deployed in a particular year, and the funding needed for that year, would not be officially identified prior to the budget request for that year. TSA officials stated that they used risk to inform the August 2008 strategic plan and the PSP strategy identified in it. Although TSA may have considered risk to some degree, our analysis does not confirm that these efforts meet the risk-based framework outlined in the NIPP. Specifically, TSA has not conducted a risk assessment or cost–benefit analyses, or established quantifiable performance measures. 
As a result, TSA does not have assurance that its efforts are focused on the highest priority security needs, as discussed below. TSA has not conducted a risk assessment that includes an assessment of threat, vulnerability, and consequence, which would address passenger checkpoint screening; consequently, the PSP strategy has not been informed by such a risk assessment as required by the NIPP. Agency officials stated that they prepared and are currently reviewing a draft of a risk assessment of the aviation domain, known as the ADRA, which is expected to address checkpoint security and to be finalized by the end of calendar year 2009; however, its completion has been delayed multiple times since February 2008. Therefore, it is not clear when this assessment will be completed. The ADRA, when completed, is to provide a scenario-based risk assessment for the aviation system that may augment the information TSA uses to prioritize investments in security measures, including the PSP. However, officials could not provide details regarding the extent to which the ADRA would assess threat, vulnerability, and consequence related to the passenger checkpoint. In 2004, we recommended that the Secretary of Homeland Security and the Assistant Secretary for TSA complete risk assessments—including a consideration of threat, vulnerability, and consequence—for all modes of transportation, and use the results of these assessments to help select and prioritize research and development projects. TSA and DHS concurred with the recommendation, but have not completed these risk assessments. Because TSA has not issued the ADRA or provided details regarding what it will entail, and because it is uncertain when the ADRA will be completed, it is not clear whether the ADRA will provide the risk information needed to support the PSP and TSA’s checkpoint technology strategy. 
In the meantime, TSA has continued to invest in checkpoint technologies without the benefit of the risk assessment information outlined in the NIPP. Consequently, TSA increases the possibility that its investments will not address the highest priority security needs. Although TSA has not completed a risk assessment to guide its PSP, officials stated that they identify and assess risks associated with the passenger screening checkpoint by relying on threat information, vulnerability information from Threat Image Projection (TIP) scores, limitations of screening equipment identified during laboratory testing, covert tests, and expert judgment to guide its investment strategy in the PSP. Specifically, TSA’s Office of Intelligence produces civil aviation threat assessments on an annual basis, among other intelligence products. These assessments provide information on individuals who could carry out attacks, tactics they might use, and potential targets. TSA’s most recent aviation threat assessment, dated December 2008, identifies that terrorists worldwide continue to view civil aviation as a viable target for attack and as a weapon that can be used to inflict mass casualties and economic damage. It also concluded that improvised explosive devices (IED) and hijackings pose the most dangerous terrorist threat to commercial airliners in the United States. The assessment identifies that these devices may be concealed on persons, disguised as liquids, or hidden within everyday, familiar objects such as footwear, clothing, toys, and electronics. The threat assessment further identifies that terrorists have various techniques for concealing explosives on their persons. 
In addition to the annual civil aviation threat assessment, the Office of Intelligence prepares for TSA’s senior leadership team and other officials a (1) daily intelligence briefing, (2) tactical intelligence report that is produced one to four times per week, (3) weekly field intelligence summary, (4) weekly suspicious incident report, and, when necessary, (5) special events update, for example, during major political events. However, according to the NIPP, relying on threat information is not sufficient to identify and assess risks. Rather, threat information, which indicates whether a terrorist is capable of carrying out a particular attack and intends to do so, is to be analyzed alongside information on vulnerabilities—weaknesses in a system that would allow such an attack to occur—and on the consequences of the attack, that is, the results of a specific type of terrorist attack, according to the NIPP. TSA officials stated that, to guide the PSP, they also rely on programs in place that are designed to assess vulnerabilities at airport checkpoints. To identify vulnerabilities at airport checkpoints, TSA officials stated that TSA analyzes TIP scores, known limitations of screening equipment based on laboratory testing, and information from its covert testing program. TSA conducts national and local covert tests, whereby individuals attempt to enter the secure area of an airport through the passenger checkpoint with a prohibited item in their carry-on bags or hidden on their person. Officials stated they use these sources of information to identify needed changes to standard screening procedures, new technology requirements, and deployment strategies for the PSP. When a checkpoint vulnerability is identified, officials stated that TSA’s Office of Security Technology engages other TSA stakeholders through the PSP’s Integrated Project Team process to identify and develop necessary technology requirements, which may lead to new technology initiatives. 
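The NIPP’s point that threat information alone is insufficient can be illustrated with a simplified relative risk calculation in which risk is a function of threat, vulnerability, and consequence. The sketch below is purely illustrative: the scenario names, scores, and the multiplicative scoring rule are assumptions for demonstration, not TSA data or the ADRA’s actual methodology.

```python
# Simplified illustration of the NIPP risk framework: risk is a function of
# threat (T), vulnerability (V), and consequence (C), often sketched as
# R = T x V x C. All scenario names and scores below are hypothetical.

def risk_score(threat, vulnerability, consequence):
    """Combine the three NIPP factors into a single relative risk score."""
    return threat * vulnerability * consequence

# Hypothetical scenarios: T and V scored 0-1, C in relative consequence units.
scenarios = {
    "Liquid explosive in carry-on": {"T": 0.8, "V": 0.3, "C": 100},
    "IED concealed on person": {"T": 0.6, "V": 0.8, "C": 100},
    "Weapon in checked baggage": {"T": 0.5, "V": 0.2, "C": 60},
}

# Rank scenarios by combined risk, not by threat alone.
ranked = sorted(
    scenarios.items(),
    key=lambda item: risk_score(item[1]["T"], item[1]["V"], item[1]["C"]),
    reverse=True,
)

for name, s in ranked:
    print(f"{name}: risk = {risk_score(s['T'], s['V'], s['C']):.1f}")
```

Note that in this toy example the scenario with the highest threat score is not the highest risk once vulnerability is factored in, which is why the NIPP calls for analyzing all three factors together rather than relying on threat information alone.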
Officials credited this process with helping TSA identify needed changes to standard screening procedures and deployment strategies for new technologies. For example, according to a TSA official, one technology was developed as a result of tests conducted by GAO, which found that prohibited items and components of an IED might be more readily identified if TSA developed new screening technologies to screen for them. Although TSA has obtained information on vulnerabilities at the screening checkpoint, the agency has not assessed vulnerabilities (that is, weaknesses in the system that terrorists could exploit in order to carry out an attack) related to passenger screening technologies that are currently deployed. The NIPP requires a risk assessment to include assessments of threat, vulnerability, and consequence. TSA has not assessed whether there are tactics that terrorists could use, such as the placement of explosives or weapons on specific places on their bodies, to increase the likelihood that the screening equipment would fail to detect the hidden weapons or explosives. Although TIP scores measure how effectively screeners identify prohibited items, they do not indicate whether screening technologies currently deployed may be vulnerable to tactics used by terrorists to disguise prohibited items, such as explosives or weapons, thereby defeating the screening technologies and evading detection. Similarly, TSA’s covert testing programs do not systematically test passenger and baggage screening technologies nationwide to ensure that they identify the threat objects and materials the technologies are designed to detect, nor do the covert testing programs identify vulnerabilities related to these technologies. 
We reported in August 2008 that, while TSA’s local covert testing program attempts to identify test failures that may be caused by screening equipment not working properly or caused by screeners and the screening procedures they follow, the agency’s national testing program does not attribute a test failure to a specific cause. We recommended, among other things, that TSA require the documentation of specific causes of all national covert testing failures, including documenting failures related to equipment, in the covert testing database to help TSA better identify areas for improvement. TSA concurred with this recommendation and stated that the agency will expand the covert testing database to document test failures related to screening equipment. Moreover, TSA officials stated that it is difficult to attribute a test failure to equipment, because there is a possibility that the threat item used for the test was not designed properly and, therefore, would not have set off the equipment’s alarm. TSA officials also stated that it is difficult to identify a single cause for a test failure because covert testing failures can be caused by multiple factors. As a result, TSA lacks a method to systematically test and identify vulnerabilities in its passenger and baggage screening equipment in an operational airport setting. Consequently, TSA officials do not have complete information to identify the extent to which existing screening technologies mitigate vulnerabilities at the passenger checkpoints, so that they can incorporate this information into the agency’s security strategy, as required by DHS guidance. 
TSA’s ADRA, once completed, is to cover the entire aviation domain and include three parts—assessments of over 130 terrorist attack scenarios to determine whether they pose a threat to the aviation system; an assessment of known vulnerabilities or pathways within the aviation system through which these terrorist attacks could be carried out; and an assessment of consequences of these various types of terrorist attacks, such as death, injury, and property loss. TSA officials stated that, through the use of expert panels, the ADRA will evaluate these threat scenarios to assess the likelihood that terrorists might successfully carry out each type of attack on the aviation system, and the likelihood and consequences of these various scenarios will be prioritized to identify the most pressing risks that need to be addressed. In the case of the passenger screening checkpoint, according to officials, TSA will be examining all security measures that a terrorist must breach in order to carry out a specific type of attack, such as carrying an IED on board an aircraft and detonating it midflight. However, officials could not explain or provide documentation identifying the extent to which the ADRA will provide threat, vulnerability, and consequence assessments in support of the PSP. In addition, the completion date for the ADRA has been delayed multiple times. Because the ADRA has not been finalized and TSA has not described how the ADRA will address the passenger checkpoint, we could not determine the extent to which it will incorporate information on checkpoint vulnerabilities, including vulnerabilities associated with screening technologies and standard operating procedures. In addition to the ADRA, TSA and DHS S&T are developing other information that could inform their identification and assessments of risks to the aviation transportation system. 
Specifically, TSA and S&T are reviewing the scientific basis of their current detection standards for explosives detection technologies to screen passengers, carry-on items, and checked baggage. As part of this work, TSA and S&T are conducting studies to update their understanding of the effects that explosives may have on aircraft, such as the consequences of detonating explosives on board an in-flight aircraft. Senior TSA and DHS S&T officials stated that the two agencies decided to initiate this review because they could not fully identify or validate the scientific support requiring explosives detection technologies to identify increasingly smaller amounts of some explosives over time as required by TSA policy. Officials stated that they used the best available information to originally develop detection standards for explosives detection technologies. However, according to these officials, TSA’s understanding of how explosives affect aircraft has largely been based on data obtained from live-fire explosive tests on aircraft hulls at ground level. Officials further stated that due to the expense and complexity of live-fire tests, FAA, TSA, and DHS collectively have conducted only a limited number of tests on retired aircraft, which limited the amount of data available for analysis. As part of this ongoing review, TSA and S&T are simulating the complex dynamics of explosive blast effects on an in-flight aircraft by using a computer model based on advanced software developed by the national laboratories. TSA believes that the computer model will be able to accurately simulate hundreds of explosives tests by simulating the effects that explosives will have when placed in different locations within various aircraft models. Officials estimated this work will be completed in 3- to 4-month increments through 2008 and 2009. 
Officials further stated that the prototype version of the model was validated in the late summer of 2008, and that the model is currently being used. TSA and S&T officials stated that they expect the results of this work will provide a much fuller understanding of the explosive detection requirements and the threat posed by various amounts of different explosives, and will use this information to determine whether any modifications to existing detection standards should be made moving forward. TSA has not completed a cost–benefit analysis to prioritize and fund the PSP’s priorities for investing in checkpoint technologies, as required by the NIPP’s risk management framework. According to the NIPP, policy makers who are designing programs and formulating budgets are to evaluate how different options reduce or mitigate threat, vulnerability, or consequence of a terrorist attack through a cost–benefit analysis that combines cost estimates with risk-mitigation estimates. However, in addition to lacking information on risks to the screening checkpoint, TSA has not conducted a cost–benefit analysis of checkpoint technologies being researched and developed, procured, and deployed. Such a cost–benefit analysis is important because it would help decisionmakers determine which protective measures, for instance, investments in technologies or in other security programs, will provide the greatest mitigation of risk for the resources that are available. One reason that TSA may have difficulty developing a cost–benefit analysis for the PSP is that it has not developed life-cycle cost estimates of each screening technology the PSP is developing, procuring, or deploying. This information is important because it helps decisionmakers determine, given the cost of various technologies, which technology provides the greatest mitigation of risk for the resources that are available. 
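The kind of cost–benefit comparison the NIPP describes, combining cost estimates with risk-mitigation estimates, can be sketched in miniature. All names and figures below are hypothetical assumptions for illustration; an actual analysis would rest on validated life-cycle cost estimates and risk assessments of the sort discussed above.

```python
# Hypothetical sketch of the cost-benefit comparison the NIPP calls for:
# pairing a life-cycle cost estimate with an estimate of how much risk a
# countermeasure mitigates. All candidates and figures are illustrative.

candidates = [
    # (name, life-cycle cost in $ millions, estimated risk reduction
    #  in relative risk units)
    ("Technology A", 250.0, 40.0),
    ("Technology B", 90.0, 25.0),
    ("Procedural change C", 10.0, 5.0),
]

def mitigation_per_dollar(cost_millions, risk_reduction):
    """Risk units mitigated per million dollars of life-cycle cost."""
    return risk_reduction / cost_millions

# Rank options by cost-effectiveness rather than by raw effectiveness.
ranked = sorted(candidates,
                key=lambda c: mitigation_per_dollar(c[1], c[2]),
                reverse=True)

for name, cost, reduction in ranked:
    print(f"{name}: {mitigation_per_dollar(cost, reduction):.3f} "
          f"risk units mitigated per $M")
```

In this illustration the option with the largest absolute risk reduction (the hypothetical Technology A) is the least cost-effective per dollar, which is exactly the trade-off that cannot be seen without both cost and risk-mitigation estimates.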
TSA officials prepared a PSP life-cycle cost estimate in September 2005, but this estimate does not include cost estimates for all technologies currently being researched, developed, tested and evaluated, procured and/or deployed, such as the Advanced Technology Systems, a technology to screen carry-on items that TSA is currently procuring. TSA was subsequently instructed by DHS Joint Requirements Council to complete life-cycle cost estimates for the PSP; in December 2005, the council reviewed the PSP and approved it to proceed to the Investment Review Board for an annual review and potential approval of the PSP’s fiscal year 2006 procurement strategy. However, the council expressed concern about several issues that should be resolved prior to the Investment Review Board’s review, including the need for complete life-cycle cost estimates for the checkpoint screening technologies that were to be developed and procured. TSA officials acknowledged that completing life-cycle cost estimates is important but stated that, due to a lack of staff, they have not prepared a life-cycle cost estimate since the council recommended that such an estimate be developed. These officials further stated that TSA hired four full-time equivalent staff in fiscal year 2008, and two additional full-time equivalent staff are expected to be hired in the fall of 2008. The officials anticipate that these staff will help prepare life-cycle cost estimates. However, the officials did not provide a timeframe for the completion of the estimates. Although TSA officials identified the technologies they are procuring and deploying, TSA officials could not provide us with information on their priorities for the research and development of checkpoint screening technologies or the processes they followed to develop these priorities. According to S&T officials, TSA provided priorities for near-term applied research and development projects to the S&T Capstone Integrated Product Team (IPT) for Explosives Prevention. 
This IPT establishes priorities for research projects to be funded by S&T during the fiscal year. S&T officials stated that they rely on TSA and other members of the IPT to use a risk-based approach to identify and prioritize their agencies’ or offices’ individual research and development needs prior to submitting them for consideration to the IPT. However, TSA officials stated they did not submit priorities for research and development to S&T. Without cost–benefit or other analysis to compare the cost and effectiveness of various solutions, the agency cannot determine whether investments in the research and development of new checkpoint technologies or procedures most appropriately mitigate risks with the most cost-effective use of resources. In addition, without knowing the full cost of the technologies that the PSP is developing, procuring, or deploying, TSA could potentially invest in a technology in which the cost outweighs expected benefits. TSA’s strategy for the PSP does not have a mechanism—such as performance measures or other evaluation methods—to monitor, assess, or test the extent to which investments in new checkpoint technologies reduce or mitigate the risk of terrorist attacks. The NIPP requires that protective programs be designed to allow measurement, evaluation, and feedback based on risk mitigation so that agencies may re-evaluate risk after programs have been implemented and take corrective action if needed, such as modifying existing programs to counter new risks or implementing alternative programs. The NIPP identifies three types of performance measures—descriptive, process/output, and outcome measures—that can help gauge the effectiveness of protective programs. Although the NIPP requires that protective programs be designed to allow measurement, evaluation, and feedback based on risk mitigation, TSA has not identified quantifiable measures of progress which would allow the agency to assess the PSP’s overall effectiveness. 
TSA officials stated that they do not have overall performance measures but are currently developing performance goals and measures for the overall program. However, the officials could not provide a time frame for their completion. In September 2004, we recommended that TSA complete strategic plans for its research and development programs that contain measurable objectives. Without measures to monitor the degree to which TSA’s investments in the research, development, and deployment of new screening technologies reduce or mitigate terrorist threats, the agency is limited in its ability to assess the effectiveness of the PSP or the extent to which it complements other layers of security at the checkpoint. Since TSA’s creation in 2001, 10 new checkpoint screening technologies, including the ETP, have been in various phases of RDT&E, procurement, and deployment, but TSA halted deployment of the ETP due to performance problems and high installation costs. Of the 10 technologies, TSA has initiated deployments of 4, including the ETP and a Bottled Liquids Scanner, but TSA has not deployed any of the 4 technologies to airports nationwide. TSA also initiated procurements of two technologies, including the Whole Body Imager; however, deployment of these two technologies has not yet begun. Four checkpoint technologies, such as a shoe scanning device, are in research and development. In June 2006, 6 to 11 months after TSA began to deploy the ETPs to airports, the agency halted their deployment due to performance problems—the machines broke down more frequently than specified by the functional requirements and were more expensive to install and maintain in airports than expected. Because TSA did not follow its acquisition guidance, which recommends that technologies be tested and evaluated in an operational setting prior to procurement and deployment, the agency lacked assurance that the ETPs performed as required by the system’s requirements. 
Although TSA officials were aware that tests conducted on earlier ETP models during 2004 and 2005 suggested that they did not operate reliably in an airport environment, and that the ETP models subsequently deployed to airports had not been tested in an operational environment to prove their effectiveness, TSA deployed the ETPs to airports, beginning in July 2005 for the Smiths Detection ETP and in January 2006 for the General Electric ETP, without resolving these issues. TSA officials stated that they deployed the ETPs to respond quickly to the threat posed by a potential suicide bomber after suicide bombings had been carried out onboard Russian airliners in 2004. TSA officials stated that they plan to continue to use the 90 ETPs currently deployed to airports. Because the ETPs were deployed without resolving their performance problems and validating all of the functional requirements, the ETPs have not been demonstrated to increase security at the checkpoint. In the future, using validated technologies would enhance TSA’s efforts to improve checkpoint security. As a result of S&T and TSA investments in the RDT&E of checkpoint screening technologies since TSA’s creation in 2001, six new screening technologies are being procured and/or deployed, while four checkpoint screening technologies are currently in the research and development phase. Based on S&T and TSA RDT&E efforts, the agency has initiated deployments of four technologies—the ETP, Fido PaxPoint Bottled Liquids Scanner, Advanced Technology Systems, and Cast and Prosthesis Scanner—three of which originated as commercial off-the-shelf technologies, in some cases modified by TSA for use as checkpoint screening devices. However, TSA has not completed deployment of these four technologies to airports nationwide. 
TSA officials stated that they did not deploy additional checkpoint screening technologies because they were primarily focused on deploying explosives detection systems to screen checked baggage, as mandated by ATSA. TSA has also initiated procurements of two additional technologies—Automated Explosives Detection System for Carry-on Baggage and Whole Body Imager—but has not deployed either of them yet. Figure 3 describes the status of the six checkpoint screening technologies for which TSA has initiated procurement and/or deployment. According to TSA’s August 2008 strategic plan for checkpoint technologies, there are several other ongoing efforts in addition to the technologies discussed in figure 3. S&T and TSA are researching and developing a shoe scanning device that is to conduct automated weapons and explosive detection without requiring passengers to remove their footwear. TSA plans to award a contract in fiscal year 2010, with full operating capability in fiscal year 2015. TSA plans to deploy 1,300 units at all category X through category IV airports. TSA also has two ongoing efforts related to boarding pass and credential authentication, according to the agency’s strategic plan. Starting in 2007, TSA assumed responsibility from airline contractors for travel document checking, which is currently conducted manually. TSA plans to replace the manual system with an automated one. Specifically, the Boarding Pass Scanning System is expected to verify the authenticity of a boarding pass at the checkpoint and enable the use of paperless boarding passes by the airlines. In addition, the Credential Authentication Technology System is planned to be an automated system that authenticates identification presented by passengers and airport employees. According to TSA, the agency plans to eventually combine both of these authentication systems in a single travel document checking system. 
TSA plans to award a contract for these two systems in fiscal year 2009, with full operating capability expected in fiscal year 2014. TSA plans to deploy a total of 878 units to replace the existing document verification tools at all category X through category IV airports. Another ongoing effort identified in TSA’s strategic plan is the Next Generation ETD. This system is planned to replace legacy ETD systems and to be able to identify a larger range of explosives. Specifically, this system is expected to have enhanced explosive detection capability in terms of sensitivity and the ability to detect new threats, as well as other improvements over legacy systems, which are expected to produce lower life-cycle costs. TSA plans to deploy 1,500 units at all category X through category IV airports. TSA also has two additional efforts to assess possible technologies. One effort is called Standoff Detection, which is intended to display images to detect anomalies concealed under passengers’ clothing. TSA plans to conduct an operational utility evaluation of test article units during fiscal year 2009 to evaluate the technology’s feasibility within checkpoint screening operations. According to TSA, this technology would assist the agency in applying layered security prior to the checkpoint in soft target areas, such as airport lobbies, to improve early awareness of a potential explosive threat. If the technology proves effective in the checkpoint operation, TSA plans to award a contract in fiscal year 2010, with full operational capability expected by fiscal year 2014, and to deploy 351 units to every checkpoint at category X and category I airports. The other effort is called Explosives Characterization for Trace (Chemical-based) Detection. This effort includes the research and development of trace signatures, detection, and physical properties of explosives to improve the detection and performance of deployed explosives trace detection technologies. 
During 2004 and 2005, prior to deployment of the ETPs, TSA conducted a series of acceptance tests (that is, laboratory tests) of the General Electric and Smiths Detection ETPs that suggested they had not demonstrated reliable performance. Specifically, in 2004, TSA conducted acceptance tests on early models of the General Electric and Smiths Detection ETPs to determine whether the ETPs met key functional requirements. Subsequently, in 2004 a General Electric ETP model was field tested at five airports to determine how well the ETP performed in an operational environment. A Smiths Detection ETP model was also field tested at an airport in 2004. Based on initial test results, both vendors of the ETPs modified the machines, and TSA conducted further laboratory testing. The modified General Electric ETP was tested from December 2004 through February 2005. During the January 2005 to May 2005 time frame, both the General Electric and Smiths Detection ETP models were tested. Even though tests conducted during 2004 and 2005 of the General Electric and Smiths Detection ETPs suggested they had not demonstrated reliable performance, TSA deployed the Smiths Detection ETP and General Electric ETP to airports starting in July 2005 and January 2006, respectively, without resolving identified performance issues. Further, TSA did not test all 157 of the ETP’s functional requirements prior to procuring and deploying the General Electric and Smiths Detection ETP models. Instead, TSA tested the ETP models against a subset of the functional requirements. According to TSA’s System Development Life Cycle Guidance, testing of a system is to be conducted to prove that the developed system satisfies its requirements in the functional requirements document. TSA officials could not identify the specific requirements that were tested or the reason(s) that all of the requirements were not tested. 
A TSA official stated that TSA had intended to resolve problems regarding the ETPs’ performance after they had been deployed, but TSA officials could not explain how these problems were to be resolved. Officials further stated that they worked for over 1 year during 2006 and 2007 with the ETP vendors to correct reliability and maintenance issues after the ETPs were initially deployed, but could not resolve them. Furthermore, according to S&T officials, when TSA conducted limited field tests, the ETP manufacturers provided different configurations from those used during the laboratory tests. According to officials, once this was discovered, it took more than 6 months for the ETP manufacturers to recreate the configurations that had passed the laboratory tests. TSA officials stated that, during this 6-month period, the agency decided to award a sole source contract to General Electric to procure its ETP. Regarding the reliability of the ETPs, of the 101 ETPs (71 from General Electric and 30 from Smiths Detection) that were originally deployed to 36 airports, the General Electric ETP did not meet the system requirement for operational availability due to frequent breakdowns. Both vendors’ ETPs were also more expensive to maintain than expected, according to the TSA Chief Technology Officer serving during this period. The functional requirements document requires the ETP to be operationally available 98.38 percent of the time. However, the General Electric ETPs were not always able to meet this requirement. TSA officials could not provide information on the operational availability of the Smiths Detection ETPs. For the General Electric ETPs, from January through May 2006, they were operationally available an average of 98.05 percent of the time, although the ETPs met the operational availability requirement for 2 months during that period. 
Furthermore, TSA’s operational requirements specify that the ETP should function for a minimum of 1,460 hours between critical failures. A critical failure means that an ETP fails to operate and must be repaired as soon as possible. However, the TSA Chief Technology Officer at the time stated that the ETPs operated at a much lower average number of hours before a critical failure occurred because, for example, the dirt and humidity of some airport environments adversely affected the equipment. Specifically, from January 2006 through May 2006, the General Electric ETPs operated for an average of 559 hours before a critical failure, which means that these ETPs operated on average 38 percent of the time that they were required to operate before a critical failure occurred. TSA officials could not provide information on the mean time between critical failures for the Smiths Detection ETPs. TSA officials stated that they tested the ETPs in several airports for several months prior to deployment, but data from these tests did not identify a problem with mean time between critical failures. One reason for this, a TSA official stated, was that not enough data were collected during the field tests. As usage of the ETPs increased, officials stated that they discovered the ETP was not meeting operational availability requirements. The ETPs also required replacement filters and other consumables more often than expected, according to officials, which drove up maintenance costs. According to TSA officials, because of a variance in operational availability hours among the deployed ETPs, maintenance problems, and the high cost of ETP installation at airports, in June 2006, the agency halted the deployment of the ETP to additional airports and stopped the planned purchase of additional ETPs. TSA officials plan to continue to use the 90 ETPs currently deployed to airports. 
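The shortfalls described above are simple ratio calculations and can be checked directly. A minimal sketch follows; the variable names are ours, and the figures are those reported for the General Electric ETPs from January through May 2006:

```python
# Reliability figures reported for the General Electric ETPs, January-May 2006.
required_availability = 0.9838   # requirement: operationally available 98.38% of the time
observed_availability = 0.9805   # observed average availability
required_mtbf_hours = 1460       # required mean time between critical failures
observed_mtbf_hours = 559        # observed mean time between critical failures

# Did the fleet meet the availability requirement on average?
meets_availability = observed_availability >= required_availability

# Observed MTBF as a share of the requirement (the report's "38 percent").
mtbf_ratio = observed_mtbf_hours / required_mtbf_hours

print(f"Meets availability requirement: {meets_availability}")  # False
print(f"MTBF achieved vs. required: {mtbf_ratio:.0%}")          # 38%
```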
However, without validating that the ETPs meet their functional requirements, TSA officials do not have assurance that it is worthwhile to continue to use the ETPs in light of the cost to maintain and operate them. In addition, TSA officials are considering what to do with the ETPs that were procured and are currently in storage. As of April 2009, 116 ETPs were in storage. TSA did not follow the Acquisition Management System (AMS) guidance or a knowledge-based acquisition approach before procuring the ETPs, which contributed to the ETPs not performing as required after they were deployed to airports. Specifically, AMS guidance provides that testing should be conducted in an operational environment to validate that the system meets all functional requirements before deployment. In addition, our reviews have shown that leading commercial firms follow a knowledge-based approach to major acquisitions and do not proceed with large investments unless the product’s design demonstrates its ability to meet functional requirements and be stable. The developer must show that the product can be manufactured within cost, schedule, and quality targets and is reliable before production begins and the system is used in day-to-day operations. As discussed earlier in this report, TSA officials told us that they deployed the ETP despite performance problems because officials wanted to quickly respond to emergent threats. However, TSA did not provide written documentation to us that described the process used at the time to make the decision to deploy the ETP or the process that is currently used to make deployment decisions. TSA has relied on technologies in day-to-day airport operations that have not been demonstrated to meet their functional requirements in an operational environment. For example, TSA has substituted existing screening procedures with screening by the Whole Body Imager even though its performance has not yet been validated by testing in an operational environment. 
In the future, using validated technologies would enhance TSA’s efforts to improve checkpoint security. Furthermore, without retaining existing screening procedures until the effectiveness of future technologies has been validated, TSA officials cannot be sure that checkpoint security will be improved. DHS S&T and TSA coordinated and collaborated with each other and key stakeholders on their research, development, and deployment activities for airport checkpoint screening technologies, and DHS is taking actions to address challenges and strengthen these efforts. Because S&T and TSA share responsibilities related to the RDT&E, procurement, and deployment of checkpoint screening technologies, the two organizations must coordinate with each other and external stakeholders, such as airport operators and technology vendors. For example, in accordance with provisions of the Homeland Security Act and ATSA, S&T and TSA are to coordinate and collaborate with internal and external stakeholders on matters related to technologies and countermeasures for homeland security missions. S&T and TSA signed an MOU in August 2006 that establishes a framework to coordinate their work at the TSL, which tests and evaluates technologies under development. S&T also established a Capstone IPT for Explosives Prevention in 2006 to bring S&T, TSA, and U.S. Secret Service leadership together to identify gaps in explosives detection capability; prioritize identified gaps; review relevant, ongoing S&T programs; and develop capabilities to meet identified needs. However, inconsistent communication and the lack of an overarching test and evaluation strategy have limited S&T’s and TSA’s ability to coordinate effectively with one another. To coordinate with the aviation community, S&T and TSA have hosted industry days and conference calls to discuss new technologies with airport operators and technology vendors. 
Although TSA has taken actions to build partnerships with airport operators and vendors, it has not established a systematic process to coordinate with them on checkpoint screening technologies, although TSA officials stated that they are in the beginning stages of establishing one. S&T and TSA have taken actions to coordinate and collaborate with each other on the RDT&E of checkpoint screening technologies, such as by communicating priorities and requirements for technologies and working together on the Capstone IPT for Explosives Prevention. However, S&T and TSA coordination and collaboration were not always effective due to inconsistent communication and the lack of an overarching test and evaluation strategy. The Homeland Security Act assigned responsibilities within the department for coordinating and integrating the research, development, demonstration, testing, and evaluation activities of the department, as well as for working with federal and private sector stakeholders to develop innovative approaches to produce and deploy the best available technologies for homeland security missions. The act further assigned S&T responsibility for coordinating with other appropriate executive agencies in developing and carrying out the science and technology agenda of the department to reduce duplication and identify unmet needs. ATSA also assigned coordination responsibilities to TSA, including the coordination of countermeasures with appropriate departments, agencies, and instrumentalities of the U.S. government. S&T and TSA have taken several actions to coordinate and collaborate on their research and development activities related to checkpoint screening technologies.
First, to coordinate the transition of the TSL from TSA to S&T, minimize disruption of work, and prevent duplication of effort, S&T and TSA signed an MOU that defines the roles and responsibilities for the research and development of homeland security technologies, including checkpoint screening, and establishes a framework for how to coordinate their work. Additionally, S&T created the Capstone IPT for Explosives Prevention, which is co-chaired by the Assistant Secretary for TSA and the Director of the U.S. Secret Service, to identify and prioritize capabilities needed to detect explosives; review relevant, ongoing S&T programs; and develop capabilities to meet the identified needs. The IPT was first convened in December 2006 to identify research and development priorities for explosives detection technologies at airport checkpoints as well as for other transportation modes, and has met periodically since then. According to TSA officials, the Capstone IPT has enabled TSA to establish a clear understanding with S&T of TSA’s needs for technology solutions that meet stringent detection thresholds and throughput requirements to support the aviation sector. Additionally, the officials stated that the Capstone IPT has given TSA a better collective understanding of the technology needs of other DHS components, which will help DHS identify technology solutions that can be combined to benefit multiple users. Finally, to follow through on the priorities established by the Capstone IPT for Explosives Prevention, S&T officials stated that they established project-level IPTs, including one for airport checkpoints and one for homemade explosives. S&T officials stated that they are working with TSA on these project-level IPTs to try to meet the needs identified by the Capstone IPT. TSA officials further stated that they have PSP IPTs or working groups to coordinate on technology projects, establish program goals and objectives, and develop requirements and time lines. 
These groups meet on a weekly basis, according to TSA officials. In April 2008, S&T dissolved the IPT for explosives detection and replaced it with two separate IPTs: a transportation security IPT, chaired by TSA, and a counter-IED IPT, chaired by the Office of Bombing Prevention within the National Protection and Programs Directorate and the United States Secret Service. Coordination and collaboration efforts between S&T and TSA have helped in identifying checkpoint screening solutions. For example, S&T and TSA officials collaborated on a hand-held vapor detection unit called the Fido PaxPoint. After the August 2006 discovery of the alleged plot to detonate liquid explosives on board commercial air carriers bound for the United States from the United Kingdom, S&T and TSA worked together to identify, develop, and test screening technologies to address this threat. According to TSA officials, S&T learned that the Department of Defense had developed a handheld unit that could detect vapors from explosives. S&T modified the Department of Defense handheld unit, resulting in the Fido PaxPoint unit used to screen liquids and gels at airport checkpoints for explosives, and S&T helped TSA test and evaluate the device. Although S&T and TSA have taken steps to coordinate and collaborate with one another, inconsistent communication and the lack of an overarching test and evaluation strategy have contributed to coordination and collaboration challenges. Specifically, communication between S&T and TSA related to S&T’s basic and applied research efforts and TSA’s efforts to modify commercially available technologies has been lacking at times.
For example, TSA officials stated that early in the TSL’s transition to S&T (that is, during fiscal year 2006), TSA did not receive information from S&T regarding which of TSA’s research and development needs S&T would fund, which projects related to airport checkpoint technologies were underway at the TSL, or the time frames to complete those projects. TSA officials stated that, without this information, TSA was unable to determine whether its work on modifying commercially available technologies for screening passengers and carry-on items unnecessarily duplicated S&T’s research and development efforts, although TSA officials were not aware of any duplication that occurred. An S&T official further stated that TSA had not consistently fulfilled its responsibility to provide clearly defined functional requirements for the equipment to be developed by S&T and tested by the TSL, nor had it consistently given the TSL sufficient notice of TSA testing requests. Under the S&T and TSA MOU, TSA has retained responsibility to establish requirements for equipment certification and qualification and acceptance testing. Specifically, an S&T official at the TSL stated that TSA had inadequately defined the functional requirements and allowed too little time for testing several checkpoint screening technologies, including the Advanced Technology Systems, Enhanced Metal Detector II, and Bottled Liquids Scanner. A TSL official acknowledged that when TSA was responsible for the TSL, the agency had not consistently developed requirements prior to testing or certification of equipment, as required by DHS guidance. In another example, as previously mentioned in this report, TSA is developing new certification standards and functional requirements for screening technologies, and is working with national laboratories to validate data on aircraft vulnerabilities and generate new computer models to help TSA develop requirements for explosives detection.
According to the TSA Chief Technology Officer in 2007, the TSL has custody of the aircraft vulnerability data, but TSL officials had refused to release the data to the national laboratories as requested by TSA. Although the TSL later provided 32 of the 46 requested reports, TSA officials estimated that the TSL’s refusal to release all of the reports had delayed the effort to develop new certification standards and technology requirements by about 1 month. The officials added that most of TSA’s requests to S&T and the TSL had involved similar problems and that, although the MOU provides a framework for coordination, these types of problems are related to day-to-day operations and will have to be resolved as situations arise. According to S&T and TSA officials, senior-level management turnover at S&T and TSA, as well as an S&T reorganization that began in August 2006 with the arrival of a new Under Secretary for Science and Technology, contributed to these communication difficulties. S&T officials further stated that, prior to the establishment of the PSP working groups, there was no mechanism for S&T and TSA to communicate information about priorities, funding, or project timelines. However, S&T officials stated that, through the working groups, S&T and TSA are beginning to achieve regular communication and interaction at the working level, which allows information to be shared in a mutually beneficial way. S&T and TSA officials also stated that communication with each other has improved since the MOU was signed in August 2006, and in particular since the summer of 2007, although officials from both organizations stated that further improvement is needed. According to S&T officials, the TSL’s independent test and evaluation division and TSA have developed an effective working relationship for several programs, including the Whole Body Imager and Advanced Technology Systems.
In addition, S&T officials stated that TSA has come to better understand the processes involving the Capstone IPT and identifying capability needs. According to TSA officials, the agency is in the process of determining whether a position within its Office of Security Technology should be established as a liaison with S&T to improve coordination between S&T and TSA. If the position is created, the TSA liaison would coordinate and collaborate with S&T officials on technology projects by assessing the science that supports the technologies. The MOU specifies that S&T and TSA will coordinate activities, including developing an integrated, overarching test and evaluation strategy for projects to ensure that test and evaluation functions are not duplicative, adequate resources are outlined and secured for these functions, and activities are scheduled to support the overall project master schedule. However, an overarching test and evaluation strategy for checkpoint technologies has not been developed. The lack of this strategy has presented coordination and collaboration challenges between S&T and TSA, and has resulted in the delay of some technologies. For example, a TSL official stated that the TSL could not accommodate TSA’s request to test the Advanced Technology Systems, in part, because TSA officials had not provided sufficient advance notice of their testing needs. TSA officials said they were working with S&T to develop a project master schedule for the Advanced Technology Systems. S&T and TSA officials stated that they plan to develop a test and evaluation strategy to define a coordinated technology transition process from S&T to TSA by outlining key responsibilities and criteria to initiate field evaluations of technologies, but officials could not tell us when the test and evaluation strategy would be completed. 
DHS, through S&T and TSA, coordinates with airport operators, private sector partners such as technology vendors, and other federal agencies on matters related to research and development efforts. This coordination and collaboration between TSA and airport operators and technology vendors is important because the agency relies on airport operators to facilitate the deployment of equipment for testing and day-to-day operations, and on vendors to develop and manufacture new screening equipment. However, TSA does not have a systematic process to coordinate with technology vendors, airport operators, and other external stakeholders on the RDT&E, procurement, and deployment of checkpoint screening technologies. Agency officials stated that they plan to develop and implement such a process and have developed a draft communications plan, now under review, that will document the communications process; however, TSA could not provide an expected completion date for the plan. Although such a plan should help provide consistency to the agency’s coordination efforts, without knowing the specific activities the plan will include or when it will be implemented, we cannot determine the extent to which the plan may strengthen coordination. In addition, in September 2007, TSA hired an Industry Outreach Manager within its Office of Security Technology to improve relationships with airport operators and communication with internal TSA stakeholders related to screening technologies, including checkpoint technologies.
In general, the Industry Outreach Manager is the communications liaison for the Office of Security Technology stakeholders and customers to exchange ideas, information, and operational expertise in support of the office’s mission and goals, and to provide cutting-edge technologies in the most efficient and cost-effective means possible. In addition to these steps, in January 2007, S&T created a Corporate Communications Division to coordinate on a wide variety of science and technology efforts with public and private sector stakeholders. This office is in the process of developing a tool to assess the effectiveness of its outreach efforts to industry stakeholders. The AMS guidance recommends that TSA coordinate with airport operators to work out all equipment installation issues prior to deployment. According to TSA officials, the role of the airport operator is essential in ensuring that solutions under development are suitable for use in an airport environment, taking into consideration all logistical and operational constraints and possibilities. As described earlier, provisions of the Homeland Security Act address the need to coordinate research and development efforts to further homeland security missions, and reinforce the importance of coordinating and collaborating with airport operators. TSA sponsors monthly conference calls with airport operators to discuss issues of general interest and, according to S&T officials, S&T has conducted pilot studies with airport operators. However, according to many of the 33 airport operators we interviewed, TSA’s coordination on the priorities for and deployment of checkpoint screening technologies has been inconsistent. Specifically, of the 33 airport operators we interviewed, 8 had only positive comments about TSA’s coordination and 16 expressed only concerns regarding TSA’s coordination efforts, while 9 expressed both positive comments and concerns. 
Eleven of the 33 airport operators told us that TSA had not shared information with them regarding checkpoint technology needs and priorities. For example, an airport operator stated that TSA provided specifications for new screening technologies with sufficient lead time for the airport, which was building a new checkpoint at the time, and that TSA had numerous coordination meetings with airport officials to determine space constraints, power requirements, and other factors. However, this same airport operator expressed a desire for more coordination by TSA in the agency’s selection of the technologies to be pilot tested at this airport. Another airport operator stated that, when TSA asks for volunteers to participate in checkpoint screening technology pilot programs, it is difficult to agree to participate because TSA does not clearly communicate the program’s goals or the capabilities of the technology in the pilot program. According to airport operators at another airport, TSA officials told them that they would have the latitude to select the ETP from either of two vendors on the TSA contract for purchase. According to the airport officials, after they selected equipment from one of the vendors because it would fit into the physical layout of the airport’s checkpoint, TSA told the airport officials that particular ETP vendor was no longer under contract with TSA. As a result, airport officials stated that they had to redesign the checkpoint, including raising the ceiling, to accommodate the other vendor’s ETP. Senior officials in TSA’s Office of Operational Process and Technology, the office responsible for the development and implementation of security technologies across several modes of transportation, subsequently agreed that coordination with airport managers and other stakeholders could be improved. 
According to TSA officials, coordinating with technology vendors is essential in order to determine what technology platform would be appropriate and capable of providing the required detection and throughput capabilities. S&T and TSA have conducted outreach efforts to coordinate with technology vendors. For example, S&T officials stated that they have hosted forums known as industry days and attended conferences to discuss the types of technologies that need to be developed and the department’s priorities for research and development. S&T officials also stated that they make presentations at technology-related conferences, symposia, and exhibits, highlighting the work conducted by S&T. At every industry day and conference, officials said, airport security and checkpoint screening technologies have been discussed. In addition, TSA has coordinated with technology vendors through industry days, individual meetings, and conferences. For example, TSA officials stated that TSA held industry days with technology vendors to provide a forum to communicate information to potential vendors on specific technology testing and procurement efforts, and to allow vendors to ask questions regarding technology projects and TSA expectations. Despite these outreach efforts, of the seven vendors we interviewed who had contracted with TSA to provide checkpoint screening technologies, officials from five expressed concerns about the agency’s ability to coordinate with them on current or future needs for checkpoint technologies. Officials from four of the seven vendors stated that TSA had not communicated a strategic vision for the screening technologies that will be needed at the checkpoint in the future, and that TSA did not effectively and clearly communicate standards and requirements for technologies to vendors.
For example, just as TSL officials commented that TSA did not always provide clear and quantifiable requirements to conduct tests of screening technologies, vendors stated that TSA had not communicated effectively about its future needs, such as the operational requirements for an advanced, integrated checkpoint screening system. Therefore, a vendor official stated that some of them had taken the initiative to develop integrated screening technologies in the hope that TSA will eventually request this type of integrated system. TSA did not express an opinion regarding the specific concerns raised by the technology vendors, but a senior TSL official stated that TSA should sponsor better briefings for vendors after the agency announces its intentions to develop new technologies. The official stated that these briefings could provide vendors with an opportunity for open dialogue with TSA and clarification of TSA’s needs for new technologies. According to a vendor, without adequate coordination and communication from TSA, the vendors’ ability is limited in deciding how best to invest their resources to develop new checkpoint screening technologies. In addition to coordinating and collaborating with airport operators and technology vendors, S&T and TSA coordinate and collaborate on the department’s RDT&E efforts with other federal agencies through participation in the Technical Support Working Group, which is co-chaired by the Departments of Defense and State. The Technical Support Working Group is the U.S. national forum that identifies, prioritizes, and coordinates interagency research and development of technologies to combat terrorist acts, including explosives detection technologies. S&T also coordinates with the national laboratories on homeland security research. Specifically, S&T’s Office of National Laboratories coordinates homeland security-related activities and laboratory-directed research conducted within the Department of Energy’s national laboratories. 
According to an S&T senior official, S&T has worked with the national laboratories to supplement S&T’s research and development of explosives detection technologies by tasking the national laboratories to conduct basic research on the characteristics of homemade explosives. Researching, developing, testing and evaluating, procuring, and deploying checkpoint technologies capable of detecting ever-changing threats to the commercial aviation system is a daunting task. Although TSA has recently produced a strategic plan that identified a strategy for the PSP, neither the plan nor the agency’s strategy for researching, developing, and deploying checkpoint technologies was informed by some key risk management principles, including a risk assessment, cost–benefit analysis, and performance measures. Without conducting a risk assessment that includes all three elements of risk—threat, vulnerability, and consequence—and completing a cost–benefit analysis to guide the PSP strategy, TSA has limited assurance that its strategy targets the most critical risks and that it invests in the most cost-effective new technologies or other protective measures. Further, without developing performance measures that assess the extent to which checkpoint screening technologies achieve the PSP’s security goals and thereby reduce or mitigate the risk of terrorist attacks, TSA is limited in its ability to determine the success of its strategy and make needed adjustments. Even though TSA has not implemented a risk-informed strategy to ensure that its investments target the most pressing security needs, the agency has moved forward in investing in new checkpoint screening technologies. 
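To illustrate the kind of analysis GAO is recommending, the three elements of risk are often combined in the multiplicative model risk = threat x vulnerability x consequence, and candidate countermeasures can then be compared by risk reduction per dollar. The sketch below is ours, not TSA's methodology; the technology names and every number are illustrative assumptions:

```python
def risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Expected loss: P(attack attempted) x P(attack succeeds) x loss if it succeeds."""
    return threat * vulnerability * consequence

# Illustrative baseline: 10% chance of an attempt, 50% chance it succeeds,
# consequence scored at 1,000 (all numbers are assumptions, not TSA data).
baseline = risk(threat=0.10, vulnerability=0.50, consequence=1000.0)

# Hypothetical countermeasures: (name, vulnerability after deployment, annual cost).
options = [
    ("technology A", 0.20, 6.0),
    ("technology B", 0.35, 2.0),
]

for name, new_vulnerability, cost in options:
    reduction = baseline - risk(0.10, new_vulnerability, 1000.0)
    print(f"{name}: risk reduced by {reduction:.1f} at cost {cost:.1f} "
          f"-> {reduction / cost:.2f} reduction per unit cost")
```

On these made-up numbers, technology A mitigates more risk overall (30 versus 15) but technology B mitigates more risk per dollar (7.50 versus 5.00), which is precisely the trade-off that a cost-benefit analysis surfaces for decision makers.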
Despite limited progress in the RDT&E, procurement, and deployment of new checkpoint screening technologies during the first few years that S&T and TSA had responsibilities related to these technologies, more recently, the organizations have made progress, as reflected by the number of technologies for which procurement and deployment have been initiated. TSA faced challenges with the first new technology that it procured and deployed—the ETP. In the interest of protecting the homeland, it is understandable that TSA may, at times, not follow all established guidance in an effort to deploy technologies quickly to address urgent threats and vulnerabilities. However, deploying the ETP despite unresolved performance concerns identified during testing of earlier ETP models, as well as failing to ensure that the ETP models that were ultimately deployed had passed operational testing, increased the risk that the machines would not perform as intended, resulting in a questionable security benefit. TSA did not follow AMS guidance that recommends operational testing of a new technology prior to deployment, when it is more cost-effective to resolve performance issues. While TSA deployed the ETPs to provide a much-needed capability to automatically screen higher-risk passengers at airport checkpoints, relying on the ETPs could have left airport checkpoints more vulnerable given the ETPs’ performance problems and lack of operational testing. Also, relying on the ETPs to screen these particular passengers instead of existing screening procedures may not enhance airport checkpoint security because TSA does not know whether ETP screening provides an improved detection capability compared to existing screening procedures. Moreover, it is risky to substitute any new technology for existing screening procedures before the technology has been proven effective through operational testing.
Although TSA is trying to deploy new technologies to address immediate threats, the problems associated with the development and deployment of the ETPs may be repeated with other technologies unless TSA adheres to testing guidance and makes decisions using a knowledge-based acquisition approach. Finally, it is not clear whether it is worthwhile to continue to use the ETPs currently deployed to airports due to the costs associated with maintaining the machines in good, operational condition. To help ensure that DHS’s Science and Technology Directorate (S&T) and Transportation Security Administration (TSA) take a comprehensive, risk-informed approach to the RDT&E, procurement, and deployment of airport passenger checkpoint screening technologies, and to increase the likelihood of successful procurements and deployments of such technologies, in the restricted version of this report, we recommended that the Assistant Secretary for TSA take the following eight actions:

- Conduct a complete risk assessment, including threat, vulnerability, and consequence assessments, which would apply to the PSP.

- Develop cost-benefit analyses to assist in prioritizing investments in new checkpoint screening technologies.

- Develop quantifiable performance measures to assess the extent to which investments in research, development, and deployment of checkpoint screening technologies achieve performance goals for enhancing security at airport passenger checkpoints.

- After conducting a complete risk assessment and completing cost-benefit analyses and quantifiable performance measures for the PSP, incorporate the results of these efforts into the PSP strategy as determined appropriate.

- To the extent feasible, ensure that operational tests and evaluations have been successfully completed before deploying checkpoint screening technologies to airport checkpoints.
Evaluate whether TSA’s current passenger screening procedures should be revised to require the use of appropriate screening procedures until it is determined that existing emerging technologies meet their functional requirements in an operational environment. In the future, prior to testing or using all checkpoint screening technologies at airports, determine whether TSA’s passenger screening procedures should be revised to require the use of appropriate screening procedures until the performance of the technologies has been validated through successful testing and evaluation. Evaluate the benefits of the Explosives Trace Portals that are being used in airports, and compare the benefits to the costs to operate and maintain this technology to determine whether it is cost-effective to continue to use the machines in airports. We provided a draft of our restricted report to DHS for review and comment. On April 7, 2009, DHS provided written comments, which are presented in Appendix II. In commenting on our report, DHS stated that it agreed with our recommendations and identified actions planned or underway to implement them. While DHS is taking steps to address our first and second recommendations related to conducting a risk assessment, the actions DHS reported TSA had taken or plans to take do not fully address the intent of the remaining six recommendations. In its comments, DHS stated that it concurred with our first recommendation that a risk assessment should be developed for the PSP and that TSA has two efforts currently underway to do so. Completion of TSA’s first effort—the Air Domain Risk Analysis (ADRA)—is expected in the winter of 2009. DHS commented that TSA’s second effort is the Risk Management and Analysis Toolset (RMAT), a model to simulate the potential of some technologies to reduce the risk of certain threat scenarios which will apply specifically to the passenger screening process. 
DHS reported that it expects initial results from RMAT to be available during the second quarter of 2009. DHS further stated that TSA has made resource allocation and technology decisions that were informed by consideration of risk (including threat, vulnerability, and consequence), although not by comparative assessments of these three elements. However, as we reported, TSA has not conducted a risk assessment for the PSP, and it is unclear to what extent the ADRA would provide risk information needed to support the PSP. Until such a risk assessment is developed and integrated into TSA's strategy for the PSP, TSA continues to invest in checkpoint technologies without the benefit of a risk-informed strategy and increases the possibility that its investments will not address the highest-priority security needs. DHS also concurred with our second recommendation that it develop cost-benefit analyses. DHS commented that TSA is developing an approach for selecting cost-effective technologies by developing life-cycle cost estimates and using the RMAT tool to determine how technologies balance risk (based on current threats) with cost. TSA's decision to collect cost and benefit information is a positive first step. Irrespective of how TSA collects data on the costs and benefits of technologies, it is important, as we reported, that TSA conduct a cost-benefit analysis for each checkpoint technology it invests in, weighing the costs and benefits of each technology relative to those of other solutions. Such analysis is important because it helps decision-makers determine whether investments in technologies or in other security programs will provide the greatest mitigation of risk for the resources that are available. DHS concurred with our third recommendation that TSA develop quantifiable performance measures to assess the extent to which TSA's investments in checkpoint screening technologies make the checkpoint more secure, the key mission of the program.
DHS commented that it currently collects quantifiable performance attributes for all potential acquisitions with regard to metrics such as detection, false alarm rate, and operational availability, and plans to use information on machines' attributes as measures of the PSP's overall effectiveness as a program. However, these actions will not fully address our third recommendation. First, information collected on potential acquisitions prior to their deployment may not reflect their performance in an operational environment; consequently, relying on information about technologies' attributes rather than measuring the effectiveness of deployed technologies to secure the checkpoint will likely have limited value in terms of measuring the effectiveness of the PSP as a program. Second, as we reported, the ETP example illustrates that TSA did not collect information on the ETP's performance attributes such as operational availability during laboratory testing prior to procurement and did not collect data on the ETP's detection capabilities during tests in an operational environment. This raises questions about the completeness of data TSA collects on technologies prior to acquisition and deployment. We could not verify that TSA collects such information on other technologies because TSA did not provide documentation to support this comment. As TSA moves forward in developing performance measures, it is important that these measures reflect not only the efficiency of the technologies in processing passengers but also the effectiveness of technologies and other countermeasures in making the checkpoint more secure and thereby reducing the risks posed by the most pressing threat scenarios that will be identified once TSA completes its risk assessment. In addition, DHS concurred with our fourth recommendation that it develop a PSP strategic plan that reflects the risk assessment, cost-benefit analysis, and performance measures.
DHS commented that TSA plans to combine results from the RMAT tool and life-cycle cost estimates for possible technology solutions that strike a balance between risk and efficient use of funding. DHS also stated it will use RMAT to develop proxy measures and general "what-if" analysis and risk insights. However, these actions alone will not satisfy the intent of this recommendation. While it is possible that proxy measures could be developed to assess the extent to which TSA's investments in the research and development of technologies have achieved program goals of making the checkpoint more secure, to fully address this recommendation, TSA must also conduct a risk assessment that addresses the PSP, develop quantifiable measures that clearly assess the PSP's progress towards its security goals, and revise its strategic plan accordingly. DHS concurred with our fifth recommendation that before deploying technologies to airport checkpoints, the technologies should successfully complete testing and evaluation and stated that TSA is taking action to implement a formal testing process. DHS commented that TSA has prepared a Test and Evaluation Master Plan (TEMP) that describes a new formal testing process that is consistent with DHS's new acquisition directive. However, the TEMP does not address the intent of this recommendation. We deleted from this public report our evaluation of why the TEMP does not address the intent of this recommendation, because TSA determined our evaluation to be sensitive security information. Further, DHS agreed with our sixth and seventh recommendations that TSA evaluate whether its screening procedures should be revised to require the use of appropriate procedures until it can be determined that emerging technologies or future technologies that may be developed meet all of their requirements in an operational environment. However, DHS's comments suggest that it does not intend to implement these recommendations.
DHS commented that the performance of machines is always measured and confirmed in the laboratory setting prior to operational field testing. However, we disagree that laboratory testing is sufficient to address this recommendation. We deleted from this public report our evaluation of why laboratory testing alone does not address the intent of this recommendation, because TSA determined our evaluation to be sensitive security information. DHS stated that TSA implemented our eighth recommendation that the agency evaluate the benefits of the ETP, such as its effectiveness, and conduct a cost-benefit analysis to determine whether the technologies should remain in use at airports. However, we disagree that TSA has implemented this recommendation. DHS commented that two actions fulfilled this recommendation: TSA’s current program management reviews in which costs are periodically discussed with vendors and the laboratory testing of the ETP’s detection capabilities. To fully address this recommendation, a cost-benefit analysis and tests of the ETP’s effectiveness to detect explosives in an operational environment are required. As we reported, TSA has not conducted cost-benefit analyses, which, as noted earlier, should compare costs and benefits of alternative solutions. Discussions of maintenance costs with vendors on a periodic basis do not constitute a cost-benefit analysis. Based on DHS’s written comments, we deleted a reference to the 2004 OMB PART review in a footnote because of updated information from OMB’s 2008 PART review. DHS also provided us with technical comments, which we considered and incorporated in the report where appropriate. In particular, we clarified the wording of a recommendation which originally stated that TSA should develop quantifiable performance measures to assess the extent to which investments in research, development, and deployment of checkpoint screening technologies have mitigated the risks of a terrorist attack. 
We altered the wording to state that performance measures should be developed to assess progress towards security goals. As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution for 45 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Assistant Secretary of the Transportation Security Administration, and appropriate congressional committees. If you or your staffs have any questions about this report, please contact me at (202) 512-8777 or LordS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report addresses the following questions: (1) To what extent has the Transportation Security Administration (TSA) developed a risk-informed strategy to prioritize investments in the research and development of passenger checkpoint screening technologies? (2) What new passenger checkpoint screening technologies has the Department of Homeland Security (DHS) researched, developed, tested and evaluated, procured, and deployed since its creation, and why did TSA halt the first technology deployment that it initiated—the Explosives Trace Portal (ETP)? (3) To what extent has DHS coordinated the research, development, test and evaluation (RDT&E), procurement, and deployment of passenger checkpoint screening technologies internally and with key stakeholders, such as airport operators and technology vendors? To determine the extent to which TSA has developed a risk-informed strategy to prioritize investments in the research and development of passenger checkpoint screening technologies, we analyzed program documents, TSA’s August 2008 strategic plan for checkpoint technologies, TSA’s September 2007 report on the development of a strategic plan, technology project plans, and funding. 
We also compared TSA’s strategic plan and DHS’s responses regarding their efforts to manage their research and development investments, with DHS’s guidance from the National Infrastructure Protection Plan on how to utilize risk management principles to target funding. To determine the extent to which DHS researched, developed, tested and evaluated, procured, and deployed new checkpoint screening technologies since its creation, and to identify why TSA halted deployment of the ETP, we analyzed TSA’s strategic plan for checkpoint technologies, TSA’s Passenger Screening Program (PSP) documentation, including information on the status of technologies being researched, developed, tested and evaluated, procured, and deployed. Regarding the ETPs, we analyzed the functional requirements for the system, contracts with General Electric and Smiths Detection, and test reports for acceptance tests, regression tests, and operational tests. We also reviewed ETP deployment schedules and documentation on operational availability and mean time between critical failure, and interviewed TSA officials about the reasons that the ETP deployment was halted. We also compared the ETP test approach used by S&T and TSA to the Acquisition Management System (AMS) guidance and knowledge-based acquisition best practices. We also interviewed TSA and S&T officials to obtain information on current investments in the research, development, and deployment of checkpoint technologies, and conducted site visits to the Transportation Security Laboratory in Atlantic City, New Jersey, and Tyndall Air Force Base, Florida, to observe testing of new checkpoint technologies. We visited the TSL because that is where S&T tests and evaluates technologies, including checkpoint screening technologies. We visited Tyndall Air Force Base because technologies to detect bottled liquids explosives were being tested there. 
Additionally, we analyzed TSA’s passenger screening standard operating procedures and interviewed various TSA headquarters officials, 29 Federal Security Directors, 1 Deputy Federal Security Director, and 5 Assistant Federal Security Directors for Screening, and visited nine airports where the ETPs had been or were to be deployed or new checkpoint screening technologies were undergoing pilot testing. We chose these officials because they are the senior official at the airport in charge of security and manage TSA’s role in deploying new technologies at the airport. We selected these nine locations based on the technologies that had been deployed or were being tested, their geography, size, and proximity to research and development laboratories. Of the nine airports we visited, the ETPs had been or were to be deployed to seven of them, and other new checkpoint screening technologies were undergoing pilot demonstrations or testing at two of them. We visited four airports on the east coast, and three airports on the west coast, and two airports located in the west and southwestern regions of the United States. To determine whether the ETP’s requirements had been tested prior to procuring and deploying them, we selected a non-probability sample of 8 out of the 157 total requirements. We selected the 8 requirements because they were related to some of the ETP’s key functionality requirements, including operational effectiveness, operational suitability, and passenger throughput. To determine the extent to which DHS has coordinated and collaborated on the RDT&E, procurement, and deployment of passenger screening technologies internally and with key stakeholders, we analyzed program documents, including an August 2006 memorandum of understanding between TSA and S&T for the management of the Transportation Security Laboratory (TSL). 
Additionally, we interviewed Department of State officials, TSA and S&T officials, seven checkpoint technology vendors, and airport operators and other officials at airports where ETPs were initially deployed. Because we selected nonprobability samples of airports to visit and officials to interview, we cannot generalize the results of what we learned to airports nationwide. However, the information we gathered from these locations and officials provided us with insights and perspectives on DHS's efforts to operationally test, evaluate, and deploy checkpoint technologies that could only be obtained from officials stationed at locations where the technologies had been tested or deployed. We reviewed the Acquisition Management System, the Aviation and Transportation Security Act, the Homeland Security Act of 2002, and the Intelligence Reform and Terrorism Prevention Act and identified requirements and guidance for coordination and collaboration among S&T, TSA, and other stakeholders. We also reviewed S&T's and TSA's coordination activities and compared them to TSA program guidance and GAO's recommended coordination practices regarding agency coordination with external stakeholders. We conducted this performance audit from June 2006 through April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Robert Goldenkoff, Acting Director; E. Anne Laffoon and Steve Morris, Assistant Directors; and Joseph E. Dewechter, Analyst-in-Charge, managed this assignment. Carissa Bryant, Chase Cook, Orlando Copeland, Neil Feldman, and Ryan MacMaster made significant contributions to the work.
Charles Bausell, Jr., Richard Hung, and Stanley Kostyla assisted with design, methodology, and data analysis. Michele Mackin assisted with acquisition and contracting issues. Sally Williamson, Linda Miller, and Kathryn Godfrey provided assistance in report preparation, and Thomas Lombardi provided legal support.

Since fiscal year 2002, the Transportation Security Administration (TSA) and the Department of Homeland Security (DHS) have invested over $795 million in technologies to screen passengers at airport checkpoints. The DHS Science and Technology Directorate (S&T) is responsible, with TSA, for researching and developing technologies, and TSA deploys them. GAO was asked to evaluate the extent to which (1) TSA used a risk-based strategy to prioritize technology investments; (2) DHS researched, developed, and deployed new technologies, and why deployment of the explosives trace portal (ETP) was halted; and (3) DHS coordinated research and development efforts with key stakeholders. To address these objectives, GAO analyzed DHS and TSA plans and documents, conducted site visits to research laboratories and nine airports, and interviewed agency officials, airport operators, and technology vendors. TSA completed a strategic plan to guide research, development, and deployment of passenger checkpoint screening technologies; however, the plan is not risk-based. According to TSA officials, the strategic plan and its underlying strategy for the Passenger Screening Program were developed using risk information, such as threat information. However, the strategic plan and its underlying strategy do not reflect some of the key risk management principles set forth in DHS's National Infrastructure Protection Plan (NIPP), such as conducting a risk assessment based on the three elements of risk--threat, vulnerability, and consequence--and developing a cost-benefit analysis and performance measures.
TSA officials stated that, as of September 2009, a draft risk assessment for all of commercial aviation, the Aviation Domain Risk Assessment, was being reviewed internally. However, completion of this risk assessment has been repeatedly delayed, and TSA could not identify the extent to which it will address all three elements of risk. TSA officials also stated that they expect to develop a cost-benefit analysis and establish performance measures, but officials could not provide timeframes for their completion. Without adhering to all key risk management principles as required in the NIPP, TSA lacks assurance that its investments in screening technologies address the highest priority security needs at airport passenger checkpoints. Since TSA's creation, 10 passenger screening technologies have been in various phases of research, development, test and evaluation, procurement, and deployment, but TSA has not deployed any of these technologies to airports nationwide. The ETP, the first new technology deployment initiated by TSA, was halted in June 2006 because of performance problems and high installation costs. Deployment has been initiated for four technologies--the ETP in January 2006, and the advanced technology systems, a cast and prosthesis scanner, and a bottled liquids scanner in 2008. TSA's acquisition guidance and leading commercial firms recommend testing the operational effectiveness and suitability of technologies or products prior to deploying them. However, in the case of the ETP, although TSA tested earlier models, the models ultimately chosen were not operationally tested before they were deployed to ensure they demonstrated effective performance in an operational environment. Without operationally testing technologies prior to deployment, TSA does not have reasonable assurance that technologies will perform as intended. 
DHS coordinated with stakeholders to research, develop, and deploy checkpoint screening technologies, but coordination challenges remain. Through several mechanisms, DHS is taking steps to strengthen coordination within the department and with airport operators and technology vendors.
In fiscal year 2009, the federal government spent over $4 billion specifically to improve the quality of our nation's 3 million teachers through numerous programs across the government. Teacher quality can be enhanced through a variety of activities, including training, recruitment, and curriculum and assessment tools. In turn, these activities can influence student learning and ultimately improve the global competitiveness of the American workforce in a knowledge-based economy. Prior GAO reports have noted that sustained coordination among key federal education programs could enhance state efforts to improve teacher quality. Federal efforts to improve teacher quality have led to the creation and expansion of a variety of programs across the federal government. However, there is no governmentwide strategy to minimize fragmentation, overlap, or potential duplication among these many programs. Specifically, GAO identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity, administered across 10 federal agencies. Many of these programs share similar goals. For example, 9 of the 82 programs support improving the quality of teaching in science, technology, engineering, and mathematics (STEM subjects), and these programs alone are administered across the Departments of Education, Defense, and Energy; the National Aeronautics and Space Administration; and the National Science Foundation. Further, in fiscal year 2010, the majority (53) of the programs GAO identified supporting teacher quality improvements received $50 million or less in funding, and many have their own separate administrative processes. The proliferation of programs has resulted in fragmentation that can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost effective, and ultimately increase program costs.
For example, eight different Education offices administer over 60 of the federal programs supporting teacher quality improvements, primarily in the form of competitive grants. Education officials believe that federal programs have failed to make significant progress in helping states close achievement gaps between schools serving students from different socioeconomic backgrounds, in part because federal programs that focus on teaching and learning of specific subjects are too fragmented to help state and district officials strengthen instruction and increase student achievement in a comprehensive manner. While Education officials noted, and GAO concurs, that a mixture of programs can target services to underserved populations and yield strategic innovations, the current programs are not structured in a way that enables educators and policymakers to identify the most effective practices to replicate. According to Education officials, it is typically not cost-effective to allocate the funds necessary to conduct rigorous evaluations of small programs; therefore, small programs are unlikely to be evaluated. Finally, it is more costly to administer multiple separate federal programs because each program has its own policies, applications, award competitions, reporting requirements, and, in some cases, federal evaluations. While all of the 82 federal programs GAO identified support teacher quality improvement efforts, several overlap in that they share more than one key program characteristic. For example, teacher quality programs may overlap if they share similar objectives, serve similar target groups, or fund similar activities. GAO previously reported that 23 of the programs administered by Education in fiscal year 2009 had improving teacher quality as a specific focus, which suggested that there may be overlap among these and other programs that have teacher quality improvements as an allowable activity.
When looking across a broader set of criteria, GAO found that 14 of the programs administered by Education overlapped with another program with regard to allowable activities as well as shared objectives and target groups (see figure 1). For example, the Transition to Teaching program and Teacher Quality Partnership Grant program can both be used to fund similar teacher preparation activities through institutions of higher education for the purpose of helping individuals from non-teaching fields become qualified to teach. Although there is overlap among these programs, several factors make it difficult to determine whether there is unnecessary duplication. First, when similar teacher quality activities are funded through different programs and delivered by different entities, some overlap can occur unintentionally, but is not necessarily wasteful. For example, a local school district could use funds from the Foreign Language Assistance program to pay for professional development for a teacher who will be implementing a new foreign language course, and this teacher could also attend a summer seminar on best practices for teaching the foreign language at a Language Resource Center. Second, by design, individual teachers may benefit from federally funded training or financial support at different points in their careers. Specifically, the teacher from this example could also receive teacher certification through a program funded by the Teachers for a Competitive Tomorrow program. Further, both broad and narrowly targeted programs exist simultaneously, meaning that the same teacher who receives professional development funded from any one or more of the above three programs might also receive professional development that is funded through Title I, Part A of ESEA. The actual content of these professional development activities may differ though, since the primary goal of each program is different. 
In this example, it would be difficult to know whether the absence of any one of these programs would make a difference in terms of the teacher’s ability to teach the new language effectively. In past work, GAO and Education’s Inspector General have concluded that improved planning and coordination could help Education better leverage expertise and limited resources, and to anticipate and develop options for addressing potential problems among the multitude of programs it administers. Generally, GAO has reported that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. However, given the large number of teacher quality programs and the extent of overlap, it is unlikely that improved coordination alone can fully mitigate the effects of the fragmented and overlapping federal effort. In 2009, GAO recommended that the Secretary of Education work with other agencies as appropriate to develop a coordinated approach for routinely and systematically sharing information that can assist federal programs, states, and local providers in achieving efficient service delivery. Coordination is essential to ensure that programs do not work at cross-purposes, do not repeat mistakes, and do not engage in wasteful duplication of services. Education has established working groups to help develop more effective collaboration across Education offices, and has reached out to other agencies to develop a framework for sharing information on some teacher quality activities, but it has noted that coordination efforts do not always prove useful and cannot fully eliminate barriers to program alignment, such as programs with differing definitions for similar populations of grantees, which create an impediment to coordination. 
Congress could help eliminate some of these barriers through legislation, particularly through the pending reauthorization of the Elementary and Secondary Education Act of 1965 and other key education bills. Specifically, to minimize any wasteful fragmentation and overlap among teacher quality programs, Congress may choose either to eliminate programs that are too small to evaluate cost-effectively or to combine programs serving similar target groups into larger programs. Education has already proposed combining 38 programs into 11 programs in its reauthorization proposal, which could allow the agency to dedicate a higher portion of its administrative resources to monitoring programs for results and providing technical assistance. Congress might also include legislative provisions to help Education reduce fragmentation, such as by giving broader discretion to the agency to move resources away from certain programs. Congress could provide Education guidelines for selecting these programs. For example, Congress could allow Education discretion to consolidate programs with administrative costs exceeding a certain threshold, or that fail to meet performance goals, into larger or more successful programs. Finally, to the extent that overlapping programs continue to be authorized, they could be better aligned with each other in a way that allows for comparison and evaluation to ensure they are complementary rather than duplicative. Federally funded employment and training programs play an important role in helping job seekers obtain employment. In fiscal year 2009, 47 programs spent about $18 billion to provide services, such as job search and job counseling, to program participants. Most of these programs are administered by the Departments of Labor, Education, and HHS. GAO has previously issued reports on the number of programs that provide employment and training services and overlap among them.
In the 1990s, GAO issued a series of reports that identified program overlap and possible areas of resulting inefficiencies. In 2000 and 2003, GAO identified programs for which a key program goal was providing employment and training assistance and tracked the increasing number of programs. GAO recently updated information on these programs, found overlap among them, and examined potential duplication among three selected large programs—HHS’s Temporary Assistance for Needy Families (TANF) and the Department of Labor’s Employment Service and Workforce Investment Act (WIA) Adult programs. Forty-four of the 47 federal employment and training programs GAO identified, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. Some of these overlapping programs serve multiple population groups. Others target specific populations, most commonly Native Americans, veterans, and youth. Even when programs overlap, they may have meaningful differences in their eligibility criteria or objectives, or they may provide similar types of services in different ways. GAO examined the TANF, Employment Service, and WIA Adult programs for potential duplication and found they provide some of the same services to the same population through separate administrative structures. Although the extent to which individuals receive the same services from these programs is unknown due to limited data, GAO found these programs maintain parallel administrative structures to provide some of the same services such as job search assistance to low-income individuals (see figure 2). It should be noted that employment is only one aspect of the TANF program, which also provides a wide range of other services, including cash assistance. 
At the state level, the TANF program is typically administered by the state human services or welfare agency, while the Employment Service and WIA Adult programs are typically administered by the state workforce agency and provided through one-stop centers. Agency officials acknowledged that greater efficiencies could be achieved in delivering services through these programs but said factors such as the number of clients that any one-stop center can serve and one-stop centers’ proximity to clients, particularly in rural areas, could warrant having multiple entities provide the same services. Colocating services and consolidating administrative structures may increase efficiencies and reduce costs, but implementation can be challenging. Some states have colocated TANF employment and training services in one-stop centers where Employment Service and WIA Adult services are provided. Three states—Florida, Texas, and Utah—have gone a step further by consolidating the agencies that administer these programs, and state officials said this reduced costs and improved services, but they could not provide a dollar figure for cost savings. States and localities may face challenges to colocating services, such as limited office space. In addition, consolidating administrative structures may be time consuming and any cost savings may not be immediately realized. An obstacle to further progress in achieving greater administrative efficiencies is that little information is available about the strategies and results of such initiatives. In addition, little is known about the incentives that states and localities have to undertake such initiatives and whether additional incentives are needed. To facilitate further progress by states and localities in increasing administrative efficiencies in employment and training programs, we recommended in 2011 that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts. 
This should include information about state initiatives to consolidate program administrative structures and state and local efforts to colocate new partners, such as TANF, at one-stop centers. Information on these topics could address challenges faced, strategies employed, results achieved, and remaining issues. As part of this effort, Labor and HHS should examine the incentives for states and localities to undertake such initiatives, and, as warranted, identify options for increasing such incentives. Labor and HHS agreed they should develop and disseminate this information. HHS noted that it lacks legal authority to mandate increased TANF-WIA coordination or create incentives for such efforts. To the extent that colocating services and consolidating administrative structures reduce administrative costs, funds could potentially be available to serve more clients or for other purposes. For the TANF program alone, GAO estimated that states spent about $160 million to administer employment and training services in fiscal year 2009. According to a Department of Labor official, the administrative costs for the WIA Adult program were at least $56 million in program year 2009. Officials told GAO they do not collect data on the administrative costs associated with the Employment Service program, as they are not a separately identifiable cost in the legislation. Labor officials said that, on average, the agency spends about $4,000 for each WIA Adult participant who receives training services. In periods of budgetary constraints, it is all the more important that resources are used effectively. Depending on the reduction in administrative costs associated with colocation and consolidation, these funds could be used to train potentially hundreds or thousands of additional individuals. This Committee has authority over a wide range of programs intended to help many of our neediest and most vulnerable citizens. 
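The potential scale of redirected savings noted above can be illustrated with a simple back-of-the-envelope calculation. The figures below come from this testimony (roughly $160 million in state TANF employment and training administrative spending, and about $4,000 per WIA Adult training participant); the percentage saved is a purely hypothetical assumption for illustration, not an estimate of actual savings.

```python
# Hypothetical illustration: capacity gained per dollar of
# administrative savings, using figures cited in this testimony.
WIA_TRAINING_COST_PER_PARTICIPANT = 4_000  # avg per Labor officials

def additional_trainees(admin_savings: float) -> int:
    """Number of additional WIA Adult participants who could be
    trained if admin_savings dollars were redirected to training."""
    return int(admin_savings // WIA_TRAINING_COST_PER_PARTICIPANT)

# A hypothetical 1 percent reduction in TANF's ~$160 million
# employment-and-training administrative spending alone would
# fund training for hundreds of additional individuals:
print(additional_trainees(0.01 * 160_000_000))  # → 400
```

Even a small assumed percentage reduction in administrative costs translates into hundreds of additional training slots, which is the arithmetic behind the "hundreds or thousands" figure above.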
With pending reauthorizations, it is an opportune time to consider options for addressing fragmentation, overlap, and potential duplication among these programs. In the past, Congress has taken a range of actions to address these issues that may help you as you consider how to proceed. Today, I would like to highlight three of these approaches: 1. enhancing program evaluations and performance information, 2. fostering coordination and strategic planning for program areas that span multiple federal agencies, and 3. consolidating existing programs or coordinating service delivery. Information about the effectiveness of programs can help guide policymakers and program managers in making tough decisions about how to prioritize the use of scarce resources and improve the efficiency of existing programs. However, there can be many challenges to obtaining this information. For example, it may not be cost-effective to allocate the funds necessary to conduct rigorous evaluations of small programs, and as a result these programs are unlikely to be evaluated. As we have reported, many programs, especially smaller programs, have not been evaluated, which can limit the ability of Congress to make informed decisions about which programs to continue, expand, modify, consolidate, or eliminate. For example, we found that of 47 employment and training programs we identified, 23 have not had a performance study of any kind completed since 2004, and only 5 have had an impact study completed since 2004. We recommended that Labor comply with the requirement in the Workforce Investment Act of 1998 to conduct an impact evaluation of WIA services to better understand what services are most effective for improving outcomes. However, Labor has been slow to implement this requirement, and does not expect to complete the study until June 2015. 
In 2009, GAO reported that while evaluations have been done, or are under way, for about two-fifths of the 23 programs we identified as being focused on teacher quality, little is known about the extent to which most programs are achieving their desired results. In 2010, GAO reported that there were 151 different federal K-12 and early education programs, but that more than half of these programs have not been evaluated, including 8 of the 20 largest programs, which together accounted for about 90 percent of total funding for these programs. There are also other governmentwide strategies that may play an important role. Specifically, in January 2011, the President signed the GPRA Modernization Act of 2010 (GPRAMA), updating the almost two-decades-old Government Performance and Results Act (GPRA). Implementing provisions of the new act—such as its emphasis on establishing outcome-oriented goals covering a limited number of crosscutting policy areas—could play an important role in clarifying desired outcomes and addressing program performance spanning multiple organizations. Specifically, GPRAMA requires agencies to (1) disclose information about the accuracy and reliability of performance information, (2) identify crosscutting management challenges, and (3) report quarterly on priority goals on a publicly available Web site. Additionally, GPRAMA significantly enhances requirements for agencies to consult with Congress when establishing or adjusting governmentwide and agency goals. OMB and agencies are to consult with relevant committees, obtaining majority and minority views, about proposed goals at least once every 2 years. This information can inform deliberations on spending priorities and help re-examine the fundamental structure, operation, funding, and performance of a number of federal education programs. 
However, to be successful, it will be important for agencies to build the analytical capacity both to use the performance information and to ensure its quality—both in terms of staff trained to do the analysis and availability of research and evaluation resources. Where programs cross federal agencies, Congress can establish requirements to ensure federal agencies are working together on common goals. For example, Congress mandated—through the America COMPETES Reauthorization Act of 2010—that the Office of Science and Technology Policy (OSTP) develop and maintain an inventory of STEM education programs, including documentation of the effectiveness of these programs, assess the potential overlap and potential duplication of these programs, and develop a 5-year strategic plan for STEM education, among other things. In doing so, Congress put in place a set of requirements designed to provide information it can use to inform decisions about strategic priorities. Consolidating existing programs or coordinating service delivery are other options for Congress to address fragmentation, overlap, and duplication. In the education area, Congress consolidated several bilingual education programs into the English Language Acquisition State Grant Program as part of the 2001 ESEA reauthorization. As we reported prior to the consolidation, existing bilingual programs shared the same goals, targeted the same types of children, and provided similar services. In consolidating these programs, Congress gave state and local educational agencies greater flexibility in the design and administration of language instructional programs. Congress has another opportunity to address these issues through the pending reauthorization of the Elementary and Secondary Education Act of 1965. 
Specifically, to minimize any wasteful fragmentation and overlap among teacher quality programs, Congress may choose either to eliminate programs that are too small to evaluate cost-effectively or to combine programs serving similar target groups into a larger program. In the employment and training area, Congress took steps to better coordinate service delivery for many employment and training programs when it enacted the Workforce Investment Act of 1998 (WIA). Specifically, WIA established one-stop centers in all states and mandated that numerous programs provide their services through the centers. In doing so, WIA sought to unify a fragmented employment and training system and create a single, universal system—a one-stop system that could serve the needs of all job seekers and employers. In conclusion, removing and preventing unnecessary duplication, overlap, and fragmentation among federal teacher quality and employment and training programs is clearly challenging. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities. Implementing provisions of GPRAMA—such as its emphasis on establishing priority outcome-oriented goals, including those covering crosscutting policy areas—could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Sustained attention and oversight by Congress will also be critical. As the nation rises to meet its current fiscal challenges, GAO will continue to assist Congress and federal agencies in identifying actions needed to address these issues. Likewise, we will continue to monitor developments in the areas we have already identified. Thank you, Mr. 
Chairman, Ranking Member Miller, and Members of the Committee. This concludes my prepared statement. I would be pleased to answer any questions you may have. For further information on this testimony, please contact Barbara Bovbjerg, Managing Director, Education, Workforce, and Income Security, who may be reached at (202) 512-7215, or BovbjergB@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Federal Education Funding: Overview of K-12 and Early Childhood Education Programs. GAO-10-51. Washington, D.C.: January 27, 2010. English Language Learning: Diverse Federal and State Efforts to Support Adult English Language Learning Could Benefit from More Coordination. GAO-09-575. Washington, D.C.: July 29, 2009. Teacher Preparation: Multiple Federal Education Offices Support Teacher Preparation for Instructing Students with Disabilities and English Language Learners, but Systematic Departmentwide Coordination Could Enhance This Assistance. GAO-09-573. Washington, D.C.: July 20, 2009. Teacher Quality: Sustained Coordination among Key Federal Education Programs Could Enhance State Efforts to Improve Teacher Quality. GAO-09-593. Washington, D.C.: July 6, 2009. Teacher Quality: Approaches, Implementation, and Evaluation of Key Federal Efforts. GAO-07-861T. Washington, D.C.: May 17, 2007. Higher Education: Science, Technology, Engineering, and Mathematics Trends and the Role of Federal Programs. GAO-06-702T. Washington, D.C.: May 3, 2006. Higher Education: Federal Science, Technology, Engineering, and Mathematics Programs and Related Trends. GAO-06-114. Washington, D.C.: October 12, 2005. Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004. Special Education: Grant Programs Designed to Serve Children Ages 0-5. GAO-02-394. Washington, D.C.: April 25, 2002. 
Head Start and Even Start: Greater Collaboration Needed on Measures of Adult Education and Literacy. GAO-02-348. Washington, D.C.: March 29, 2002. Bilingual Education: Four Overlapping Programs Could Be Consolidated. GAO-01-657. Washington, D.C.: May 14, 2001. Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs. GAO/HEHS-00-78. Washington, D.C.: April 28, 2000. Education and Care: Early Childhood Programs and Services for Low-Income Families. GAO/HEHS-00-11. Washington, D.C.: November 15, 1999. Federal Education Funding: Multiple Programs and Lack of Data Raise Efficiency and Effectiveness Concerns. GAO/T-HEHS-98-46. Washington, D.C.: November 6, 1997. Department of Education: Information on Consolidation Opportunities and Student Aid. GAO/T-HEHS-95-130. Washington, D.C.: April 6, 1995. Multiple Teacher Training Programs: Information on Budgets, Services, and Target Groups. GAO/HEHS-95-71FS. Washington, D.C.: February 22, 1995. Department of Education: Opportunities to Realize Savings. GAO/T-HEHS-95-56. Washington, D.C.: January 18, 1995. Early Childhood Programs: Multiple Programs and Overlapping Target Groups. GAO/HEHS-95-4FS. Washington, D.C.: October 31, 1994. Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011. Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003. Multiple Employment and Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: October 13, 2000. Department of Labor: Rethinking the Federal Role in Worker Protection and Workforce Development. GAO/T-HEHS-95-125. Washington, D.C.: April 4, 1995. Multiple Employment Training Programs: Information Crosswalk on 163 Employment and Training Programs. GAO/HEHS-95-85FS. 
Washington, D.C.: February 14, 1995. Multiple Employment Training Programs: Major Overhaul Needed to Create a More Efficient, Customer-Driven System. GAO/T-HEHS-95-70. Washington, D.C.: February 6, 1995. Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results. GAO/T-HEHS-95-53. Washington, D.C.: January 10, 1995. Multiple Employment Training Programs: Basic Program Data Often Missing. GAO/HEHS-94-239. Washington, D.C.: September 28, 1994. Multiple Employment Training Programs: How Legislative Proposals Address Concerns. GAO/T-HEHS-94-221. Washington, D.C.: August 4, 1994. Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency. GAO/HEHS-94-193. Washington, D.C.: July 11, 1994. Multiple Employment Training Programs: Major Overhaul Is Needed. GAO/T-HEHS-94-109. Washington, D.C.: March 3, 1994. Multiple Employment Training Programs: Conflicting Requirements Underscore Need for Change. GAO/T-HEHS-94-120. Washington, D.C.: March 2, 1994. Multiple Employment Training Programs: Most Federal Agencies Do Not Know If Their Programs Are Working Effectively. GAO/HEHS-94-88. Washington, D.C.: March 2, 1994. Multiple Employment Training Programs: Overlapping Programs Can Add Unnecessary Administrative Costs. GAO/HEHS-94-80. Washington, D.C.: January 28, 1994. Multiple Employment Training Programs: Conflicting Requirements Hamper Delivery of Services. GAO/HEHS-94-78. Washington, D.C.: January 28, 1994. Multiple Employment Programs: National Employment Training Strategy Needed. GAO/T-HRD-93-27. Washington, D.C.: June 18, 1993. Multiple Employment Programs. GAO/HRD-93-26R. Washington, D.C.: June 15, 1993. Multiple Employment Programs. GAO/HRD-92-39R. Washington, D.C.: July 24, 1992. List of Selected Federal Programs That Have Similar or Overlapping Objectives, Provide Similar Services, or Are Fragmented Across Government Missions. GAO-11-474R. 
Washington, D.C.: March 18, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-441T. Washington, D.C.: March 3, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. At-Risk and Delinquent Youth: Multiple Programs Lack Coordinated Federal Effort. GAO/T-HEHS-98-38. Washington, D.C.: November 5, 1997. At-Risk and Delinquent Youth: Multiple Federal Programs Raise Efficiency Questions. GAO/HEHS-96-34. Washington, D.C.: March 6, 1996. Federal Reorganization: Proposed Merger’s Impact on Existing Department of Education Activities. GAO/T-HEHS-95-188. Washington, D.C.: June 29, 1995. Federal Reorganization: Congressional Proposal to Merge Education, Labor, and EEOC. GAO/HEHS-95-140. Washington, D.C.: June 7, 1995. Government Restructuring: Identifying Potential Duplication in Federal Missions and Approaches. GAO/T-AIMD-95-161. Washington, D.C.: June 7, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses GAO's recent report entitled "Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue." This report delineates dozens of areas across government where fragmentation, overlap, and potential duplication merit the attention of Congress and the Administration, spanning a range of government missions: agriculture, defense, economic development, energy, general government, health, homeland security, international affairs, and social services. 
The report also describes other opportunities for federal departments, agencies, or Congress to take action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. Acting on these opportunities and reducing or eliminating duplication, overlap, or fragmentation could save billions of tax dollars annually and help agencies provide more efficient and effective services. With regard to issues of specific interest to this Committee, GAO found fragmentation, overlap, and potential duplication in the areas of federal programs to improve teacher quality and employment and training. Each of these areas is characterized by a large number of programs with similar goals, beneficiaries, and allowable activities that are administered by multiple federal agencies. Fragmentation of programs exists when programs serve the same broad area of national need but are administered across different federal agencies or offices. Program overlap exists when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. Given the challenges associated with fragmentation, overlap, and potential duplication, careful, thoughtful actions will be needed to address these issues. This testimony draws upon the results of our recently issued report and will address what is known about fragmentation, overlap, and potential duplication among federal teacher quality and employment and training programs. It also addresses options for Congress to help minimize fragmentation, overlap, and potential duplication and how it can use recent legislative tools to improve the effectiveness and efficiency of federal programs. 
(1) We identified 82 distinct programs designed to help improve teacher quality administered across 10 federal agencies, many of which share similar goals. However, there is no governmentwide strategy to minimize fragmentation, overlap, or potential duplication among these many programs. The fragmentation and overlap of teacher quality programs can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost-effective, and ultimately increase program costs. Congress could address these issues through legislation, particularly through the pending reauthorization of the Elementary and Secondary Education Act of 1965, and the Department of Education (Education) has already proposed combining 38 programs into 11 programs in its reauthorization proposal. (2) We found that 44 of the 47 employment and training programs we identified overlap with at least one other program in that they provide at least one similar service to a similar population. To facilitate further progress by states and localities in increasing administrative efficiencies, we recommended that the Secretaries of Labor and Health and Human Services (HHS) work together to develop and disseminate information that could inform such efforts. As part of its proposed changes to the Workforce Investment Act, the Administration proposes consolidating nine programs into three. In addition, the budget proposal would transfer the Senior Community Service Employment Program from Labor to HHS. (3) Sustained congressional oversight is pivotal in addressing these issues. Specifically, this Committee can look for opportunities to enhance program evaluations and performance information, foster coordination and strategic planning for program areas that span multiple federal agencies, and consolidate existing programs or coordinate service delivery.
The Postal Service is an independent establishment of the executive branch mandated by the Postal Reorganization Act of 1970 to provide postal services to the nation. The Service’s customers are provided, regardless of where they live, with postal services that include mail delivery at no charge and access to postal retail services. The act also required the Service to be self-supporting from postal revenues and attempted to eliminate legislative, budgetary, and financial policies that were inconsistent with efficient modern management and business practices. Providing the postal services required by the Postal Reorganization Act requires a significant transportation and facility network. To support this network, the Service spent approximately $2.3 billion on fuel in 2006. The majority of the Service’s fuel costs—over $1.7 billion—was used for transportation-related fuel. Figure 1 summarizes key operating statistics for the Service’s transportation network. (In figure 1, "Other" fuels include alternative fuels such as biodiesel, compressed natural gas (CNG), ethanol, electricity, and liquefied petroleum gas; "Other" transportation modes include rail and water.) As shown in figure 1, the Service relies heavily on highway and air transportation; diesel, gasoline, and jet fuel, all of which are petroleum-based fuels; and contractors to provide transportation-related services. The Service uses its own vehicle fleet as well as other personal and contractor-owned vehicles to carry out highway mail delivery and transportation services. Information on these methods is provided below. Key operating statistics for the Service’s owned fleet are provided in figure 2. Postal-owned vehicles are typically fueled in one of three ways: (1) at a retail fuel station using a Postal Service-issued purchasing card, (2) at a Postal facility using an on-site bulk-fuel tank, and (3) at a Postal facility using a supplier’s fuel truck. 1. 
Retail Fuel for the Postal-Owned Fleet: The majority of Postal-owned mail delivery vehicles are fueled primarily at retail fueling stations nationwide. Purchases are made using a Postal-issued purchasing card—the Voyager card. Under this program, which is administered through GSA, a purchase card is assigned to a designated Postal Service vehicle and can be used at over 200,000 retail locations throughout the United States. The benefits of using this card, which will be discussed later, include qualifying for rebates and volume discounts. 2. Bulk Fuel for the Postal-Owned Fleet: Postal facilities with fuel storage tanks can provide on-site fuel for Postal-owned vehicles. Fuel for these tanks is typically purchased through agreements with the Department of Defense’s Energy Support Center (DESC). Under these agreements, DESC aggregates the Service’s fuel requirements with those of other federal agencies and then solicits offers from private fuel suppliers. After a contract is reached between the Service (via DESC) and the private fuel supplier, the fuel supplier is responsible for delivering fuel to the Postal fuel tanks. The Service also utilizes a limited amount of specialized bulk fuel contracts and agreements. In typically smaller, more remote locations where DESC fuel is not available and a fueling tank is located on-site at a Postal facility, the Service enters into a Basic Pricing Agreement (BPA) with a fuel supplier to provide fuel for the on-site Postal tank. The Service spent nearly $800,000 under BPAs in 2006. Postal contracts, another specialized fueling method, are used primarily during peak seasons when demand for postal services increases beyond normal operating capacities. The Service spent nearly $1.6 million on these contracts in 2006. 3. 
Mobile Refueling for the Service’s Fleet: Mobile refueling is a method of fuel procurement used to refuel the Service’s internal fleet vehicles during non-delivery hours and is used primarily in the Southeastern United States. Mobile refueling occurs on-site at Postal Service facilities, where delivery vehicles are filled from mobile bulk tanks by contractors. A Voyager fuel card is used for these transactions. This is the most expensive refueling option primarily because of the additional service requirements. Table 2 summarizes fuel expenses for the Postal-owned fleet. The Service stated that the majority of its nearly 126,600 rural mail carriers use their own personal vehicles to carry out their postal responsibilities. Because these carriers do not operate Postal-owned vehicles, they are not eligible to use the Voyager fuel card for refueling (the Voyager card system is used for the over 20,000 Postal-owned vehicles operated by rural mail carriers). The rural mail carriers not in the Voyager program purchase fuel for their vehicles at retail fueling locations and then are reimbursed as part of the contractually agreed-upon Equipment Maintenance Allowance (EMA). In addition to fuel, the EMA also includes certain vehicle maintenance and repair costs. The most recent EMA was set at $0.52 per mile for routes over 40 miles long (routes under 40 miles are paid a higher EMA per mile). In 2006, the Service spent nearly $163 million on fuel for these rural routes. The Service also utilizes contracted vehicle fleet services to carry out some of its surface transportation needs. These contractors range from major trucking companies that provide mail transportation between the Service’s larger facilities to smaller box contractors who provide home mail delivery in rural areas. As shown in table 3, the Service spent $648 million through (1) retail fuel, (2) quarterly adjustments for fuel purchases for its contracted vehicle fleet, and (3) bulk fuel in 2006. 
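The EMA arithmetic described above can be sketched as a simple mileage calculation. This is an illustrative approximation only, not the Service's actual reimbursement formula: the $0.52-per-mile figure is taken from the text, while the route length and delivery days below are hypothetical, and the higher rate for routes of 40 miles or less is left unspecified, as it is in the source.

```python
# Illustrative sketch of the rural-carrier EMA reimbursement.
# The $0.52-per-mile rate applies to routes over 40 miles;
# shorter routes receive a higher, separately set per-mile rate.
EMA_RATE_OVER_40_MILES = 0.52  # dollars per mile

def ema_reimbursement(route_miles: float, delivery_days: int) -> float:
    """Approximate annual EMA for a route longer than 40 miles."""
    if route_miles <= 40:
        raise ValueError("routes of 40 miles or less are paid a "
                         "higher per-mile rate not given in the text")
    return round(route_miles * delivery_days * EMA_RATE_OVER_40_MILES, 2)

# A hypothetical 50-mile route driven 300 days a year:
print(ema_reimbursement(50, 300))  # → 7800.0
```

Note that this per-mile allowance covers vehicle maintenance and repair as well as fuel, so only a portion of the result represents fuel cost.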
The following is additional information on each of the contracted vehicle fleet categories: Retail Fuel for the Contracted Vehicle Fleet: Retail fuel purchased for the contracted vehicle fleet is bought in the same manner as retail fuel for the Postal-owned fleet using the Voyager fuel purchasing card. Again, like the Postal-owned fleet, Voyager fuel cards are assigned to individual vehicles and can be used at over 200,000 retail locations throughout the United States. The Service has aimed to increase the number of highway contractors using the Voyager fuel card to purchase retail fuel, as the Service can secure rebates and discounts with a greater number of Voyager card transactions. Quarterly Adjustments for the Contracted Vehicle Fleet: The Service utilizes quarterly adjustments for highway contractors who do not qualify for the Voyager card program. Contractors may not qualify for the program for reasons such as an inability to reasonably estimate the annual gallons used during a contract or because personal vehicles are used instead of Postal-owned vehicles. According to Postal Service officials, since these personal vehicles are not dedicated to Postal transportation, there is no reliable way to separate out the gallons used for Postal-related work versus the gallons used for personal travel. Under this adjustment system, gallon projections are negotiated between the Service and the individual contractor. The contractor makes the initial fuel payment at the pump. The Service then reimburses the contractor for these fuel costs based on an indexing system that adjusts for changes in fuel prices on a quarterly basis using a Department of Energy fuel index. Compensation rates are set at the beginning of the quarter and readjusted every 3 months based on the average price of fuel at the beginning and the end of the quarter. 
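The quarterly indexing mechanism described above can be sketched as follows. This is our interpretation of the description, not the Service's actual formula, and the rate and index values are hypothetical: the negotiated rate is scaled by the quarter's average fuel-price index relative to the index in effect when the rate was set.

```python
# Minimal sketch (interpretation only) of a quarterly fuel-price
# adjustment tied to a DOE-style price index.

def adjusted_rate(base_rate: float, index_start: float,
                  index_end: float, base_index: float) -> float:
    """Scale base_rate by the quarter's average index value relative
    to the index in effect when the contract rate was set."""
    quarter_avg = (index_start + index_end) / 2
    return round(base_rate * quarter_avg / base_index, 4)

# Hypothetical example: the index rises from 2.40 to 2.60 over the
# quarter; the contract was priced at an index of 2.50, so the
# quarter's average exactly offsets and the rate is unchanged:
print(adjusted_rate(1.00, 2.40, 2.60, 2.50))  # → 1.0
```

Because the average is computed only once per quarter, a contractor whose costs spike mid-quarter is not made whole until the next adjustment, which is the 3-month-lag concern raised by highway contractors.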
Highway contractors have raised concerns about the 3-month lag between adjustments and have stated that they would prefer a monthly adjustment. The Service is considering converting to a monthly DOE fuel indexing system to more accurately reflect actual fuel prices. Bulk Fuel for the Contracted Vehicle Fleet: Under the Service’s contracted bulk fuel purchasing program, the Service acts as a contract administrator between fuel suppliers and highway contractors who qualify (e.g., they are required to provide and maintain their own bulk fuel storage tanks). The Service combines the volume of its contractor bulk purchases and solicits and awards agreements with fuel suppliers. Fuel suppliers directly bill the highway contractors for the fuel, and the Service subsequently reimburses the highway contractors for the fuel used in fulfilling their obligations with the Service. In 2006, there were 83 locations nationwide where contractor bulk fuel tanks were located, with Texas, Michigan, and California having the most sites—8, 6, and 6, respectively. The Service relies solely on contractors for its air transportation services and, as part of those contracts, estimated that it spent over $551 million on fuel in 2006. The majority of this annual cost is for jet fuel, which is included in the contracts and adjusted monthly according to changes in the Producer Price Index. The Service also had limited fuel spending, about $14 million, on rail and water transportation. As illustrated in the examples above, the Service uses a variety of fuel procurement methods for transportation fuels. According to Service officials, they select procurement methods depending on various factors such as price, availability, supply, and location. A Service official stated that the various fuel procurement methods, in order of cost-effectiveness, are: 1. 
Bulk fuel purchased through DESC is the least expensive method because of DESC’s ability to aggregate purchases, and the fuel is purchased without any taxes included. 2. Bulk fuel purchased for the highway contract routes because it is bought wholesale. 3. Voyager retail purchases because of the associated volume discounts, rebates, and state excise tax exemptions. 4. Fuel purchased as part of the rural carrier Equipment Maintenance Allowance because it is tied to a contractually-agreed upon a mileage reimbursement. 5. Mobile refueling, which tends to be $0.30 to $0.40 more expensive per gallon than fuel bought at a retail station due to additional costs associated with having the fuel delivered to Postal facilities. The remaining portion of the Service’s $2.3 billion fuel costs—about $610 million—was used to heat and operate the over 34,000 facilities it occupies nationwide. While the majority of this expense was for electricity, other fuels such as natural gas, heating oil, and propane also were used (see table 6). Postal Service officials stated that most of its 34,000 facilities it occupies are post offices under 2,500 square feet, and that the majority of its energy use is in its larger processing plants. Additional information on the Service’s efforts to control facility-related fuel cost is provided in the following section. While recent fuel cost increases have pressured the Service’s financial condition, the Service was able to overcome these increases and achieve net income from operations. Rising fuel prices—particularly for gasoline, diesel, jet fuel, and natural gas—have been the primary driver of the Service’s recent transportation and facility fuel costs increases. The Service remains highly vulnerable to fuel price fluctuations, due in part to its fuel purchasing process, which involves buying fuel as it is needed, typically at retail locations. 
The Service is challenged by the need to meet its universal service requirements while being unable to use fuel surcharges. Rising transportation costs accounted for roughly 18 percent of the operating expense increase in 2006—largely due to rising fuel costs—while compensation and benefit growth accounted for 68 percent of this increase. Growth in compensation and benefit costs was also tied to fuel costs, which are included in the calculation of cost-of-living adjustments contained in union contracts. The Service was able to absorb these cost pressures through cost containment efforts inside and outside of the fuel program, as well as through increased revenues from the January 2006 rate increase, allowing it to achieve a positive net income from operations. Over the last 2 years, the Service has experienced a significant escalation in its fuel costs (see table 7). Fuel costs for each of the Service’s transportation areas have continued to increase over the last 2 years (see table 8). Highway and air transportation costs continue to be responsible for the majority of this increase. While some of the fuel cost increase can be attributed to volume and delivery point increases, Postal officials stated that rising fuel prices were the primary driver behind this cost increase. Postal Service transportation relies heavily on diesel, gasoline, and jet fuel, and over the course of the last 3 years, prices for these fuel types have generally increased (see fig. 6). Analysis of the Postal-owned fleet’s fuel cost and consumption history contained in GSA’s annual Federal Fleet reports confirms that price increases, rather than consumption, drove fuel cost increases. As shown in figure 7, fuel costs for the Postal-owned vehicle fleet increased 19 percent from 2005 to 2006, while consumption decreased by 5 percent. The Service has cost reduction, savings, and avoidance programs in place that have helped it offset these rising fuel costs.
Some of these programs, such as the Voyager program, have been in place for a few years, but others have been developed more recently. Descriptions of some of the Service’s cost-savings initiatives are provided below:

Tax exemption and recoupment: Fuel purchased by Service employees for Postal-owned vehicles at retail fueling stations is exempt from state taxes where allowed by law. Largely through the Voyager card program, which began in 2000, the Service has been able to more effectively apply its exemption from paying these taxes at the pump and to recoup tax payments where taxes either were inadvertently paid or the tax exemption was not allowable at the pump under the applicable state law.

Highway contractor bulk fuel: Savings are derived in one of two ways: (1) the savings achieved when a contractor is brought into the bulk fuel program—purchasing fuel in bulk is less expensive for the Service than purchasing it at retail; and (2) the costs that are avoided when the Service finds that a highway contractor uses fewer gallons than what is listed in its contractual agreement—the Service does not pay for the gallons that are not used, and thus avoids that fuel cost.

Highway contractor retail: Savings are achieved in one of two ways, the first of which is through a contract adjustment that occurs when the Service brings highway contractors into the Voyager card program. Under the previous system, these contractors claimed gallons as part of their fuel expense line. The Service counts the gallons it no longer pays for under this line item as savings. The second cost containment strategy is similar to that for the highway contractor bulk fuel program, in that the Service claims cost avoidance when highway contractors using the Voyager card use fewer gallons than what was originally estimated in their contractual agreement.
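The cost-avoidance mechanism described for both the bulk and retail contractor programs reduces to the same calculation: the Service does not pay for the gap between contracted and actual gallons. The function name and figures below are hypothetical illustrations, not reported data.

```python
# Hypothetical illustration of contractor fuel cost avoidance:
# the Service pays only for gallons actually used, so any shortfall
# against the contracted estimate is avoided cost.

def fuel_cost_avoided(contracted_gallons, actual_gallons, price_per_gallon):
    """Dollars avoided when actual use falls below the contracted estimate."""
    unused_gallons = max(contracted_gallons - actual_gallons, 0)
    return unused_gallons * price_per_gallon

# A contractor estimated 12,000 gallons but used 10,500 at $2.60/gallon:
print(fuel_cost_avoided(12_000, 10_500, 2.60))  # 3900.0
```

If a contractor uses more than the estimate, no cost is avoided; the calculation floors at zero rather than charging the contractor.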
Voyager rebates and discounts: The Service is able to achieve cost reductions in two ways for its Postal-owned vehicle fleet fuel purchases made using the Voyager card. First, the Service is able to qualify for rebates from the GSA portfolio of government-sponsored credit cards through the use of the Voyager card. These rebates are based on the volume of fuel purchases and the promptness of the Service’s repayment. Second, the Service is able to secure discounts with participating retailers due to the large amounts of fuel needed by the Postal fleet.

Holiday fuel savings: During the peak holiday season, the Service contracts separately for the fuel needed for its dedicated air network. In doing so, the Service consolidates fuel volumes to gain a lower price. The savings amount reflects the price difference.

The table below shows that the Service reported transportation fuel-related savings of over $53 million in 2006, with the majority of these savings achieved through the tax exemption and recoupment efforts. The Service’s facility-related fuel costs have also increased recently. Spending on these fuels—which include electricity, natural gas, heating oil, propane, and steam—increased in each of the last 2 years (see table 10). While a Service facility official stated that consumption of utilities and heating fuel may have increased due to operational requirements such as new equipment and safety and security concerns, the official attributed most of the increase to rising prices. In particular, expenses for natural gas were responsible for the largest percentage growth. For example, figure 9 shows that the price for natural gas peaked in November 2005. To help offset these rising costs, the Service reported achieving nearly $18 million in cost savings in 2006 from various facility fuel-related initiatives.
As shown in table 11, most of these cost savings were achieved in one of two ways: (1) SES contracts—a type of public-private partnership used to promote energy conservation and achieve cost savings that will be explained in more detail in the subsequent section—or (2) aggregated utility purchases. In select locations (e.g., within a specific local utility service area or within a particular state), the Service has achieved economies of scale and lower rates by aggregating electricity and natural gas purchases. Outside of these two main areas, the Service has achieved savings through other actions such as installing occupancy-sensor light switches, which the Service reported saved over $45,000 a year. Table 12 shows that the Service’s facility fuel-related cost-savings targets have been met and exceeded in each of the last 3 years. A Postal Service energy official stated that the targets decreased over that time due to sustained volatility in the electric and natural gas markets as well as declining opportunities within these deregulated markets. Recent fluctuations in transportation and facility fuel prices have revealed the Service’s vulnerability to fuel price volatility. The Service remains highly vulnerable to fuel price fluctuations, due in part to its fuel purchasing process. A fuel procurement official at the Service stated that price does not lead to reduced fuel consumption and provided the following example—if the Service needs 1 million gallons of fuel to meet its universal service requirements, it will need that amount regardless of whether the fuel price is $2 a gallon or $3 a gallon. Furthermore, the Service does not have fuel storage facilities available to purchase large quantities of fuel when the price is lower and hold it in reserve. The Service is also vulnerable to rising fuel prices through the cost-of-living adjustment calculation used in its union contracts.
These COLAs are based on changes in the Consumer Price Index, which contains a fuel component. Another source of vulnerability in its fuel program is that, while other businesses may use fuel surcharges to help offset rising fuel prices, the Service cannot. As such, the Service must absorb cost increases due to rising prices while meeting its universal service requirements. While fuel cost increases pressured overall fuel-related transportation and facility costs, the Service was still able to achieve positive financial results in 2006. Fuel expense is a key component of the following transportation and facility cost categories included in its monthly Financial and Operating Statements:

Transportation: The fuel component of the Transportation category includes gasoline, diesel, and other transportation-related fuels used to support the air, rail, and water transportation networks, as well as a significant portion of its highway transportation needs. Fuel expenses accounted for nearly 21 percent of these Transportation expenses in 2006. The non-fuel component includes related contractual payments and terminal dues.

Vehicle Maintenance Services: The fuel component includes some fuel purchased at retail locations. Fuel expenses accounted for about 50 percent of Vehicle Maintenance Services expenses in 2006. The non-fuel component is the expenses associated with maintaining Postal vehicles (e.g., oil changes, repairs, etc.).

Utilities and Heating Fuel: The fuel component is the fuel used to heat and operate Postal facilities (e.g., electricity, natural gas, heating oil, etc.). Fuel expenses accounted for over 90 percent of Utilities and Heating Fuel expenses in 2006. The non-fuel component is expenses for sewer services and trash removal.

Rural Carrier Equipment Maintenance Allowance (EMA): The Service reimburses rural carriers outside of the Voyager program for fuel expenses as part of the EMA.
Fuel expenses accounted for nearly 26 percent of the EMA in 2006. The vehicle equipment and maintenance expenses are the non-fuel components of the EMA. Table 13 shows that costs have continued to increase for the three major fuel-related line items, all of which were over budget in 2006. Rising fuel prices were the primary driver of the recent cost growth in these categories and the reason the Service stated that it was unable to offset Transportation cost increases. In setting the budgets for 2006, the Service set aside funding in the event that fuel prices or other unplanned events had an adverse impact on Postal finances. As Service officials monitored the impact of rising fuel costs throughout the year and saw that the fuel-related cost components were exceeding budgeted targets, the Service had to utilize these reserve funds and make budget adjustments nationwide. Similar cost growth also occurred for the Service’s overall operating expenses. The Service’s operating expenses grew by $3.4 billion in 2006, which was the third consecutive year of growth. While rising transportation costs accounted for roughly 18 percent of the operating expense increase in 2006—largely due to rising fuel costs—compensation and benefit growth accounted for 68 percent of this increase (see table 14). Postal officials attributed a portion of the increase in compensation and benefits to Cost-of-Living Adjustments (COLA) tied to increases in fuel costs. This expense growth, however, was (1) somewhat tempered by the Service’s ability to achieve productivity improvements throughout the year and (2) offset by the growth in revenues largely from the January 2006 rate increase.
In addition to the $71 million in costs avoided through the previously mentioned fuel-related initiatives, the Service reported avoiding nearly $185 million through other cost-savings and productivity improvement efforts, which included various operational efficiencies as well as automation and equipment enhancements. Operating revenue growth was the primary reason behind the Service’s financial success in 2006. These revenues grew by 4.0 percent ($2.7 billion), largely due to the January 2006 rate increase. This increase followed operating revenue growth in the previous 2 years, largely due to growing mail volumes. In each of the last 3 years, the Service was able to report net income from operations. In 2004 and 2005, the Service benefited from a transitory boost provided by 2003 pension reform legislation that changed its pension obligations. As table 15 shows, the Service achieved net incomes of $3.1 billion and $1.4 billion during that time. This past year was the first in which the Service was required to make annual escrow payments as part of the 2003 pension legislation. Although the Service’s net income was $900 million, the Service reported a $2.1 billion overall deficiency after the $3.0 billion escrow payment. The Service borrowed $2.1 billion, in part to cover the required escrow payment. The Postal Accountability and Enhancement Act, enacted in December 2006, repealed the escrow requirement and designated that funds would instead be allocated to prefund retiree health benefits. The Service has taken actions in certain areas, such as implementing its Voyager fuel card program, bulk purchasing, and SES contracts, that have improved its fuel procurement and consumption, as well as its ability to manage fuel costs and risks.
Some of these actions appear generally consistent with practices (1) advocated by leading organizations related to aggregating purchases, improving organizational structure, and utilizing public-private partnerships and (2) federal conservation requirements contained in EPAct. We also identified areas where more actions could be taken to identify further cost-saving opportunities and meet updated federal fuel consumption requirements related to reducing reliance on petroleum-based fuels. For example, the Service does not have information on the fuel consumed as part of its air transportation contracts or fuel consumed as part of heating and operating the majority of its over 34,000 occupied facilities. This lack of information is inconsistent with tracking and monitoring practices advocated by leading organizations in that it inhibits the Service’s understanding of the extent to which consumption is changing, how consumption has impacted overall fuel costs, and potential opportunities to reduce costs and/or consumption. Furthermore, financial and operational limitations related to alternative fuel usage may limit the Service’s ability to reduce reliance on petroleum-based fuels as required by EPAct 2005. Addressing these issues, as well as continuing to look for additional cost-saving and risk mitigation opportunities, will be important to assist the Service in managing its vulnerability to fuel price volatility. Based on information gathered from fuel officials at DOD, GSA, and DOE; discussions with an expert on purchasing price-volatile commodities; and our past work, we identified key practices advocated by leading organizations that can be applied to the Service’s fuel-related activities. We also reviewed the federal energy conservation requirements applicable to the Service as part of EPAct 1992 and 2005. We grouped these practices into two major areas: (1) procurement and (2) consumption. 
We have issued a number of reports discussing the actions that leading private-sector organizations have taken to improve their purchasing, and how some of these actions can be effective for federal agencies. We have also issued a framework for assessing the acquisition function at federal agencies. Many of the actions we reported on revolve around implementing a strategic approach to procurement—one that includes the following key practices and principles:

Aggregating purchases to leverage buying power and size: Organizations should look for opportunities to aggregate purchases, which allows them to leverage their buying power and size and may result in better prices due to volume discounts, more stable prices, and improved service. In a 2003 report, we noted that leading private-sector organizations reported saving hundreds of millions of dollars by leveraging their spending. Furthermore, vehicle fleet and facility energy managers from the General Services Administration and fuel procurement specialists at DESC stated that aggregating purchases has resulted in better prices and service from fuel and energy suppliers.

Enhancing organizational structure: We reported that leading companies found it necessary to change their business processes, organizational structure, and employee roles and responsibilities to effectively manage and coordinate their purchases. Leading organizations provide clear and strong leadership through such mechanisms as establishing goals and prioritizing initiatives that will enhance accountability for performance. We have also reported on the importance of establishing commodity-specific managers. Considering the fluctuations of fuel and utility prices, it is important to have officials who consistently monitor and track market changes for these goods in order to make informed purchasing decisions.
Use public-private partnerships: We have reported that leading organizations have found that more cooperative business relationships with suppliers have improved their ability to respond to changing business conditions and have led to lower costs. Over 20 years ago, federal government agencies were encouraged to utilize an alternative source of funding for investments aimed at promoting energy-efficient projects. Under these projects, a private contractor would identify, design, install, and finance energy conservation measures in federal buildings in exchange for a share of the resultant energy cost savings, which would be paid back to the contractor over a set period of time. These alternative funding mechanisms take advantage of public-private partnerships to provide incentives for cost savings and reduced energy consumption. These contracts have been advocated by the President and the Department of Energy as an effective energy conservation measure, and EPAct 2005 recently extended the authority for these financing mechanisms through 2016.

Tracking and monitoring: A key principle applied by leading companies is obtaining improved knowledge of what is being spent by an organization. This knowledge is gained through the implementation of processes and systems to collect, maintain, and analyze data. These data give the organization the ability to track and monitor performance over time and to identify cost-saving opportunities. We have reported on how leading private-sector companies have focused on gaining knowledge about how much is being spent, for what goods and services, who the buyers are, and who the suppliers are, thereby identifying opportunities to leverage buying, save money, and improve performance, and on how these principles can apply to federal entities.
A key benefit derived from tracking and monitoring is gaining an understanding of an organization’s fuel consumption: what types of fuel are being consumed, how much, how these fuels are used (i.e., for transportation or facilities), and when they are needed (i.e., throughout the year or seasonally). EPAct 2005 contained specific provisions aimed at improving the tracking and monitoring of energy usage at federal facilities. Agencies are to begin taking actions to implement electric metering systems throughout their facilities, with the goal of having this technology in all federal buildings by October 1, 2012. The federal government, through legal requirements contained in EPAct 1992 and 2005 and other guidance, continues to promote actions aimed at reducing federal fuel consumption. EPAct 1992 and 2005 established federal energy conservation efforts that target, among other things, the need for federal agencies to take steps to reduce reliance on and use of petroleum-based fuels. Key provisions in EPAct 1992 were aimed at reducing the nation’s dependence on foreign oil by promoting alternative fuel vehicles (AFV) in the federal government’s various vehicle fleets and by promoting fuel diversification. EPAct 1992 required federal agencies, including the Service, to increase their AFV purchases when buying new vehicles, and EPAct 2005 details requirements for alternative fuels to be used in these vehicles. EPAct 2005 also sought to set conservation goals for all federal agencies, including the Postal Service.
Provisions within EPAct related to facility energy consumption include the following:

Federal agencies are to reduce their annual energy consumption by 2 percent per year from 2006 to 2015, based on the baseline year of 2003, resulting in an overall energy reduction of 20 percent by 2015.

New federal buildings must be designed to achieve energy consumption levels that exceed industry or international standards by at least 30 percent, provided the standards would be life-cycle cost-effective for the facility.

In addition to these legal requirements, other federal guidance exists to reduce fuel consumption. For example, in January 2007, President Bush issued Executive Order 13423 to strengthen federal agencies’ environmental, energy, and transportation management. Major provisions of this order included:

Vehicles: Use certain hybrid vehicles when commercially available at a reasonable cost.

Petroleum conservation: Reduce total petroleum consumption in vehicle fleets by 2 percent annually through 2015.

Alternative fuel use: Increase alternative fuel consumption by 10 percent annually.

Energy efficiency: Improve energy efficiency by 30 percent by 2015.

Although the Service is not subject to the executive order, this federal policy provides guidance on goals and practices that could be replicated to improve transportation and facility energy efficiency. DOE has also provided guidance aimed at improving vehicle fleet fuel efficiency and, in general, reducing petroleum-based fuel consumption. Some examples of these practices include observing posted speed limits, removing excess weight from vehicles, keeping tires properly inflated, and performing regularly scheduled preventive maintenance. We also reported in 2003 that the use of bypass filters in conjunction with traditional oil filters is another option to improve vehicle fleet efficiency by substantially reducing the number of oil changes for certain federal agencies, including the Service.
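The facility consumption provision implies a straight-line schedule: 2 percent of the 2003 baseline per year from 2006 through 2015 accumulates to the stated 20 percent. A quick check of that arithmetic, using a hypothetical baseline of 100 index units and assuming the fixed-baseline (non-compounding) reading of the provision:

```python
# Check of the EPAct 2005 facility schedule: 2 percent of the 2003
# baseline per year from 2006 through 2015 yields a 20 percent
# overall reduction. The baseline value is hypothetical, and the
# fixed-baseline (non-compounding) interpretation is assumed.
baseline_2003 = 100.0  # hypothetical consumption, index units

targets = {year: baseline_2003 * (1 - 0.02 * (year - 2005))
           for year in range(2006, 2016)}

print(round(targets[2006], 1))  # 98.0 -> 2 percent below baseline
print(round(targets[2015], 1))  # 80.0 -> 20 percent below baseline
```

Measuring each year's 2 percent against the fixed 2003 baseline, rather than compounding against the prior year, is what makes the ten annual steps sum exactly to 20 percent.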
We assessed the Service’s actions to control fuel costs and mitigate fuel cost risk against these leading practices and EPAct requirements. The Service’s actions generally appear to be consistent with the leading practices for aggregating purchases, organizational change, and utilizing public-private partnerships. Furthermore, the Service has generally complied with the legal provisions contained in EPAct 1992 regarding the purchase of alternative fuel-capable vehicles. Issues remain, however, related to tracking and monitoring fuel consumption data and reducing reliance on petroleum-based fuels that may hinder the Service’s ability to achieve cost savings and/or meet updated federal requirements contained in EPAct 2005. The Service’s actions related to aggregating its purchases and leveraging its buying power appear consistent with practices advocated by leading organizations. As table 16 illustrates, the Service has implemented multiple actions aimed at aggregating fuel purchases, both internal and external to the Postal Service. The changes that the Service has made to its organizational structure appear consistent with leading practices because it reorganized to include commodity-specific (fuel) experts and established a leadership position to develop and coordinate the implementation of the Service’s energy strategies. In 2002, the Service created its fuel purchasing organization as part of its efforts to incorporate Supply Chain Management principles. A 2001 report by the Service’s Office of Inspector General (OIG) recommended that the Service reexamine its fuel management systems. A consultant-produced Fuel Management Business Plan study, completed in response to the OIG audit, recommended that the Service centralize its procurement and management of fuels.
The Service thus created the Transportation Asset Management group, which is dedicated to managing and conducting the Service’s transportation-related fuel purchasing activity—for both the Service and its transportation contractors—as well as for heating oil. Although heating oil is used in facility operations, the Service included it in the Transportation Asset Management group because it is a petroleum-based fuel. During 2001, a procurement team focused on utilities was also created. The Office Products and Utilities Category Management Center was developed to manage utility procurement for Postal facilities throughout the United States. The main energy sources this group is responsible for are electricity, natural gas, water, and steam. This group also manages all of the Service’s SES contracts with private contractors. The Service’s recent organizational changes related to its energy management also appear consistent with leading practices related to enhancing leadership and establishing an organizational strategy. In July 2006, the Service appointed an Executive Director for Energy Initiatives. The current Executive Director stated that her responsibilities will include:

Developing and managing the Service’s energy management strategy. The Executive Director anticipates completing the Service’s energy management strategic plan by mid-2007, which is expected to focus on three key areas: (1) fuel purchasing using supply management, (2) fuel demand for the Service’s facilities and its transportation networks, and (3) risk management.

Serving as the Service’s primary point of contact for all other government agencies—federal, state, and local—and the private sector regarding the Service’s fuel and energy usage. The relationships built between the Service, other government agencies, and private-sector organizations are designed to keep the Service apprised of any opportunities or leading practices that exist to reduce overall energy consumption.
The Service’s continued utilization of public-private partnerships through Shared Energy Savings contracts appears consistent with some elements of leading practices and with federal policies in this area. These contracts are an alternative source of funding for energy-efficient investments. Under these contracts, a private entity (typically an energy company) funds the initial installation of an energy savings project at a Postal facility. Energy officials at the Service stated that it has advocated the use of these contracts since 1992 as an effective alternative financing method and energy conservation program, and that these projects are an investment aimed at reducing consumption. The savings achieved as a result of these projects are initially used to pay back the private entity for the installation costs—typically over a 10-year period. According to the Service, savings could accrue (1) at the end of this payback period, (2) when the Service pays the outstanding balance prior to the contract’s expiration using funding from other areas, or (3) during the payback period when, as consumption is reduced, actual energy prices exceed the forecasted prices. Table 17 summarizes the Service’s SES contract program, while table 18 shows that many 2006 SES projects are occurring at sites in the Pacific and Southeast areas. Some of the Service’s SES projects have been nominated for DOE’s Federal Energy Efficiency Awards, and DOE has recognized that benefits have been derived from the Service’s contracts. Our past work on similar energy savings contracts for other federal agencies reaffirmed that these types of contracts can offer various benefits, including energy savings and more reliable equipment, but noted that attention is needed when evaluating the contracts’ expected cost savings.
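The SES repayment structure can be sketched as a simple payback calculation. All figures below (installation cost, annual savings, contract length) are hypothetical, and real SES contracts split payments in more complex ways:

```python
# Hypothetical sketch of Shared Energy Savings (SES) contract cash
# flows: the private entity funds the installation and is repaid out
# of the resulting energy savings; the Service keeps whatever savings
# remain after the contractor is fully repaid. This assumes all
# savings go toward repayment first, a simplification of real terms.

def ses_savings_to_service(install_cost, annual_savings, contract_years):
    """Savings the Service retains over the contract term."""
    total_savings = annual_savings * contract_years
    return max(total_savings - install_cost, 0)

# An $800,000 project saving $120,000 a year on a 10-year contract
# repays the contractor during year 7 and leaves $400,000 for the
# Service over the remainder of the term:
print(ses_savings_to_service(800_000, 120_000, 10))  # 400000
```

The sketch also illustrates the caution noted above about evaluating expected savings: if the annual savings estimate is optimistic (say, $60,000 rather than $120,000), the contractor may not be repaid within the term and the Service retains nothing.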
We also noted that financing energy savings projects through these alternative funding mechanisms may be more expensive than up-front funding and that the performance of these third-party participants should be carefully monitored and verified. The Service’s limited tracking and monitoring of fuel consumption information for the majority of its fuel spending is inconsistent with leading practices (see table 19). Because of this lack of information, the Service does not have the fuel data necessary to gain a complete understanding of the extent to which consumption is changing and how consumption has affected overall fuel costs, or to identify potential opportunities to reduce consumption. On the transportation side, the Service has no mechanisms or systems in place to monitor fuel usage, except for fuel purchases through its Voyager and holiday jet fuel programs (these purchases combined account for about 35 percent of its annual transportation-related fuel expenses). For example, the Service does not have consumption information for its nearly 55,000 delivery routes served by rural carriers who use their own personal vehicles. The Service stated that it estimates fuel usage in some of these instances. Furthermore, the air transportation contracts pose greater difficulties in this area because fuel purchases are tied to a contract measure such as cubic feet or pounds of cargo. These measures are needed to estimate fuel consumption for the Postal-related cargo because these flights may not be dedicated to Postal Service transportation. The Service also does not centrally track the amount of fuel used to heat and operate its nationwide facility network. For example, the Service currently has metering equipment at only 25 of its over 34,000 facilities. The Service tracks and monitors the costs that are paid for its electricity, natural gas, and heating oil, but does not track consumption amounts.
The Service has shown that in areas where it tracks and monitors fuel information, positive results can be achieved. For example, the Service has been able to increase its tracking and monitoring through the Voyager program and holiday jet fuel contracting on the transportation side. The Voyager program’s ability to gather, track, and monitor data has resulted in direct fuel cost savings for the Service. The card provides significant amounts of transactional data such as cost, location, fuel type, timing, and quantity that is fed into two information systems—the eFleet program for the Postal-owned fleet and the eFuel system for the highway contractor fleet. According to Service officials, these systems require the monthly reconciliation of all purchases and include programs designed to monitor potential fraud and abuse. These mechanisms contribute to cost savings and avoidance. Furthermore, data collected from these systems have been used by the Service to increase the accuracy of data for highway contractor fuel consumption. Improved data tracking and monitoring for the Service’s holiday jet fuel has resulted in more accurate contracting and reported cost savings. On the facility side, the SES program requires specific tracking and monitoring of the overall performance (costs, savings, and changes in consumption) of these contracts. Furthermore, utility companies in the Pacific and New York areas have provided the Service metering equipment to track its fuel usage at designated Postal Service facilities. As discussed earlier, the Service has set annual facility fuel-related cost-saving targets that have allowed it to monitor and evaluate the performance of these initiatives. Considering the positive results associated with the tracking and monitoring under the Voyager card and SES programs, similar efforts could be beneficial in identifying additional fuel-related cost-saving opportunities.
For example, a GSA building official stated that its efforts to track consumption data showed that nearly 60 of its owned or leased facilities accounted for almost half of its energy costs. GSA was able to target these facilities for its energy efficiency investments. The upcoming EPAct metering systems installation requirements provide an opportunity for the Service to make additional progress in tracking and monitoring its facility fuel consumption. The Executive Director for Energy Initiatives stated that financial and operational considerations must be weighed given the composition of the Service’s facility network—34,000 facilities nationwide, many of which are less than 2,500 square feet. The Executive Director stated that the Service has some fuel information that provides guidance on which facilities are key candidates for energy efficiency investments. Specifically, the Service has identified 543 of its largest consuming facilities and is performing further reviews of these facilities. The Executive Director acknowledged, however, that improvements to the Service’s fuel information are needed and will be included as part of the Service’s upcoming energy strategy. More complete fuel cost and consumption information at its facilities would allow the Service to gain a better understanding of where investments could be made to reduce costs and improve fuel efficiency. Although the Service has purchased thousands of AFVs to comply with provisions of EPAct 1992 aimed at reducing reliance on petroleum-based fuels, financial and operational limitations have hindered the Service’s ability to use alternative fuels in these vehicles. The Service has increased its AFV fleet by nearly 20 percent since 2000 and currently possesses one of the largest alternative-fuel-capable fleets in the federal government, with nearly 40,000 AFVs. 
The majority of these vehicles are capable of operating on ethanol or compressed natural gas (CNG); some others operate on electricity or liquefied petroleum gas. Most of the Service’s AFVs, however, do not operate using alternative fuels, but primarily use gasoline and diesel fuel. Alternative fuels accounted for roughly 1.5 percent of the total fuel consumed by the Service’s internal fleet in 2006. Financial and operational limitations associated with higher fuel and vehicle prices, lower fuel efficiencies, and an insufficient nationwide alternative fueling infrastructure have limited the Service’s use of alternative fuels. Postal Service officials stated these issues made operating its fleet on alternative fuels cost prohibitive. For example, these officials stated the following: The Service found that ethanol 85 (E85) is typically 17 percent more expensive per gallon than gasoline, is 26 percent less fuel efficient, and may result in higher maintenance costs because it is corrosive. There is a limited supply of AFVs available for purchase by the Service, and those that are available and meet the EPAct requirements contain larger engines than generally needed for delivery operations. These unnecessarily large engines reduce the Service’s miles per gallon whether the vehicles run on gasoline or alternative fuels. The limited nationwide alternative fuel infrastructure has hindered some of the Service’s previous alternative fuel efforts. For example, the Service converted some of its vehicles to operate on CNG in the early 1990s. While this was successful in the short term, manufacturers that the Service worked with to produce the CNG vehicles went out of business or simply stopped producing the vehicles, and many fueling stations that had provided CNG stopped selling it, leading to a shortage in the fuel. 
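The price and efficiency figures cited by these officials compound when converted to a per-mile basis. The sketch below is illustrative only: the base gasoline price and fuel economy are hypothetical values, not Service data; only the 17 percent price premium and 26 percent efficiency loss come from the officials' statements.

```python
# Illustrative only: hypothetical base price and fuel economy (not Service
# figures), combined with the stated E85 differences -- typically 17 percent
# more expensive per gallon and 26 percent less fuel efficient.

GAS_PRICE = 3.00    # $/gallon, assumed for illustration
GAS_MPG = 10.0      # miles per gallon, assumed for illustration

E85_PRICE = GAS_PRICE * 1.17        # 17 percent price premium
E85_MPG = GAS_MPG * (1 - 0.26)      # 26 percent efficiency loss

def cost_per_mile(price_per_gallon: float, miles_per_gallon: float) -> float:
    """Fuel cost incurred per mile driven."""
    return price_per_gallon / miles_per_gallon

gas_cpm = cost_per_mile(GAS_PRICE, GAS_MPG)
e85_cpm = cost_per_mile(E85_PRICE, E85_MPG)
premium = e85_cpm / gas_cpm - 1     # the two effects compound

print(f"gasoline: ${gas_cpm:.3f}/mile, E85: ${e85_cpm:.3f}/mile ({premium:.0%} higher)")
```

Under these assumptions the two percentages combine to a per-mile fuel cost roughly 58 percent higher for E85 than for gasoline, which helps explain why officials described alternative fuel operation as cost prohibitive.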
Furthermore, even where alternative fuel pumps are available, their distance from a Postal Service facility may be too great to justify the costs to refuel at that pump. Service officials stated that only 0.6 percent of service stations across the country offer alternative fuels. Our past work, as well as officials from DOE and GSA, has raised similar financial and operational limitations. We recently issued a report on the challenges associated with using alternative fuels, including that the nationwide alternative fuel infrastructure is poor to nonexistent throughout most of the country. For example, we reported that there are a limited number of E85 fueling stations nationwide (mostly concentrated in the upper Midwest) and that E85 cannot use the same infrastructure as gasoline because it is more corrosive. As of January 2007, the DOE Web site indicated that only 1,003 E85 stations were located throughout the country. Recent studies conducted by DOE have found similarly decreased fuel efficiency and increased costs for ethanol. DOE is currently in the process of finalizing guidance on a waiver to EPAct for federal fleets based on factors that may include alternative fuel price and travel distance. A Service engineering director stated that discussions about these financial and operational limitations have taken place among the Service, DOE, and automobile and fuel industry officials, but progress has been difficult to achieve. This official stated that the Service’s demand for AFVs and alternative fuels is not large enough to result in significant changes to the availability and price of AFVs or to the nationwide alternative fuel infrastructure. We are continuing to look at issues surrounding the nationwide alternative fuel infrastructure and plan to issue a report in mid-2007. Service officials also noted that they continue to look at alternative fuel vehicles and other options to improve vehicle fuel efficiency. 
For example, the Service has recently focused testing on hybrid vehicles. These officials noted, however, that while the mail delivery tests using hybrid vehicles are going very well and hybrids are well suited to the stop-and-go driving of mail delivery routes, hybrid vehicles are not considered AFVs and are ineligible for EPAct 2005 credit because they are powered primarily by standard gasoline. Nevertheless, the use of hybrids is consistent with the President’s recent executive order requiring federal agencies to cut their energy consumption by, among other actions, using hybrid cars. Officials also noted that the Service takes other actions to increase fuel efficiency, such as performing regularly scheduled vehicle maintenance (oil changes, tire pressure checks, etc.) consistent with the vehicle manufacturer’s specifications. Another fuel efficiency option noted by a vehicle operations official is that most of the larger vehicles in the Service’s fleet have had bypass filters installed to extend the intervals between oil changes. However, he stated that using bypass filters on the smaller delivery vehicles would not be cost-effective due to higher installation costs. Although the Service has taken some actions to mitigate fuel risk and contain costs that are generally consistent with practices advocated by leading organizations, it continues to be vulnerable to fuel price fluctuations and challenged to meet the more stringent 2005 EPAct requirements. The Service recognizes these challenges and is in the process of developing a strategic plan to guide future actions in this area. Immediate action is needed, however, to address deficiencies related to insufficient consumption data in some transportation and facility areas. Without sufficient consumption data, the Service will have difficulty understanding fuel consumption changes and identifying opportunities for additional cost savings. 
We recommend that the Postmaster General take actions to improve tracking and monitoring of transportation and facility-related fuel consumption data. Taking immediate actions to address the lack of consumption data will be important, even as the Service is developing a new energy strategy. We provided a draft of this report to the Service for its review and comment. The Service provided its comments in a letter from the Senior Vice President, Operations, dated January 19, 2007. These comments are summarized below and included in appendix II. The Service agreed with our findings and recommendation, and stated that it has started the process to improve the information systems needed to capture fuel consumption information. In its comments, the Service stated that it plans to increase the number of Postal-owned vehicles used by rural carriers. These efforts should increase the Service’s ability to track and monitor fuel usage due to the use of Voyager cards in Postal-owned vehicles. The Service also stated that it will be challenged by the EPAct 2005 requirements. For example, the Service commented on the limited availability of alternative fuel, and in particular, the increased cost and decreased efficiency associated with E85. We recognized these issues in our report and we are currently conducting additional work on alternative fuel infrastructure issues that is scheduled to be completed in mid-2007. The Service also commented on the financial challenges associated with the EPAct 2005 advanced metering requirement. It stated that many of its facilities are less than 10,000 square feet and requiring meters at all locations would not provide a reasonable return on investment. EPAct 2005 established a process for agencies to seek waivers to the metering requirements, DOE has established criteria for doing so, and the Service has indicated that it may seek waivers for certain facilities. 
Although we recognize that these financial and operational challenges exist, the Service has an opportunity to build on its positive efforts and make additional progress in meeting these requirements. For example, the Service reported installing metering systems at only 25 of its 34,000 facilities, and the Service could extend this practice to other facilities. Service officials stated that they have identified 543 of the Service’s largest energy consuming facilities, and the information gathered from analyzing these facilities may lead to practices that can also be applied to smaller facilities. Furthermore, the recent attention from the Administration and Congress on alternative fuel and energy conservation issues may provide an impetus for addressing some of these limitations that have hindered the Service’s progress. The Service stated in its comments that it would be pleased to contribute to a national strategic plan for meeting the EPAct alternative fuel consumption requirement. We are sending copies of this report to the Chairman of the House Committee on Oversight and Government Reform; the Chairman and Ranking Member of the House Subcommittee on the Federal Workforce, Postal Service, and the District of Columbia; the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member of the Senate Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security; the Postmaster General; and other interested parties. We also will provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at siggerudk@gao.gov or by telephone at (202) 512-2834. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix IV. For this report, our objectives were to review (1) how the Service’s fuel costs changed recently and the impact of these cost changes on the Service’s financial and operating conditions and (2) how the Service’s actions to control fuel costs and mitigate risk compare to leading practices and federal requirements. To describe how the U.S. Postal Service’s (the Service) fuel costs changed recently and what has been the impact of these cost changes on the Service’s financial and operating conditions, we first defined what would be included as fuels. For our analysis, we established the following two categories: 1. Transportation-related fuels, which included fuel used for highway, air, rail, and water transportation. The types of fuel included in this category were gasoline, diesel, jet fuel, biodiesel, ethanol, compressed natural gas, liquefied petroleum gas, and electricity. 2. Facility-related fuels, which included fuel used to heat and operate Postal Service facilities. The types of fuel included electricity, natural gas, heating oil, propane, steam, coal, and wood. We included electricity as a type of fuel because it is used both in transportation and in facility heating and operations. We also analyzed trends in fuel prices from information available from the Energy Information Administration’s Web site, as well as through other Department of Energy (DOE) sources consistent with guidance from DOE officials. We also collected data on the following areas: Fuel cost data from the Service regarding its various fuel types, purchasing methods, and transportation methods. Due to data system issues from an organizational change in 2003, the Service was only able to provide this data for most areas for 2004, 2005, and 2006. The Service stated that it needed to estimate fuel costs for multiple purchasing methods because that data is not available to it. 
For example, the Service had to estimate fuel costs for air transportation contracts. Transportation- and facility-related fuel cost-saving initiatives. Although the Service has specific definitions for its strategies to reduce, avoid, or save costs (which are explained in appendix III), for the purposes of this review, we considered them all cost-saving initiatives. Statistics from GSA’s Federal Fleet Report. Specific information on its internal vehicle fleet from standardized vehicle operations reports, as well as on its Shared Energy Savings projects from detailed presentations. Other financial and operating data from various Postal Service financial reports, including its audited year-end Annual Reports and Comprehensive Statements, monthly Financial and Operating Statements, Quarterly Reports, and Integrated Financial Plan. We assessed the reliability of the fuel cost and savings data provided by the Service for inconsistencies and missing values. In those cases where we found discrepancies, we worked with the Service to address the problems. We determined that the data were sufficiently reliable for our review. We also reviewed the Service’s procedures for documenting, measuring, and reporting cost savings for its purchasing activities as well as the methodology for specific fuel-related initiatives. For the purposes of this engagement, the procedures and methodologies appeared to be reasonable and contain appropriate levels of review. We also interviewed various Service officials, including staff from the Transportation Asset Management group, who procure petroleum-based fuels; the Office Products and Utilities Category Management Center, who procure most facility-related fuels; Vehicle Operations; Vehicle Maintenance at Merrifield, VA; Engineering at Merrifield, VA; Environmental and Energy Management; and the finance department to gather information on how the Service has been affected by rising fuel costs. 
To assess the effectiveness of the Service’s actions to control fuel costs and mitigate risk, we compared these actions against practices advocated by leading organizations that could be applied to the Service’s fuel-related activities. We reviewed information from a variety of sources. These included our past work on fuel use and consumption and procurement leading practices, which included reviewing the purchasing efforts at various federal agencies (Departments of Defense, Veterans Affairs, Health and Human Services, Agriculture, Justice, and Transportation, and the U.S. Postal Service) as well as leading private organizations that were recognized for their acquisition services (IBM, ChevronTexaco, Bausch & Lomb, Delta Air Lines, and Dell). We also reviewed the Energy Policy Acts of 2005 and 1992, particularly the federal requirements and guidance pertaining to alternative fuel vehicles and facility energy management. We also interviewed officials from Department of Defense’s Defense Energy Support Center, General Services Administration, and Department of Energy whose operations focus on fuel use; a procurement expert affiliated with the Center for Strategic Supply Research who published a report on fuel procurement practices; various executives and contractors affiliated with the National Star Route Mail Contractors Association; as well as Postal Service officials. We also conducted a review of current literature on these topics. Based on this information, we identified key practices that focused on purchasing and consumption activities. The purchasing-related leading practices we identified were aggregating purchases to leverage buying power and size; enhancing organizational structure; utilizing public/private partnerships; and tracking and monitoring fuel information. The consumption-related leading practices we identified were reducing reliance on and use of petroleum-based fuels and conserving energy use in facilities. 
We also discussed opportunities for further actions consistent with leading practices with the Service’s newly appointed Executive Director for Energy Initiatives. Our work was conducted from April 2006 to February 2007 in accordance with generally accepted government auditing standards. The Service’s purchasing organization, Supply Management, has specific procedures for documenting and evaluating the actions it takes to improve its financial condition. These procedures include actions that are taken to achieve cost savings, cost avoidance, or cost reductions. The following are the Service’s definitions of and methodology for these three cost categories. Cost Savings: Identifiable and measurable reduction in expenditures or costs that is the result of planned and deliberate supply chain management actions that return quantifiable dollar savings to the Service’s bottom line. Cost savings are the difference between baseline spend (historical, current market price, or initial supplier’s bid) accounted for in a prior or current year budget and actual spend achieved through planned and deliberate supply chain management actions for the same or comparable supplies, equipment, services, facilities, or other supply chain activities. Cost savings are only recorded in the first year of supply chain management impact. After the budget has been adjusted to reflect the cost savings, all subsequent years of supply chain management impact related to these efforts are counted towards cost avoidance (see the definition of cost avoidance below). Examples of cost savings include the following: Example A: A purchase cost reduction achieved over a historical or previously paid cost for the same products or services. Example B: An actual staffing or headcount reduction, which reduces costs. Example C: An ownership cost reduction resulting from the elimination of expenses associated with receiving, holding, and/or distributing inventory. 
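The first-year rule described above lends itself to a simple calculation. The sketch below uses hypothetical figures, and the function name is my own invention, not a Service system: the benefit is baseline spend minus actual spend, booked as cost savings only in year one and as cost avoidance in each subsequent year of impact.

```python
# Sketch of the Service's classification rule (hypothetical figures;
# classify_impact is an illustrative name, not a Service system).
# benefit = baseline spend - actual spend; "cost savings" in year 1 only,
# "cost avoidance" in later years, once the budget baseline is adjusted.

def classify_impact(baseline: int, actual: int, years_of_impact: int):
    """Split the annual benefit into first-year savings and later-year avoidance."""
    annual_benefit = baseline - actual
    savings = annual_benefit                            # year 1 only
    avoidance = annual_benefit * (years_of_impact - 1)  # years 2..n
    return savings, avoidance

savings, avoidance = classify_impact(baseline=1_000_000, actual=900_000,
                                     years_of_impact=3)
print(savings, avoidance)  # 100000 200000
```

Under this rule, a $100,000 annual benefit sustained for three years is reported as $100,000 in cost savings and $200,000 in cost avoidance, which is why the two categories must be read together when evaluating total supply chain management impact.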
Savings related solely to general market trends, supplier price changes, or reduced expenditures do not qualify as supply chain management impact. Although these savings may have a bottom line benefit that is identifiable and measurable, they do not result from the planned and deliberate action or substantial involvement of the supply management organization in enabling or leading the supply chain management initiatives. Cost Reductions: Cost reductions are identifiable and measurable cost savings that are the result of a planned and deliberate supply chain management action that returns measurable savings to the Service’s bottom line. However, instead of returning these savings to the bottom line, the Service has determined that they can be retained by the internal client/program office and reinvested to enhance related or new program initiatives. Cost Avoidance: Identifiable and measurable elimination of a new cost that would have otherwise occurred except for planned and deliberate supply chain management action. In all cases, cost avoidance results where a contractual obligation on the part of the Service has not yet been made. Cost avoidance is the difference between the average quoted, relevant market price or other acceptable industry pricing benchmark or baseline and the price paid, which could be more or less than the initial proposed price. The relevant market price is the price the Service would expect to pay in the absence of planned and deliberate supply chain management action. Cost avoidance captures the value of those initiatives that reduce the need for an expense or capital expenditure, which, unless the supply chain management action were taken, would have resulted in a higher expense or capital cost to the Service. Examples of cost avoidance include: Example A: A price reduction for a unique or first-time purchase, as well as for a purchase for which there is inadequate price history. 
Example B: A total cost of ownership analysis supporting the reuse of excess property and supplies versus purchasing new. Example C: A published supplier price increase that is negated or lowered through a particular supply chain management technique. Cost avoidance does not qualify as cost savings because the avoided cost is a “new” cost and, by definition, not included in prior year spend (or prior or current year budgets) and the avoidance has no direct dollar-for-dollar impact on the bottom line. Supply chain management impact is still created, however, because the cost avoidance minimizes or eliminates the negative impact on current or future year spend. In addition to the individual named above, Teresa Anderson, Joshua Bartzen, Kathy Gilhooly, Brandon Haller, Carol Henn, Daniel Paepke, Emily Rachman, and Karla Springer made key contributions to this report. 
For example, fuel cost growth for its vehicle fleet was due to rising prices rather than increased consumption. While fuel costs have directly pressured its financial condition, increasing compensation and benefits were the primary driver of the $3.4 billion operating expense increase in fiscal year 2006. The Service absorbed fuel cost increases through cost-containment efforts and increased revenues from the January 2006 rate increase, allowing it to achieve net income for the year. Nevertheless, the Service remains vulnerable to fuel price fluctuations, due in part to its purchasing process, which involves buying fuel as needed, often at retail locations. The Service is challenged to control fuel costs due to its expanding delivery network and inability to use surcharges. GAO found some of the Service's actions to control fuel costs to be generally consistent with procurement and consumption practices advocated by leading organizations and federal requirements for purchasing alternative fuel vehicles. However, GAO also identified areas where more actions could be taken. Taking actions to address data inconsistencies will be important, even as the Service develops a new energy strategy. These inconsistencies will limit the Service's ability to understand consumption changes and impacts and where to target potential cost-saving opportunities. Furthermore, additional progress is needed in reducing reliance on petroleum-based fuels because of the more stringent federal fuel consumption requirements that were recently passed. 
When the Congress passed title II of the Export Enhancement Act of 1992, it was concerned that the existing federal export promotion programs lacked coordination and an overall strategy. Before 1992, a business desiring to export goods or services faced a bureaucratic labyrinth of federal and state agencies to get information on markets, financing, insurance, methods, and restrictions on exporting. In addition, a business might have noted that there were over a dozen federal agencies with more than 100 export promotion programs. To improve export services, the Congress mandated, in the 1992 act, that the TPCC develop a governmentwide strategic plan that establishes priorities for federal activities supporting U.S. exports, and propose an annual unified federal trade promotion budget that supports the plan. The Congress also directed the Commerce Department to set up centralized export assistance centers. In addition, the act specified that the TPCC, chaired by the Secretary of Commerce, would report annually to the Congress on the national export strategy and its implementation. The first TPCC strategy, issued in 1993, identified impediments to effective delivery of export promotion services and made 65 recommendations to improve export promotion programs. Specifically, the TPCC expected that development of a government strategy would be helped by devising performance measures as a means to reallocate resources and attain a unified budget; establishment of partnerships with the public and private sectors would be assisted by such actions as simplifying federal services, setting up “one-stop shops” for exporters, and streamlining the export working capital programs of the Eximbank and the SBA; and provision of export services for U.S. exporters similar to those received by foreign competitors would help U.S. companies compete on a “level playing field” abroad. The TPCC has subsequently issued four further National Export Strategy reports. 
In these reports, the TPCC continued to highlight its pursuit of the major themes that emphasized achieving efficient delivery of export services. The later strategies also focused on specific initiatives such as improving trade data, establishing an advocacy network, and eliminating barriers to fair competition such as bribery and corruption. Our past work has focused on several elements of the governmentwide strategy to improve the delivery of export promotion programs. We will highlight the results of our work concerning the three broad and continuing themes of the TPCC strategy and then outline some issues that may be raised in the context of the overall governmentwide strategy. In 1996, we examined the TPCC’s progress toward establishing a governmentwide strategy for promoting exports and toward devising an annual unified federal budget to promote exports that reflects these priorities. At that time, we reported that governmentwide export promotion priorities were being identified in terms of foreign markets, export programs, and export policies and that agencies were exercising flexibility in focusing their efforts. The centerpiece of the strategy was the identification of the “big emerging markets” as priority markets for U.S. goods and services. We also examined whether the TPCC had proposed to the President an annual unified budget, as required by the Export Enhancement Act, that would support the strategic plan and eliminate funding for any areas of overlap and duplication. As we have testified in the past, one of the indicators of whether the unified budget is working would be whether the budget changed the distribution of resources to the various priorities, programs, and agencies. We found that the TPCC had prepared and included in the National Export Strategy budget presentations that displayed each member agency’s historical and prospective export expenditures on export promotion, using tables showing spending from different perspectives. 
For example, the 1997 National Export Strategy displayed the distribution of federal spending by budget authority and across various trade promotion categories, such as providing information, counseling, and export services; combating foreign export subsidies; and providing government advocacy. We observed that this step had helped foster a better understanding of federal expenditures for export promotion. However, we emphasized that performance measures would be needed to provide a basis for the allocation of export promotion resources. According to TPCC officials, they recently reviewed TPCC agency strategic plans as a step toward ensuring that the budget priorities are fully aligned with the TPCC’s commercial policy goals. Another major thrust of the TPCC’s efforts was to improve the delivery of federal export promotion services by developing greater cooperation among federal, public, and private entities. One significant effort was the creation of a nationwide network of 19 “one-stop-shops,” called U.S. Export Assistance Centers. The TPCC sought to present exporters with a “seamless” delivery of services rather than a confusing network of federal programs with multiple domestic offices. Another initiative was to help small- and medium-sized businesses by increasing the availability of export working capital and making the Eximbank’s and the SBA’s export working capital programs and procedures more streamlined, consistent, and simple. This “harmonization” initiative was to address exporter and lender concerns about overlap and confusion over federal program parameters and procedures. In creating the nationwide network of one-stop shops, representatives of the Department of Commerce and the SBA—two federal agencies with extensive export promotion field networks—were colocated and, in some cases, the Eximbank representatives were included as well. These export assistance centers were designed to (1) provide exporters with information on all U.S. 
government export promotion and export finance services, (2) assist exporters in identifying which federal programs may be of greatest assistance, and (3) help exporters make contact with those federal programs. We reviewed the implementation of the first four export assistance centers and reported to the Congress in July 1996 on both the benefits realized as well as the opportunities for improving their operations. In general, we found that staff and customers of the four centers we visited believed that colocating agency staff helped U.S. firms gain access to and become more knowledgeable about a broader range of federal export services. We also identified specific initiatives at the centers that demonstrated the potential benefits that can be derived through working more closely with federal and nonfederal partner organizations. In addition, we identified steps that were needed to improve the delivery of services. For example, we found that the export assistance centers’ Directors did not have (1) the ability to affect interagency cooperation and teamwork and (2) adequate authority over center expenditures and an export assistance centerwide accounting system that would enable them to accurately identify and allocate costs and better manage expenditures. Moreover, we found that the assistance centers did not have an integrated client tracking system. According to the TPCC, one of the greatest obstacles to increased U.S. exports faced by small- and medium-sized businesses is the lack of sufficient working capital—capital that is used to finance the manufacture or purchase of goods and services. Since the Eximbank and the SBA have programs designed to increase the availability of export working capital for businesses, the TPCC recommended that the Eximbank and the SBA harmonize their programs and procedures to make them more streamlined, consistent, and simple. 
In February 1997, we reported that the Eximbank and the SBA had made progress in harmonizing certain aspects of their respective programs, including the loan guarantee coverage, the application form, and initial loan application fee. We also noted that each agency had taken other steps to improve program delivery, such as providing staff with export financing training, conducting seminars that were attended by lenders, and developing partnerships with both the private and public sectors. These partnerships include programs in which (1) exporters can have working capital guarantees processed and approved by a network of private sector lenders located in various states and (2) federal resources are leveraged through coguarantee agreements with state agencies. In addition, we had identified eight states that provided export working capital guarantees for small businesses during fiscal year 1996. Although the Eximbank and SBA programs were still not fully standardized at the time our report was issued, the steps toward harmonization and other program initiatives had helped to simplify the lending process, increase the number and value of loans guaranteed, and expand the number of exporters and lenders who participate in the programs. Our past work has also highlighted another potential opportunity to develop partnerships with public and private sector entities by sharing investment risk. In 1993, the TPCC recommended raising OPIC’s project limits for loans, guarantees, or insurance to better meet the rising demand by U.S. firms to finance major capital projects overseas. The increase in OPIC insurance cover from $100 million to $200 million (as well as its project financing limits) and the private sector’s willingness to have greater involvement in some emerging markets had created opportunities for OPIC to further reduce the risk in its insurance program by sharing the risk with other private or public partners. 
In recent work, we identified three potential options for sharing project risks. For example, OPIC could, on a case-by-case basis, share the risk of losses by reinsuring or coinsuring projects with private insurers or by sharing project risk with investors. Under the reinsurance scenario, OPIC could insure part of its high- and medium-risk portfolio with private sector insurance companies at mutually acceptable rates. OPIC could also coinsure projects with private or other public insurers. A third option could involve sharing project risk with investors by offering less than the standard 20-year insurance cover, as is the practice of other public insurers. In commenting on our report, OPIC officials told us that while reinsurance, coinsurance, and greater risk-sharing may be good risk mitigation strategies, OPIC should maintain flexibility about when to use them so that it can continue meeting U.S. foreign policy objectives and the needs of its customers. According to the TPCC, the competition for major procurements by foreign countries is fierce. Major foreign competitor nations, which have subsidized export programs, have become increasingly aggressive in helping their firms expand exports. In particular, the TPCC noted that the availability and competitiveness of export financing often played a decisive role in the export success of U.S. companies. In general, the U.S. approach has been to neutralize foreign competitor nations’ support of their exporters by providing similar financing for U.S. exporters. This requires accurate information on the nature and extent of the foreign competitor programs. The U.S. approach has also involved working through the Organization for Economic Cooperation and Development (OECD) to seek international agreements to standardize practices, with the ultimate goal of reducing and eliminating export subsidies. Past GAO work has addressed the nature and extent of U.S. 
foreign competitors’ export finance programs, a key U.S. effort to combat foreign competitor practices, and opportunities for reducing the cost of the Eximbank’s programs while remaining competitive with the programs of competitor export credit agencies. Over 70 countries have export credit agencies designed to help their businesses export. Various methods can be used to measure the level of support provided by export credit agencies. One way is to look at support in terms of the share of financing commitments extended. About half of all export credit support extended in 1995 (the latest year for which comparable data are available) was provided by the seven largest industrial nations. Of this amount, Japan (56 percent), France (20 percent), and Germany (9 percent) accounted for the largest shares. The United States (the Eximbank) ranked fourth with Canada (each with 5 percent), followed by the United Kingdom (3 percent) and Italy (2 percent). Another way is to look at the percentage of national exports financed by the seven export credit agencies. Using this approach, the Eximbank is tied for last with 2 percent of total exports. In contrast, Japan supported 32 percent of its exports, with France second at 18 percent. The support provided by Canada, Germany, the United Kingdom, and Italy ranged from 7 to 2 percent. Although these measurements do not show the Eximbank near the top of the Group of Seven providers, other measures present a different picture. For example, the Eximbank data show that it remains preeminent with respect to the number of markets for which unrestricted medium- and long-term cover is provided—more than twice as many markets as Canada, its nearest competitor. Although comparing the export credit agency programs is difficult, we studied the five largest exporting countries of the European Union and found that there is no single export finance model. 
One fundamental difference between the Eximbank and these export credit agencies is the concept of risk sharing. The Eximbank provides 100-percent, unconditional political and commercial risk protection on most of the medium- and long-term coverage it issues. The European agencies (with the exception of the United Kingdom) generally require exporters and banks to assume a portion of the risks (usually 5 to 10 percent) associated with such support. On the multilateral front, the United States has participated in negotiations within the OECD to implement agreements or initiate efforts to limit government subsidies and provide common guidelines for national export-financing assistance programs. The OECD’s Arrangement on Guidelines for Officially Supported Export Credits set terms and conditions for government-supported export loans. The agreement has been progressively strengthened since it was first established in 1978. Competitors’ tied aid practices are also of concern to the United States—in particular, when contract awards for overseas projects are based on the availability of such concessional financing rather than on the price and quality of the goods or services exported. Such practices can distort recipient countries’ development decisions and place U.S. exporters at a competitive disadvantage. Since the early 1980s, the United States has negotiated a series of increasingly stronger agreements within the OECD to restrict the use of distorting tied aid. In addition, in 1986 the Congress authorized the Eximbank to create a “war chest” fund to counter other countries’ use of tied aid offers. To meet foreign competitors’ use of tied aid, the TPCC recommended the development and implementation of strategies to further reduce the use of tied aid worldwide. In 1994, the Eximbank announced a new policy for responding to competitors’ tied aid offers. 
Rather than using the fund to enforce the OECD agreement, Eximbank was to become more actively involved in trying to deter tied aid at an earlier stage in a project’s development. The Eximbank’s policy was to issue tied aid “willingness to match” indications and “letters of interest,” which are contingent commitments to match foreign tied aid should it be offered. In 1995, we testified that although it was too early to determine the effect of the U.S. strategy, there were initial indications of progress. Our past work has also highlighted two broad options—raising fees for services and reducing program risks—that would allow the Eximbank to reduce subsidies while remaining competitive with foreign export credit agencies. These options would not require a change in the Eximbank’s present authority. However, we acknowledged that these options would need to be considered within the full context of their trade and foreign policy implications. One option for reducing subsidy costs at the Eximbank would be to increase the fees charged for its financing programs while still satisfying the congressional mandate for setting fees at levels that are fully competitive with competitor nation programs. To illustrate, we estimated that the Eximbank could have saved about $84 million in fiscal year 1995 if it had raised its fees to a level where they were at the mid-range (as low as or lower than 45 percent to 50 percent rather than at about 75 percent) of the fees charged by competitor nation programs in the same importing country. The U.S. government continues to use international forums such as the OECD to work toward reducing and eventually eliminating subsidized export finance programs. Since our report was issued, the OECD countries have made progress in establishing minimum fees across all major export credit agencies. In 1997, the OECD set a minimum fee for services, effective in April 1999. 
Given concerns about keeping the Eximbank’s programs competitive with its competitor nations’ programs, this agreement should provide the Eximbank with a greater opportunity to further reduce the costs of its operations by raising fees. Another option for reducing subsidy costs involves reducing program risks. As stated earlier, the Eximbank provides 100-percent, unconditional political and commercial risk protection on virtually all of the medium- and long-term cover that it issues. In contrast, some of the Eximbank’s major competitors, such as the European export credit agencies, generally require exporters and banks to assume a portion of the risks associated with such support and do not absorb 100 percent of the risks themselves. Instead, they require that exporters or banks assume a minimum percentage (usually 5 percent to 10 percent) of the risks. Five years have passed since the Congress mandated that the TPCC develop a governmentwide strategy for federal export promotion activities. A key question is whether federal export programs and resources are strategically focused to help U.S. businesses effectively compete in foreign markets. The federal strategy targets markets, centralizes export services, and addresses unfair barriers to exports. However, a driving force behind the passage of the Export Enhancement Act was to identify areas of overlap and duplication among the various federal export promotion activities and propose means of eliminating them. Performance measures are a key prerequisite for allocating export promotion resources. The TPCC agencies have developed measures that indicate outputs, such as volume of loans or number of clients, and outcome measures, such as the number of exports generated by individual programs. However, these measures are not sufficient to address the core question of whether these programs meet the TPCC’s strategic goals and efficiently and effectively serve their customers. 
Another key element of the act was to identify ways to develop closer partnerships with federal and nonfederal entities that provide similar export promotion services. While a number of initiatives have been taken, our past review of the export assistance centers raised several issues related to the integration of the delivery of services. The TPCC agencies have not since evaluated how effectively these centers are operating to achieve the intended objective of streamlining the delivery and quality of services to small- and medium-sized businesses. The TPCC plans to review the effectiveness of the centers in 1998. Until such a review occurs, we cannot know if these colocated agencies have integrated their services to effectively serve U.S. exporters. Mr. Chairman, this concludes our prepared remarks. We would be happy to respond to any questions you or other Task Force members may have.

National Export Strategy (GAO/NSIAD-96-132R, Mar. 26, 1996).
Export Promotion: Rationales for and Against Government Programs and Expenditures (GAO/T-GGD-95-169, May 23, 1995).
Export Promotion: Governmentwide Plan Contributes to Improvements (GAO/T-GGD-94-35, Oct. 26, 1993).
Export Promotion: Initial Assessment of Governmentwide Strategic Plan (GAO/T-GGD-93-48, Sept. 29, 1993).
Export Promotion Strategic Plan: Will It Be a Vehicle for Change? (GAO/GGD-93-43, July 26, 1993).
Export Promotion: Governmentwide Strategy Needed for Federal Programs (GAO/T-GGD-93-7, Mar. 15, 1993).
Export Promotion: Federal Approach Is Fragmented (GAO/T-GGD-92-68, Aug. 10, 1992).
Export Promotion: Overall U.S. Strategy Needed (GAO/T-GGD-92-40, May 20, 1992).
Export Promotion: U.S. Programs Lack Coherence (GAO/T-GGD-92-19, Mar. 4, 1992).
Export Promotion: Federal Programs Lack Organizational and Funding Cohesiveness (GAO/NSIAD-92-49, Jan. 10, 1992).
Overseas Investment: Issues Related to the Overseas Private Investment Corporation’s Reauthorization (GAO/NSIAD-97-230, Sept. 8, 1997).
Export Finance: Federal Efforts to Support Working Capital Needs of Small Business (GAO/NSIAD-97-20, Feb. 13, 1997).
Export-Import Bank: Options for Achieving Possible Budget Reductions (GAO/NSIAD-97-7, Dec. 20, 1996).
Export Finance: Comparative Analysis of U.S. and European Union Export Credit Agencies (GAO/GGD-96-1, Oct. 24, 1995).
Export Promotion: Improving Small Businesses’ Access to Federal Programs (GAO/T-GGD-93-22, Apr. 28, 1993).
U.S. Export Assistance Centers: Customer Service Enhanced, but Potential to Improve Operations Exists (GAO/T-NSIAD-96-213, July 25, 1996).
One-Stop Shops (GAO/GGD-93-IR, Oct. 6, 1992).
U.S. Agricultural Exports: Strong Growth Likely, but U.S. Export Assistance Programs’ Contribution Uncertain (GAO/NSIAD-97-260, Sept. 30, 1997).
Export-Import Bank: Key Factors in Considering Eximbank Reauthorization (GAO/T-NSIAD-97-215, July 17, 1997).
Export-Import Bank: Reauthorization Issues (GAO/T-NSIAD-97-147, Apr. 29, 1997).
International Trade: U.S. Efforts to Counter Competitors’ Tied Aid Practices (GAO/T-GGD-95-128, Mar. 28, 1995).
International Trade: Combating U.S. Competitors’ Tied Aid Practices (GAO/T-GGD-94-156, May 25, 1994).

GAO discussed issues related to the U.S. government’s role in promoting exports, focusing on the: (1) evolution of the government strategy designed to reshape federal export promotion activities; and (2) results and issues related to GAO’s past work on U.S. government efforts to improve U.S. export promotion programs. GAO noted that: (1) Congress, in enacting the Export Enhancement Act of 1992, required the Trade Promotion Coordinating Committee (TPCC) to improve the delivery of export assistance to U.S. firms; (2) the TPCC efforts have focused on three broad areas: (a) devising a governmentwide strategy and a unified budget that would set priorities; (b) developing partnerships with all levels of government and the private sector; and (c) dealing with obstacles that U.S. businesses encounter as they compete against businesses supported by their foreign governments; (3) the TPCC has taken a number of steps in each of these areas, but some of its goals remain elusive; (4) with respect to the strategy, the cooperating agencies have established priority foreign markets and increased the visibility of the components and distribution of the aggregate federal expenditures on export promotion activities; (5) partnerships have been developed through a number of initiatives, including a network of export assistance centers to help unify the delivery of export promotion services and efforts to make the U.S. Export-Import Bank’s (Eximbank) and the Small Business Administration’s (SBA) working capital program procedures more consistent and to enable exporters to receive financing more easily; (6) the TPCC has also worked to keep U.S. 
financing programs fully competitive; (7) GAO’s work also suggests that outstanding issues remain regarding the effectiveness of the cooperative efforts of the agencies under the TPCC to achieve the congressional objectives of the 1992 legislation; (8) for example, while the expenditures of the 19 federal agencies are now clearly presented in one place, major challenges remain for achieving the unified budget that would align resources with national priorities and program performance; (9) while the Department of Commerce and the SBA have colocated in 19 cities, with some closer ties with Eximbank officials, no evaluation has been completed on how effectively these centers are operating to achieve the intended objective of streamlining the delivery and quality of services to small- and medium-sized businesses; and (10) the elapsing of 5 years since the passage of the Export Enhancement Act provides a good opportunity for Congress to assess the achievements and remaining challenges for the effort to strategize, streamline, and coordinate the wide array of federal export promotion efforts through the institutional mechanism of the TPCC.
Financial institutions need systems to identify, assess, and manage risks to their operations from internal and external sources. These risk management systems are critical to responding to rapid and unanticipated changes in financial markets. Risk management depends, in part, on an effective corporate governance system that addresses risk across the institution and also within specific areas of risk, including credit, market, liquidity, operational, and legal risk. The board of directors, senior management (and its designated risk-monitoring unit), the audit committee, internal auditors, external auditors, and others have important roles to play in an effectively operating risk management system. The different roles that these groups play represent critical checks and balances in the overall risk management system. Since 1991, the Congress has passed several laws that emphasize the importance of internal controls, including risk management, at financial institutions, and the Committee of Sponsoring Organizations of the Treadway Commission (COSO) has issued guidance that management of financial institutions can use to assess and evaluate its internal controls and enterprisewide risk management. Following the savings and loan crisis of the 1980s, the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) strengthened corporate governance at large U.S. banks and thrifts. FDICIA required management to annually assess its system of internal control over financial reporting and required the external auditors to attest to management’s assertions. The corporate governance model established under FDICIA emphasized strong internal control systems, proactive boards of directors, and independent, knowledgeable audit committees. In 1992, COSO issued its Internal Control – Integrated Framework, which it subsequently revised in 1994. 
The COSO Framework set out criteria for establishing key elements of corporate governance, especially the “tone at the top.” The framework also set forth the five components of an effective system of internal control: control environment, risk assessment, control activities, information and communication, and monitoring. After the failures of Enron and WorldCom, Congress passed the Sarbanes-Oxley Act of 2002 (SOX), which required managements of public companies to assess their systems of internal control, with external auditor attestations, though implementation for smaller public companies has been gradual and is not yet complete. Under section 404 of SOX, the SEC required that management identify what framework it used to assess the system of internal control over financial reporting. Though it did not mandate any particular framework, the SEC recognized that the COSO Framework satisfied the SEC’s own criteria and allowed its use as an evaluation framework. In 2004, COSO issued Enterprise Risk Management – Integrated Framework (ERM Framework), though it is not a binding framework for any particular entity or industry. The ERM Framework, which encompasses the previous internal control framework, establishes best practices and expands the criteria and tools that management can use to assess whether it has an effective risk management system. The framework encourages the board of directors and senior management, in their corporate governance roles, to set the risk appetite of the entity, which is the amount of risk the entity is willing to accept in its overall strategy. Management further sets risk objectives to achieve the entity’s goals and sets risk tolerances to ensure that the risk appetite is not exceeded. Regulators also have a role in assessing risk management at financial institutions. In particular, oversight of risk management at large financial institutions is divided among a number of regulatory agencies. 
The Federal Reserve oversees risk management at bank holding companies and at state-chartered banks that are members of the Federal Reserve System; OTS oversees thrift holding companies and thrifts; SEC and FINRA oversee risk management at SEC-registered U.S. broker-dealers; and OCC oversees risk management at national banks. The Federal Reserve and OTS have long had authority to supervise holding companies. The Federal Reserve’s authority is set forth primarily in the Bank Holding Company Act of 1956, which contains the supervisory framework for holding companies that control commercial banks. OTS’s supervisory authority over thrift holding companies is set forth in the Home Owners’ Loan Act. In the Gramm-Leach-Bliley Act of 1999 (GLBA), Congress expanded the range of permissible holding company activities and affiliations and also set forth restrictions and guidance on how those companies should be supervised. However, Congress did not clearly express the aims of holding company supervision. GLBA authorizes the Federal Reserve and OTS to examine the holding company and each subsidiary in order to (a) inform the regulator of “the nature of the operations and financial condition” of the holding company and its subsidiaries; (b) inform the regulator of the financial and operational risks within the holding company system that may threaten the safety and soundness of the holding company’s bank subsidiaries, and of the systems for monitoring and controlling such risks; and (c) monitor compliance with applicable federal laws. 
On the other hand, GLBA specifies that the focus and scope of examinations of holding companies and any of their subsidiaries shall “to the fullest extent possible” be limited to the holding company and “any subsidiary that could have a materially adverse effect on the safety and soundness of a depository institution subsidiary” due to the size, condition or activities of the nonbank subsidiary or the nature or size of transactions between that subsidiary and the banking subsidiary. In our work over the years, we have encountered a range of perspectives on the focus of holding company examinations, some of which emphasize the health of the depository institution as the primary examination focus and some of which look more expansively to the holding company enterprise under certain conditions. In addition to the provisions generally applicable to holding company supervision, GLBA also limits the circumstances under which both holding company regulators and depository institution regulators may examine functionally regulated subsidiaries of bank holding companies, such as broker-dealers. Gramm-Leach-Bliley permits holding company regulators to examine functionally regulated subsidiaries only under certain conditions, such as where the regulator has reasonable cause to believe that the subsidiary is engaged in activities that pose a material risk to an affiliated bank or that an examination is necessary to obtain information on financial and operational risks within the holding company system that may threaten an affiliated bank’s safety and soundness. The examination authority of depository institution regulators permits the examination of bank affiliates to disclose fully an affiliate’s relations with the bank and the effect of those relations on the bank. 
However, with respect to functionally regulated affiliates of depository institutions, Gramm-Leach-Bliley imposes the same restraint on the use of examination authority that applies to OTS and the Federal Reserve with respect to holding companies. That is, Gramm-Leach-Bliley instructs that bank and holding company supervisors generally are to limit the focus of their examinations of functionally regulated affiliates and, to the extent possible, are to rely on the work of the primary bank and functional regulators that supervise holding company subsidiaries. An example of this situation would be a holding company with a national bank or thrift subsidiary and a broker-dealer subsidiary. Under GLBA, the holding company regulator is to rely “to the fullest extent possible” on the work of primary bank and functional regulators for information on the respective entities. Also under GLBA, bank supervisors are similarly limited with respect to affiliates of the institutions they supervise. SEC’s authority to examine U.S. broker-dealers is set forth in the Securities Exchange Act of 1934. Under the 1934 act, SEC’s examination authority over broker-dealers does not permit SEC to require examination reports on affiliated depository institutions, and if SEC seeks nonroutine information about a broker-dealer affiliate that is subject to examination by a bank regulator, SEC must notify and generally must consult with the regulator regarding the information sought. Oversight of U.S. broker-dealers is performed by SEC’s Division of Trading and Markets (Trading and Markets) and Office of Compliance Inspections and Examinations (OCIE). In addition, SEC delegates some of its authority to oversee U.S. broker-dealers to FINRA, a self-regulatory organization that was established in 2007 through the consolidation of NASD and the member regulation, enforcement, and arbitration functions of the New York Stock Exchange. 
From 2005 to 2008, under the alternative net capital rule for broker-dealers, SEC conducted a voluntary consolidated supervised entity (CSE) program under which five investment bank holding companies consented to having SEC oversee them on a consolidated basis. Today, no institutions are subject to SEC oversight at the consolidated level, but several broker-dealers within bank holding companies are still subject to the alternative net capital rule on a voluntary basis. The Federal Reserve, FINRA, OCC, OTS, and SEC each identify areas of risk relating to the large, complex financial institutions they oversee and examine risk management systems at regulated institutions. However, the banking and securities regulators take different approaches. The banking regulators (Federal Reserve, OCC, and OTS) use a combination of supervisory activities, including informal tools and examination-related activities, to assess the quality of institutional risk management systems, and they assign each institution an annual rating. SEC and FINRA aggregate information from officials and staff of the supervised institutions throughout the year to identify areas of concern across all broker-dealers. For those broker-dealers covered by the alternative net capital rule, SEC and FINRA emphasize compliance with that rule during target examinations. Under the CSE program, SEC continuously supervised and monitored the institutions in the program. Banking regulators carry out a number of supervisory activities in overseeing risk management at large, complex financial institutions. To conduct on-site continuous supervision, banking regulators often station examiners at specific institutions. This practice allows examiners to continuously analyze information provided by the financial institution, such as board meeting minutes, institution risk reports and management information system reports, and, for holding company supervisors, supervisory reports provided to other regulators, among other things. 
This type of supervision allows for timely adjustments to the supervisory strategy of the examiners as conditions change within the institution. Bank examiners do not conduct a single annual full-scope examination of the institution. Rather, they conduct ongoing examinations that target specific areas at the institutions (target examinations) and annually issue an overall rating on the quality of risk management. Each regulator had a process to assess risk management systems. While each included certain core components, such as developing a supervisory plan and monitoring, the approach used and level of detail varied. The Federal Reserve’s guidance consisted of a detailed risk assessment program that included an analytic framework for developing a risk management rating for holding companies. Unlike most bank regulatory examination guidance, this guidance is not yet publicly available. According to Federal Reserve officials, the primary purpose of the framework is to help ensure a consistent regulatory approach for assessing inherent risk and risk management practices of large financial institutions (the holding company) and make informed supervisory assessments. The Federal Reserve program for large complex banking organizations is based on a “continuous supervision” model that assigns a dedicated team to each institution. Those teams are responsible for completing risk assessments, supervisory plans, and annual assessments. The risk assessment includes an evaluation of inherent risk (credit, market, operational, liquidity, and legal and compliance) and related risk management and internal controls. The risk assessment is often the starting point for the supervisory plan as well as a supporting document for the annual assessment. The annual assessment requires the dedicated team to evaluate and rate the firm’s risk management, its financial condition, and the potential impact of its non-depository operations on the depository institution. 
To apply the risk or “R” rating, the examiner must consider (1) board of director and senior management oversight; (2) policies, procedures, and limits; (3) risk monitoring and management information system; and (4) internal controls for each of the risk areas. The examiners then provide an overall “R” rating for the institution. OCC’s onsite examiners assess the risks and risk management functions at large national banks using a detailed approach that is similar to that used by the Federal Reserve’s examiners. The core assessment is OCC’s primary assessment tool at the institutional level. According to OCC’s guidance, its examiners are required to assess the quality, quantity, and overall direction of risks in nine categories (strategic, reputation, credit, interest rate, liquidity, price, foreign currency translation, transaction, and compliance). To determine the quality of risk management, OCC examiners assess policies, processes, personnel, and control systems in each category. This risk assessment is included in the examination report that is sent to the bank’s board of directors. OCC also provides a rating based on the bank’s capital, asset quality, management, earnings, liquidity, and sensitivity to market risk (the CAMELS rating), all of which can be impacted by the quality of a risk management system. OCC’s supervisory strategy or plan for targeted examinations is developed from this Risk Assessment System. Examiners can change a bank’s ratings at any time if the bank’s conditions warrant that change. Targeted examinations are a key component of OCC’s oversight. Based on the materials we reviewed covering the last 2 years, OCC conducted 23 targeted examinations in 2007 and 45 in 2008 at a large national bank. These examinations focused on specific areas of risk management, such as governance, credit, and compliance. 
Recently revised OTS guidance requires its examiners to review large and complex holding companies to determine whether they have a comprehensive system to measure, monitor, and manage risk concentrations; determine the major risk-taking entities within the overall institution; and evaluate the control mechanisms in place to establish and monitor risk limits. OTS’s recently revised guidance on assessing risk management includes a risk management rating framework that is similar to the Federal Reserve’s. It includes the same risk management rating subcomponents—governance/board and senior management oversight; policies, procedures, and limits; risk monitoring and management information systems; and internal controls—and criteria that the Federal Reserve applies to bank holding companies. However, OTS considers additional risk areas, such as concentration or systemic risk. Starting in 2007, OTS used a risk matrix to document the level of 13 inherent risks by business unit. The matrix also includes an assessment of each unit’s risk mitigation or risk management activities, including internal controls, risk monitoring systems, policies/procedures/limits, and governance. OTS began using the risk matrix to develop its supervisory plan. Based on our review of examination materials, OTS conducted targeted examinations on risk management in such areas as consumer lending and mortgage-backed securities. In the last few years, the banking regulators have also conducted examinations that covered several large, complex financial institutions on specific issues such as risk management (horizontal examinations).
According to the Federal Reserve, horizontal examinations focus on a single area or issue and are designed to (1) identify the range of practices in use in the industry, (2) evaluate the safety and soundness of specific activities across business lines or across systemically important institutions, (3) provide better insight into the Federal Reserve’s understanding of how a firm’s operations compare with a range of industry practices, and (4) consider revisions to the formulation of supervisory policy. During the period of our review, the Federal Reserve completed several horizontal examinations of large, complex banking organizations on topics including stress testing and collateral management. According to Federal Reserve officials, examiners generally provide institutions with feedback that tells them how they are doing relative to their peers, and any serious weaknesses identified would be conveyed as well. With the Federal Reserve, OCC conducted a horizontal examination on advanced credit risk practices, and OTS conducted a review across institutions for nontraditional mortgages and used the findings to issue supplemental guidance. According to an OCC official, the regulator uses the findings in horizontal reviews as a supervisory tool and to require corrective actions, as well as a means to discover information on bank practices to issue supplemental guidance. SEC and FINRA generally assess risk management systems of large broker-dealers using discrete but risk-focused examinations. The focus of SEC and FINRA oversight is on compliance with their rules and the Securities Exchange Act of 1934. Although SEC and FINRA are in continuous contact with large, complex institutions, neither SEC nor FINRA staff conduct continuous onsite monitoring of broker-dealers that involves an assessment of risks. FINRA’s coordinator program is continuous supervision, albeit not on site.
According to SEC and FINRA, however, they receive financial and risk area information on a regular basis from the largest firms and those of financial concern through the OCIE compliance monitoring program, the FINRA capital alert program, and regular meetings with the firms. To identify risks, they aggregate information from their officials and staff throughout the year to identify areas that may require special attention across all broker-dealers. SEC and FINRA conduct regularly scheduled target examinations that focus on the risk areas identified in their risk assessment and on compliance with relevant capital rules and customer protection rules. SEC’s internal controls risk management examinations, which started in 1995, cover the top 15 wholesale and top 15 retail broker-dealers as well as a number of mid-sized broker-dealers with a large number of customer accounts. At the largest institutions, SEC conducts examinations every three years, while FINRA conducts annual examinations of all broker-dealers. According to Trading and Markets, the CSE program was modeled on the Federal Reserve’s holding company supervision program, but continuous supervision was usually conducted off site by a small number of examiners; SEC did not rate risk management systems or use a detailed risk assessment process to determine areas of highest risk. During the CSE program, Trading and Markets staff concentrated their efforts on market and liquidity risks because the alternative net capital rule focused on these risks, and on operational risk because of the need to protect investors. According to OCIE, its examiners focused on market, credit, operational, and legal and compliance risks, as well as senior management, internal audit, and new products.
Because only five investment banks were subject to consolidated supervision by SEC, SEC staff believed they did not need to develop an overall supervisory strategy or written plans for the individual institutions SEC supervised; however, OCIE drafted detailed scope memorandums for its target examinations. While no institutions are subject to consolidated supervision by SEC at this time, a number of broker-dealers are subject to the alternative net capital rule. SEC and FINRA conduct horizontal or “sweep” examinations and, for example, have completed one for subprime mortgages. OCIE officials said that the office had increased the number of these types of examinations since the current financial crisis began. Under the consolidated supervised entity program, Trading and Markets conducted several horizontal examinations aimed at discovering the range of industry practice in areas such as leveraged lending. The banking regulators have developed guidance on how they should communicate their examination findings to help ensure that financial institutions take corrective actions. Bank regulators generally issue findings or cite weaknesses in supervisory letters or an annual examination report addressed to senior management of the financial institution. However, regulators also meet with institution management to address identified risk management weaknesses. Examples include: After a target examination, the Federal Reserve, OCC, and OTS each prepare supervisory letters or reports of examination identifying weaknesses that financial institutions are expected to address in a timely manner. In addition to issues or findings, the Federal Reserve and OCC supervisory letters provided a specific timeframe for the institution to send a written response to the bank regulator articulating how the institution planned to address the findings. In these instances, for the files we reviewed, the institutions complied with the timeframes noted in the supervisory letter.
These letters may be addressed to the board of directors, the CEO, or, as we found, the senior managers responsible for the program. For example, a Federal Reserve Bank addressed a recent targeted examination on a holding company’s internal audit function to the chief auditor of the holding company. Similarly, OCC addressed an examination of advanced risk management processes to a bank’s chief credit officer. OTS also addressed some reports of target examinations to senior managers responsible for specific programs. In its supervisory letters, OCC sometimes identifies “Matters Requiring Attention,” which instruct the bank to explain how it will address the matter in a timely manner. According to its supervisory guidance, matters requiring attention include practices that deviate from sound governance, internal control, and risk management principles and that may adversely impact the bank’s earnings or capital, risk profile, or reputation if not addressed. According to its guidance, OCC tracks matters requiring attention until they are resolved and maintains a record when these matters are resolved and closed out. OCC also includes recommendations in its supervisory letters to national banks; these are suggestions on how a bank can operate a specific program or business line more effectively. After the beginning of the financial crisis, the Federal Reserve issued revised examination guidance in July 2008 that established three types of findings: matters requiring immediate attention, matters requiring attention, and observations. Previously, each of the individual Federal Reserve Banks had its own approach to defining findings. Matters requiring attention and observations are similar to related practices followed by OCC. Matters requiring immediate attention are considered more urgent.
According to the Federal Reserve’s guidance, matters requiring immediate attention encompass the highest priority concerns and include matters that have the potential to pose significant risk to the organization’s safety and soundness or that represent significant instances of noncompliance with laws and regulations. OTS examiners may list recommendations in the report, findings, and conclusions, but in the materials we reviewed examiners did not report these in a standard way. While members of the Board of Directors are required to sign the report of annual examination indicating that they have read the report, they are not required to submit a written response. The OTS Handbook Section 060, Examination Administration, provides guidance on the use of “matters requiring board attention” or other lesser supervisory corrective actions that should be addressed in the examination correspondence. According to OTS, matters requiring board attention and corrective actions are also tracked in its regulatory action system for follow-up. For 2008, we reviewed one regulator’s tracking report of matters requiring attention at one institution and found that only a small number of the 64 matters requiring attention relating to risk management and internal controls had been closed out or considered addressed by the end of January 2009. The examiners explained that some matters, such as adjustments to an institution’s technology framework, can be time-consuming. Another regulator told us that it does not track when institutions have implemented remedial actions. Because the banking regulators are generally on site and continuously monitoring large, complex institutions, examiners told us that a significant part of their efforts to improve risk management systems was undertaken through regularly scheduled meetings with senior management.
According to Federal Reserve and OCC officials, these meetings allow opportunities for examiners to follow up with management concerning actions that they expect the financial institutions to implement. A Federal Reserve examiner explained that several meetings were held with officials at a holding company concerning an internal control matter in order to help ensure that the institution was addressing the issue. For its complex and international organizations program, OTS directs its examiners to use regular meetings with senior management and periodic meetings with boards of directors and any relevant committees to effect change. OTS guidance indicates that examiners’ regular meetings with senior management are designed to communicate and address any changes in risk profile and corrective actions. OTS also views annual meetings with the Board of Directors as a forum for discussing significant findings and management’s approach for addressing them. In addition to these tools, bank regulators’ approval authorities related to mergers and acquisitions could be used to persuade institutions to address risk management weaknesses. For example, the Federal Reserve, OCC, and OTS are required to consider risk management when they approve bank or thrift acquisitions or mergers and could use identified weaknesses in this area to deny approvals. In addition, bank regulators have to approve the acquisition of bank charters and must assess management’s ability to manage the bank or thrift charter being acquired. If SEC’s OCIE or FINRA examiners discover a violation of SEC or FINRA rules, the institution is required to resolve the deficiency in a timely manner. OCIE developed guidance on deficiency letters for examinations. According to SEC and FINRA staff, because SEC or FINRA rules do not contain specific requirements for internal controls, problems with internal controls generally are not cited as deficiencies.
However, weaknesses in internal controls can rise to such a level as to violate other FINRA rules, such as supervision rules. Deficiencies and weaknesses are followed up on in subsequent examinations. OCIE’s compliance audits require institutions to correct deficiencies and address weaknesses. OCIE staff told us that if the institutions do not address deficiencies in a timely manner, they may be forwarded to the enforcement division. For example, OCIE staff was able to discuss limit violations with one firm and required the firm to change its risk limit system to significantly reduce its limit violations—indicating senior management was taking steps to better oversee and manage its risks. Under the consolidated supervised entity program, SEC’s Trading and Markets relied on discussions with management to effect change. For example, Trading and Markets staff told us that they had discussions with senior management that led to changes in personnel. In the years leading up to the financial crisis, some regulators identified weaknesses in the risk management systems of large, complex financial institutions. Regulators told us that despite these identified weaknesses, they did not take forceful action—such as changing their assessments—until the crisis occurred because the institutions reported a strong financial position and senior management had presented the regulators with plans for change. Moreover, regulators acknowledged that in some cases they had not fully appreciated the extent of these weaknesses until the financial crisis occurred and risk management systems were tested by events. Regulators also acknowledged they had relied heavily on management representations of risks. In several instances, regulators identified shortcomings in institutions’ oversight of risk management at the limited number of large, complex institutions we reviewed but did not change their overall assessments of the institutions until the crisis began in the summer of 2007.
For example, before the crisis one regulator found significant weaknesses in an institution’s enterprisewide risk management system stemming from a lack of oversight by senior management. In 2006, the regulator notified the institution’s board of directors that the 2005 examination had concluded that the board and senior management had failed to adequately oversee financial reporting, risk appetite, and internal audit functions. The regulator made several recommendations to the board to address these weaknesses. We found that the regulator continued to find some of the same weaknesses in subsequent examination reports, yet examiners did not take forceful action to require the institution to address these shortcomings until the liquidity crisis occurred and the severity of the risk management weaknesses became apparent. When asked about the regulator’s assessment of the holding company in general and risk management in particular given the identified weaknesses, examiners told us that they had concluded that the institution’s conditions were adequate, in part, because it was deemed to have sufficient capital and the ability to raise more. Moreover, the examiners said that senior management had presented them with plans to address the risk management weaknesses. In another example, other regulators found weaknesses related to an institution’s oversight of risk management before the crisis. One regulator issued a letter to the institution’s senior management in 2005 requiring that the institution respond, within a specified time period, to weaknesses uncovered in an examination. The weaknesses included the following: The lack of an enterprisewide framework for overseeing risk, as specified in the COSO framework. The institution assessed risks (such as market or credit risks) on an individual operating unit basis, and was not able to effectively assess risks institutionwide. 
A lack of common definitions of risk types and of corporate policy for approving new products, which could ensure that management had reviewed and understood any potential risks. An institutional tendency to give earnings and profitability growth precedence over risk management. In addition, the regulator recommended that senior management restructure the institution’s risk management system to develop corporate standards for assessing risk. However, the regulator’s assessment of the institution’s risk management remained satisfactory during this period because senior management reported that they planned to address these weaknesses and, according to examiners, appeared to be doing so. Moreover, the examiners believed that senior management could address these weaknesses in the prevailing business environment of strong earnings and adequate liquidity. After earnings and liquidity declined during the financial crisis that began in 2007, the examiners changed their assessment, citing many of the same shortcomings in risk management that they had identified in 2005. At one institution, a regulator noted in a 2005 examination report that management had addressed previously identified issues for one type of risk and that the institution had taken steps to improve various processes, such as clarifying the roles and responsibilities of risk assessment staff, and shortening internal audit cycles of high-risk entities in this area. Later in 2007, the regulator identified additional weaknesses related to credit and market risk management. Regulatory officials told us that weaknesses in oversight of credit and market risk management were not of the same magnitude prior to the crisis as they were in late 2007 and 2008. Moreover, examiners told us that it was difficult to identify all of the potential weaknesses in risk management oversight until the system was stressed by the financial crisis. 
Some regulators told us that they had relied on management representation of risk, especially in emerging areas. For example, one regulator’s targeted risk review relied heavily on management’s representations about the risk related to subprime mortgages—representations that had been based on the lack of historical losses and the geographic diversification of the complex product issuers. However, once the credit markets started tightening in late 2007, the examiners reported that they were less comfortable with management’s representations about the level of risk related to certain complex investments. Examiners said that, in hindsight, the risks posed by parts of an institution do not necessarily correspond with their size on the balance sheet and that relatively small parts of the institution had taken on risks that the regulator had not fully understood. Another regulator conducted a horizontal examination of securitized mortgage products in 2006 but relied on information provided by the institutions. While the report noted that these products were experiencing rapid growth and that underwriting standards were important, it focused on the major risks identified by the firms and their actions to manage those risks as well as on how institutions were calculating their capital requirements. Regulators also identified weaknesses in the oversight and testing of risk models that financial institutions used, including those used to calculate the amount of capital needed to protect against their risk exposures and determine the valuation of complex products. Regulators require institutions to test their models so that the institutions have a better sense of where their weaknesses lie, and OCC developed guidance in 2000 related to model validation that other regulators consider to be the standard. OCC’s guidance states that institutions should validate their models to increase reliability and improve their understanding of the models’ strengths and weaknesses.
The guidance calls for independent reviews by staff who have not helped to develop the models, instituting controls to ensure that the models are validated before they are used, ongoing testing, and audit oversight. The process of model validation should look not only at the accuracy of the data being entered into the model, but also at the model’s assumptions, such as loan default rates. Institutions use capital models as tools to inform their management activities, including measuring risk-adjusted performance, setting prices and limits on loans and other products, and allocating capital among various business lines and risks. Certain large banking organizations have used models since the mid-1990s to calculate regulatory capital for market risk, and the rules issued by U.S. regulators for Basel II require that banks use models to estimate capital for credit and operational risks. The SEC’s consolidated supervised entity program allowed broker-dealers that were part of consolidated supervised entities to compute capital requirements using models to estimate market and credit risk. In addition, institutions use models to estimate the value of complex instruments such as collateralized debt obligations (CDOs). Regulators identified several weaknesses related to financial institutions’ oversight and use of risk models: One regulator found several weaknesses involving the use of models that had not been properly tested to measure credit risks, an important input into institutions’ determinations of capital needed, but did not aggressively take steps to ensure that the firm corrected these weaknesses. In a 2006 letter addressed to the head of the institution’s risk management division, the examiners reported deficiencies in models used to estimate credit risk, including lack of testing, a lack of review of the assumptions used in the models, and concerns about the independence of staff testing the models. 
The regulator issued a letter requiring management to address these weaknesses, but continued to allow the institution to use the models and did not change its overall assessment. Although the institution showed improvement in its processes over time, in late 2007 examiners found that some of the weaknesses persisted. In late 2008, examiners closed the matter in a letter to management but continued to note concerns about internal controls associated with risk management. A horizontal review of credit risk models by the Federal Reserve and OCC in 2008 found a similar lack of controls surrounding model validation practices for assessing credit risks, leading to questions about the ability of large, complex institutions to understand and manage these risks and provide adequate capital to cushion against potential losses. For example, the review found that some institutions lacked requirements for model testing, clearly defined roles and responsibilities for testing, adequate detail for the scope or frequency of validation, and a specific process for correcting problems identified during validation. Before the crisis, another regulator found that an institution’s model control group did not keep a complete inventory of its models and did not have an audit trail for models prior to 2000. The examiners said that they did not find these issues to be significant concerns. However, they were subsequently criticized for not aggressively requiring another institution to take action on weaknesses they had identified that were related to risk models, including lack of timely review, understaffing, lack of independence of risk managers, and an inability or unwillingness to update models to reflect the changing environment. Other regulators noted concerns about pricing models for illiquid instruments, but made these findings only as the crisis was unfolding.
For example, in a 2007 horizontal review of 10 broker-dealers’ exposure to subprime mortgage-related products, SEC and FINRA examiners found weaknesses in pricing assumptions in valuation models for complex financial products. They found that several of these firms relied on outdated pricing information or traders’ valuations for complex financial transactions, such as CDOs. In some cases, firms could not demonstrate that they had assessed the reasonableness of prices for CDOs. Another regulator noted in a 2007 targeted examination that although management had stated that the risk of loss exposure from highly rated CDOs was remote, the downturn in the subprime mortgage market could mean that they would not perform as well as similarly rated instruments performed historically. Because of the inherent limitations of modeling, such as the accuracy of model assumptions, financial institutions also use stress tests to determine how much capital and liquidity might be needed to absorb losses in the event of a large shock to the system or a significant underestimation of the probability of large losses. According to the Basel Committee on Banking Supervision, institutions should test not only for events that could lower their profitability, but also for rare but extreme scenarios that could threaten their solvency. In its January 2009 report, the Basel Committee emphasized the importance of stress testing, noting that it could (1) alert senior management to adverse unexpected losses, (2) provide forward-looking assessments of risk, (3) support enterprisewide communication about the firm’s risk tolerance, (4) support capital and liquidity planning procedures, and (5) facilitate the development of risk mitigation or contingency plans across a range of stressed conditions. Moreover, the report noted that stress testing was particularly important after long periods of relative economic and financial calm when companies might become complacent and begin underpricing risk.
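The enterprisewide scenario testing the Basel Committee describes (applying a common shock to every business line at once, rather than to each line in isolation) can be illustrated with a minimal sketch. All business line names, risk factors, and loss sensitivities below are hypothetical illustrations, not figures from any institution discussed in this report.

```python
# Minimal sketch of enterprisewide scenario-based stress testing.
# All business lines, risk factors, and sensitivities are hypothetical.

# Each line's loss is modeled as a linear response to adverse moves in
# shared risk factors (millions of dollars of loss per 1 percent move).
SENSITIVITIES = {
    "mortgage_lending": {"home_prices": 12.0, "credit_spreads": 2.0},
    "securitization":   {"home_prices": 8.0,  "credit_spreads": 5.0},
    "trading":          {"home_prices": 1.5,  "credit_spreads": 9.0},
}

def line_loss(line, scenario):
    """Loss for one business line under a scenario of factor shocks."""
    return sum(SENSITIVITIES[line].get(factor, 0.0) * shock
               for factor, shock in scenario.items())

def enterprise_loss(scenario):
    """Aggregate loss when the same shock hits every line at once."""
    return sum(line_loss(line, scenario) for line in SENSITIVITIES)

# A severe scenario: home prices fall 20 percent, credit spreads
# widen by 10 percentage-point-equivalents.
severe = {"home_prices": 20.0, "credit_spreads": 10.0}

# Per-line testing (as the institutions reviewed did) shows three
# separate losses; only the aggregated view reveals that one common
# shock drives losses in every line simultaneously.
per_line = {line: line_loss(line, severe) for line in SENSITIVITIES}
total = enterprise_loss(severe)
```

In this toy setup, each business line's stressed loss looks manageable on its own, but the enterprisewide total under the common shock is the sum across all lines, which is the exposure a line-by-line test never surfaces as a single number.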
We found that regulators had identified numerous weaknesses in stress testing at large institutions before the financial crisis. However, our limited review did not identify any instances in which an institution’s lack of worst-case scenario testing prompted regulators to push forcefully for institutional actions to better understand and manage risks. A 2006 Federal Reserve horizontal review of stress testing practices at several large, complex banking institutions revealed that none of the institutions had an integrated stress testing program that incorporated all major financial risks enterprisewide, nor did they test for scenarios that would render them insolvent. The review found that institutions were stress testing the impact of adverse events on individual products and business lines rather than on the institution as a whole. By testing the response of only part of the institution’s portfolio to a stress such as declining home prices, the institution could not see the effect of such a risk on other parts of its portfolio that could also be affected. The review was particularly critical of institutions’ inability to quantify the extent to which credit exposure to counterparties might increase in the event of a stressed market risk movement. It stated that institutions relied on “intuition” to determine their vulnerability to this type of risk. It also found that institutions’ senior managers were confident in their current practices and questioned the need for additional stress testing, particularly for worst-case scenarios that they thought were implausible. The 2006 review included some recommendations for examiners to address with individual institutions, and Federal Reserve officials told us that they met with institutions’ chief risk officers to discuss the seriousness of the findings just before the crisis began.
However, officials told us that the purpose of the review was primarily to facilitate the regulator’s understanding of the full range of stress testing practices, as there was neither a well-developed set of best practices nor supervisory guidance in this area at the time. The regulatory officials also told us that these findings were used to inform guidance issued by the President’s Working Group on assessing exposure from private pools of capital, including hedge funds. However, this guidance focuses on testing the exposure to counterparty risks, such as from hedge funds, and not on testing the impact of solvency-threatening, worst-case scenarios. In hindsight, officials told us that the current crisis had gone beyond what they had contemplated for a worst-case scenario, and they said that they would probably have faced significant resistance had they tried to require the institutions to do stress tests for scenarios such as downgrades in counterparties’ credit ratings because such scenarios appeared unlikely. Other regulators raised concerns about stress testing at individual institutions, but we did not find evidence that they had effectively changed the firms’ stress testing practices. In the materials we reviewed, one regulator recommended that the institution include worst-case scenarios in its testing. In a 2005 examination report, examiners noted a concern about the level of senior management oversight of risk tolerances. This concern primarily stemmed from lack of documentation, stress testing, and communication of firm risk tolerances and the extent to which these were reflected in stress tests. While the firm later took steps to document formal risk tolerances and communicate this throughout the firm, the recommendation related to stress testing remained open through 2008. 
Another regulator required institutions to show that they conducted stress tests of their ability to maintain sufficient funding and liquidity in response to certain events, including a credit downgrade or the inability to obtain unsecured, short-term financing. In addition, institutions were required to document that they had contingency plans to respond to these events. The regulator said that it specifically required institutions to conduct stress tests such as those based on historical events, including the collapse of Long-Term Capital Management or the stock market decline of 1987. However, regulatory staff told us that the liquidity crisis of 2008 was greater than they had expected. In this and other work, we identified two specific shortcomings of the current regulatory system that impact the oversight of risk management at large, complex financial institutions. First, no regulator has a clear responsibility to look across institutions to identify risks to overall financial stability. As a result, both banking and securities regulators continue to assess risk management primarily at an individual institutional level. Even when regulators perform horizontal examinations across institutions, they generally do not use the results to identify potential systemic risks. Although for some period the Federal Reserve analyzed financial stability issues for systemically important institutions it supervises, it did not assess the risks on an integrated basis or identify many of the issues that just a few months later led to the near failure of some of these institutions and to severe instability in the overall financial system. Second, although financial institutions manage risks on an enterprisewide basis or by business lines that cut across legal entities, primary bank and functional regulators may oversee risk management at the level of a legal entity within a holding company.
As a result, their view of risk management is limited or their activities overlap or duplicate those of other regulators including the holding company regulator. In previous work, we have noted that no single regulator or group of regulators systematically assesses risks to the financial stability of the United States by assessing activities across institutions and industry sectors. In our current analysis of risk management oversight of large, complex institutions, we found that, for the period of the review (2006-2008), the regulators had not effectively used a systematic process that assessed threats that large financial institutions posed to the financial system or that market events posed to those institutions. While the regulators periodically conducted horizontal examinations in areas such as stress testing, credit risk practices, and risk management for securitized mortgage products, these efforts did not focus on the stability of the financial system, nor were they used as a way to assess future threats to that system. The reports summarizing the results of these horizontal examinations show that the purpose of these reviews was primarily to understand the range of industry practices or to compare institutions, rather than to determine whether several institutions were engaged in similar practices that might destabilize certain markets, leave the institutions vulnerable to those and other market changes, and ultimately affect the stability of the financial system. From 2005 until the summer of 2007, the Federal Reserve made efforts to implement a systematic review of financial stability issues for certain large financial institutions it oversees and issued internal reports called Large Financial Institutions’ Perspectives on Risk.
With the advent of the financial crisis in the summer of 2007, the report was suspended; however, at a later time the Federal Reserve began to issue risk committee reports that addressed risks across more institutions. While we commend the Federal Reserve for making an effort to look systematically across a group of institutions to evaluate risks to the broader financial system, the Perspectives on Risk report for the second half of 2006, issued in April 2007, illustrates some of the shortcomings in the process. The report reviewed risk areas including credit, market, operational, and legal and compliance risk but did not provide an integrated risk analysis that looked across these risk areas—a shortcoming of risk management systems identified in reviews of the current crisis. In addition, with hindsight, we can see that the report did not effectively identify the severity and importance of a number of factors. For example, it stated that: There are no substantial issues of supervisory concern for these large financial institutions. Asset quality across the systemically important institutions remains strong. In spite of predictions of a market crash, the housing market correction has been relatively mild, and while price appreciation and home sales have slowed and inventories remain high, most analysts expect the housing market to bottom out in mid-2007. The overall impact on a national level will likely be moderate; however, in certain areas housing prices have dropped significantly. The volume of mortgages being held by institutions—warehouse pipelines—has grown rapidly to support collateralized mortgage-backed securities and CDOs. Surging investor demand for high-yield bonds and leveraged loans, largely through structured products such as CDOs, provided continuing strong liquidity that resulted in continued access to funding for lower-rated firms at relatively modest borrowing costs.
Counterparty exposures, particularly to hedge funds, continue to expand rapidly. With regard to the last point, a Federal Reserve examiner stated that the Federal Reserve had taken action to limit bank holding company exposures to hedge funds. The examiner noted that although in hindsight it was possible to see some risks that the regulators had not addressed, it was difficult to see the impact of issues they had worked to resolve. When asked for examples of how the Federal Reserve had used supervisory information in conjunction with its role to maintain financial stability, a Federal Reserve official provided two examples that he believed illustrated how the Federal Reserve’s supervisory role had influenced financial stability before the current financial crisis. First, the official said that the Federal Reserve had used supervisory information to improve the resilience of the private sector clearing and settlement infrastructure after the attacks on the World Trade Center on September 11, 2001. Second, it had worked through the supervisory system to strengthen the infrastructure for processing certain over-the-counter derivative transactions. Federal Reserve officials noted that financial stability is not the sole focus of safety and soundness supervision and that several mechanisms exist through which the supervision function works with other areas of the Federal Reserve in assessing and monitoring financial stability. Federal Reserve regulators indicated that other Federal Reserve functions often consulted with them and that they provided information to these functions and contributed to financial stability discussions, working groups, and decisions both prior to and during the current crisis. In October 2008, the Federal Reserve issued new guidance for consolidated supervision suggesting that in the future the agency would be more mindful of the impact of market developments on the safety and soundness of bank holding companies.
The new guidance says, for instance, that the enhanced approach to consolidated supervision emphasizes several elements that should further the objectives of fostering financial stability and deterring or managing financial crises and help make the financial system more resilient. The guidance says that two areas of primary focus would be: (1) activities in which the financial institutions play a significant role in critical or key financial markets that have the potential to transmit a collective adverse impact across multiple firms and financial markets, including the related risk management and internal controls for these activities, and (2) areas of emerging interest that could have consequences for financial markets, including, for example, the operational infrastructure that underpins the credit derivatives market and counterparty credit risk management practices. Some regulators have noted that the current practice of assessing risk management at the level of a depository institution or broker-dealer did not reflect the way most large, complex institutions manage their risks. Regulators noted that financial institutions manage some risks enterprisewide or by business lines that cross legal entity boundaries. The scope of regulators’ supervisory authorities does not clearly reflect this reality, however. As set forth in the Gramm-Leach-Bliley Act, various regulators can have separate responsibilities for individual components of a large, complex financial institution. In addition, GLBA generally restricts the focus of holding company examinations to the holding company and any subsidiary that could have a materially adverse effect on the safety and soundness of an affiliated bank. OCC examiners told us that it was difficult for them to assess a bank’s market risk management because OCC focused on the national bank’s activities, while the financial institution was managing risk across the bank and the broker-dealer.
The examiners said that in some cases the same traders booked wholesale trades in the bank and in the broker-dealer and that the same risk governance process applied to both. Thus, both the primary bank regulator and the functional regulator were duplicating each other’s supervisory activities. In addition, if initial transactions were booked in one entity, and transactions designed to mitigate the risks in that transaction were booked in another legal entity, neither regulator could fully understand the risks involved. While effective communication among the functional and primary bank regulators could address this limitation, securities regulators told us that they shared information with the Federal Reserve but generally did not share information with OCC. OCC examination materials show that examiners sometimes assessed risks and risk management by looking at the entire enterprise. In addition, OCC examiners often met with holding company executives. In previous work, we noted the likelihood that OCC’s responsibilities and activities as the national bank regulator overlap with the responsibilities and activities of the Federal Reserve in its role as the holding company regulator. We found in this review that this overlap continued to exist; however, we also continued to observe that OCC and the Federal Reserve share information and coordinate activities to minimize the burden to the institution. Securities regulators face similar challenges in assessing risk management at broker-dealers. In a number of past reports, we have highlighted the challenges associated with SEC’s lack of authority over certain broker-dealer affiliates and holding companies. FINRA officials also cited two examples of limitations on their efforts to oversee risk management within broker-dealers. First, they noted that FINRA’s regulatory authority extended only to U.S. broker-dealers and that related transactions generally are booked in other legal entities.
FINRA noted that the riskiest transactions were usually booked in legal entities located offshore. FINRA also noted that often inventory positions booked in the U.S. broker-dealer might hedge the risk in another affiliated legal entity. From time to time, FINRA has requested that the U.S. broker-dealer move the hedge into the broker-dealer to reduce the amount of the losses and protect the capital base of the broker-dealer. An SEC official noted that to take advantage of certain capital treatment the transaction and the hedge would both need to be booked in the broker-dealer. Second, FINRA officials noted that their view was limited because market risk policy is set at the holding company level. In closing, I would like to reiterate a number of central themes that have appeared often in our recent work. While an institution’s management, directors, and auditors all have key roles to play in effective corporate governance, regulators—as outside assessors of the overall adequacy of the system of risk management—also have an important role in assessing risk management. The current financial crisis has revealed that many institutions had not adequately identified, measured, and managed all core components of sound risk management. We also found that for the limited number of large, complex institutions we reviewed, the regulators failed to identify the magnitude of these weaknesses and that when weaknesses were identified, they generally did not take forceful action to prompt these institutions to address them. As we have witnessed, the failure of a risk management system at a single large financial institution can have implications for the entire financial system. Second, while our recent work is based on a limited number of institutions, examples from the oversight of these institutions highlight the significant challenges regulators face in assessing risk management systems at large, complex institutions.
While the painful lessons learned during the past year should bolster market discipline and regulatory authority in the short term, history has shown that as the memories of this crisis begin to fade, the hard lessons we have learned are destined to be repeated unless regulators are vigilant in good times as well as bad. Responsible regulation requires that regulators critically assess their regulatory approaches, especially during good times, to ensure that they are aware of potential regulatory blind spots. This means constantly reevaluating regulatory and supervisory approaches and understanding inherent biases and regulatory assumptions. For example, the regulators have begun to issue new and revised guidance that reflects the lessons learned from the current crisis. However, the guidance we have seen tends to focus on the issues specific to this crisis rather than on broader lessons learned about the need for more forward-looking assessments and on the reasons that regulation failed. Finally, I would like to briefly discuss how our current regulatory framework has potentially contributed to some of the regulatory failures associated with risk management oversight. The current institution-centric approach has resulted in regulators all too often focusing on the risks of individual institutions. This has resulted in regulators looking at how institutions were managing individual risks, but missing the implications of the collective strategy that was premised on the institution’s having little liquidity risk and adequate capital. Regardless of whether the failures of some institutions ultimately came about because of a failure to manage a particular risk, such as liquidity or credit risk, these institutions often lacked some of the basic components of good risk management—for example, having the board of directors and senior management set the tone for proper risk management practices across the enterprise.
The regulators were not able to connect the dots, in some cases because of the fragmented regulatory structure. While regulators promoted the benefits of enterprisewide risk management, we found that they failed to ensure that all of the large, complex financial institutions in our review had risk management systems commensurate with their size and complexity so that these institutions and their regulators could better understand and address related risk exposures. This concludes my prepared statement. I would be pleased to answer any questions that you may have at the appropriate time. For further information about this testimony, please contact Orice M. Williams on (202) 512-8678 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Barbara Keller, Assistant Director; Nancy Barry, Emily Chalmers, Clayton Clark, Nancy Eibeck, Kate Bittinger Eikel, Paul Thompson, and John Treanor. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Financial regulators have an important role in assessing risk management systems at financial institutions. Analyses have identified inadequate risk management at large, complex financial institutions as one of the causes of the current financial crisis. The failure of the institutions to appropriately identify, measure, and manage their risks has raised questions not only about corporate governance but also about the adequacy of regulatory oversight of risk management systems.
GAO's objectives were to review (1) how regulators oversee risk management at these institutions, (2) the extent to which regulators identified shortcomings in risk management at certain institutions prior to the summer of 2007, and (3) how some aspects of the regulatory system may have contributed to or hindered the oversight of risk management. GAO built upon its existing body of work, evaluated the examination guidance used by examiners at U.S. banking and securities regulators, and reviewed examination reports and work papers from 2006-2008 for a selected sample of large institutions, and horizontal exams that included additional institutions. In January 2009, GAO designated the need to modernize the financial regulatory system as a high risk area needing congressional attention. Regulatory oversight of risk management at large, complex financial institutions, particularly at the holding company level, should be considered part of that effort. The banking and securities regulators use a variety of tools to identify areas of risk and assess how large, complex financial institutions manage their risks. The banking regulators--Federal Reserve, Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS)--and securities regulators--Securities and Exchange Commission (SEC) and the Financial Industry Regulatory Authority (FINRA)--use somewhat different approaches to oversee risk management practices. Banking examiners are assigned to continuously monitor a single institution, where they engage in targeted and horizontal examinations and assess risks and the quality of institutions' risk management systems. SEC and FINRA identify areas of high risk by aggregating information from examiners and officials on areas of concern across broker-dealers and by monitoring institutions. SEC and FINRA conduct discrete targeted and horizontal examinations.
The banking regulators focused on safety and soundness, while SEC and FINRA tended to focus on compliance with securities rules and laws. All regulators have specific tools for effecting change when they identify weaknesses in risk management at institutions they oversee. In the examination materials GAO reviewed for a limited number of institutions, GAO found that regulators had identified numerous weaknesses in the institutions' risk management systems before the financial crisis began. For example, regulators identified inadequate oversight of institutions' risks by senior management. However, the regulators said that they did not take forceful actions to address these weaknesses, such as changing their assessments, until the crisis occurred because the institutions had strong financial positions and senior management had presented the regulators with plans for change. Regulators also identified weaknesses in models used to measure and manage risk but may not have taken action to resolve these weaknesses. Finally, regulators identified numerous stress testing weaknesses at several large institutions, but GAO's limited review did not identify any instances in which weaknesses prompted regulators to take aggressive steps to push institutions to better understand and manage risks. Some aspects of the regulatory system may have hindered regulators' oversight of risk management. First, no regulator systematically looks across institutions to identify factors that could affect the overall financial system. While regulators periodically conducted horizontal examinations on stress testing, credit risk practices, and risk management for securitized mortgage products, they did not consistently use the results to identify potential systemic risks. 
Second, primary bank and functional regulators oversee risk management at the level of the legal entity within a holding company, while large entities manage risk on an enterprisewide basis or by business lines that cut across legal entities. As a result, these regulators may have only a limited view of institutions' risk management, or their responsibilities and activities may overlap with those of holding company regulators.
Ex-Im is an independent agency operating under the Export-Import Bank Act of 1945, as amended. Its mission is to support the export of U.S. goods and services, thereby supporting U.S. jobs. Ex-Im’s charter states that it should not compete with the private sector. Rather, Ex-Im’s role is to assume the credit and country risks that the private sector is unable or unwilling to accept, while still maintaining a reasonable assurance of repayment. As a result, when private-sector lenders reduced the availability of their financing after the 2007-2009 financial crisis, demand for Ex-Im products correspondingly increased. Ex-Im operates in several functional areas under the leadership of a chairman and president. Functional areas include the Small Business Group, Office of the Chief Financial Officer, Office of Resource Management, and Export Finance Group. The Export Finance Group is, in turn, subdivided into business units for certain types of transactions, such as Trade Finance, Transportation, Structured and Project Finance, and Renewable Energy. Ex-Im offers a number of export financing products, including direct loans, loan guarantees, and export credit insurance. Ex-Im makes fixed-rate loans directly to international buyers of goods and services. These loans can be short-term (up to 1 year), medium-term (more than 1 year up to 7 years and less than $10 million), or long-term (including transactions of more than 7 years, or of $10 million and higher and longer than 1 year). Ex-Im also guarantees loans made by private lenders to international buyers of goods or services, committing to pay the lenders if the buyers default. Like direct loans, loan guarantees may be short-, medium-, or long-term. Additionally, Ex-Im provides export credit insurance products that protect the exporter from the risk of nonpayment by foreign buyers for commercial and political reasons. This allows U.S.
exporters to offer foreign purchasers the opportunity to make purchases on credit. Credit insurance policies can cover a single buyer or multiple buyers and be short- or medium-term. Ex-Im’s short-term insurance covers a wide range of goods, raw materials, spare parts, components, and most services on terms, in most cases, of up to 180 days. Medium-term insurance policies protect longer-term financing to international buyers of capital equipment or services, covering one or a series of shipments. Ex-Im’s long-term products are often used to finance transportation projects, in project finance transactions, and for what Ex-Im calls “structured finance.” In dollar terms, transportation projects primarily support the purchase of aircraft. In project finance, Ex-Im lends to newly created project companies in foreign countries and looks to the project’s future cash flows as the source of repayment. Project finance transactions have repayment terms up to 14 years, and renewable energy transactions have repayment terms up to 18 years. In structured finance transactions, Ex-Im provides direct loans or loan guarantees to existing companies located overseas. Structured finance transactions generally have repayment terms of 10 years, but some transactions may have terms of 12 years. Congress has limited the extent of potential losses to the government from Ex-Im transactions by placing a cap on Ex-Im’s total amount of outstanding loans, guarantees, and insurance—the exposure limit. In the May 30, 2012 reauthorization, Congress increased Ex-Im’s exposure limit to $120 billion, with provisions for additional increases to $130 billion in 2013, and $140 billion in 2014. When Ex-Im authorizes additional loans, guarantees, and insurance, its exposure grows. When authorizations are repaid or cancelled, Ex-Im’s exposure is reduced (see fig. 1).
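The loan-term categories and the exposure accounting described above can be sketched in a few lines of Python. This is an illustrative sketch only: the thresholds come from the descriptions in this report, not from any Ex-Im system, and `classify_loan_term` and `forecast_exposure` are hypothetical helper names.

```python
def classify_loan_term(term_years, amount_millions):
    """Bucket a transaction by tenor and size, using the thresholds
    described in this report (illustrative, not Ex-Im's actual logic)."""
    if term_years <= 1:
        return "short-term"
    # Long-term: more than 7 years, or $10 million and higher
    # (and longer than 1 year); otherwise medium-term.
    if term_years > 7 or amount_millions >= 10:
        return "long-term"
    return "medium-term"


def forecast_exposure(current_exposure, new_authorizations,
                      repayments_and_cancellations):
    """Exposure grows with new authorizations and shrinks as
    transactions are repaid or cancelled (all amounts in $ billions)."""
    return current_exposure + new_authorizations - repayments_and_cancellations
```

For example, under this rule a 5-year, $15 million loan would be long-term, while a 5-year, $5 million loan would be medium-term.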
To forecast its exposure for the September 2012 Business Plan, Ex-Im’s Office of the Chief Financial Officer used a model that took the bank’s July 2012 actual exposure, added the amount of authorizations forecast by Ex-Im’s business units, and subtracted the estimated amount of repayments and cancellations based on the forecast authorizations and assumptions about the portfolio composition. Ex-Im’s actual exposure at the end of 2012 was $106.6 billion, and Ex-Im’s Business Plan forecasts exposure to increase to $120.2 billion at the end of 2013 and $134.9 billion at the end of 2014. Ex-Im’s annual authorizations have increased. Overall, in nominal dollars, annual Ex-Im authorizations rose from $14.4 billion in 2008 to $35.8 billion in 2012 (see fig. 2). Annual authorizations for new project and structured finance transactions increased from $1.9 billion in 2008 to $12.6 billion in 2012—accounting for almost half of Ex-Im’s 2012 long-term authorizations. Aircraft-related authorizations grew from $5.7 billion in 2008 to $11.9 billion in 2012—an increase of about 110 percent—and accounted for about one-third of Ex-Im’s authorizations in 2012. While long-term authorizations make up the largest part of Ex-Im’s portfolio in dollar terms, more than 80 percent of Ex-Im transactions are short-term. While Ex-Im’s business is generally driven by demand for its services from exporters, Congress has also mandated that Ex-Im support specific objectives. The Reauthorization Act requires Ex-Im to analyze its ability to meet, and its risk of loss from complying with, three congressional mandates. Since the 1980s, Congress has required that Ex-Im make available a certain percentage of its total export financing each year for small business. In 2002, Congress increased the small business financing requirement from 10 to 20 percent.
Congress further mandates that Ex-Im promote the expansion of its financial commitments in sub-Saharan Africa under Ex-Im’s loan, guarantee, and insurance programs. Finally, in its 2012 appropriations, Congress directed that “not less than 10 percent of the aggregate loan, guarantee, and insurance authority available to [Ex-Im]… should be used for renewable energy technologies or end-use energy efficiency technologies,” to which we refer as the renewable energy mandate. Ex-Im faces multiple risks when it extends export credit financing, including credit, political, market, concentration, foreign-currency, and operational risks. Ex-Im uses its resources to manage risks through (1) underwriting, (2) monitoring and restructuring, and (3) recovery of claims. Underwriting: During underwriting, Ex-Im first uses its Country Limitation Schedule to determine whether it can provide financing for transactions in the country. If the transaction meets the requirements of the Country Limitation Schedule, Ex-Im reviews the transaction and assigns it a risk rating based on its assessment of the creditworthiness of the obligors to establish whether there is a reasonable assurance of repayment. Ex-Im’s risk ratings range from 1 (least risky) to 11 (most risky). Ex-Im generally does not authorize transactions with risk ratings over 8. Monitoring and Restructuring: Ex-Im updates the risk ratings of medium- and long-term transactions above $1 million at least annually to reflect any changes in credit risk. Ex-Im also may restructure individual transactions with credit weaknesses to help prevent defaults and increase recoveries on transactions that default. Recovery of Claims: Ex-Im pays a claim when a loan that it has guaranteed or an insurance policy that it has issued defaults. Ex-Im tries to minimize losses on claims paid by pursuing recovery of the amount of claims it paid. For example, it can collect on the assets of the obligors or the collateral for a transaction.
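The two-step underwriting screen described above can be sketched as follows. This is a hedged illustration: `passes_underwriting_screen` is a hypothetical helper, and the only rules encoded are the ones stated in this report (the Country Limitation Schedule check and the 1-to-11 rating scale, with ratings above 8 generally not authorized).

```python
def passes_underwriting_screen(country_permitted, risk_rating):
    """Illustrative sketch of the underwriting screen described in this
    report; not Ex-Im's actual decision logic.

    country_permitted: whether the Country Limitation Schedule allows
        financing for transactions in the obligor's country.
    risk_rating: 1 (least risky) through 11 (most risky).
    """
    if not country_permitted:
        return False
    # Transactions rated above 8 are generally not authorized.
    return risk_rating <= 8
```

In practice the actual review weighs many more factors; the sketch only captures the two gating conditions named in the text.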
Ex-Im uses a loss estimation model to estimate credit subsidy costs and loss reserves and allowances for these risks. Ex-Im annually updates its loss model, and the model is subsequently reviewed by OMB. The expected loss model calculates loss rates based on historical data (the default and loss history of prior loan guarantee and insurance transactions as well as variables that can be used to predict defaults and losses, such as transaction amount and length, obligor type, product type, and risk rating) and qualitative factors (minimum loss rate, global economic risk, and region, industry, and aircraft portfolio obligor concentration risk) to account for risks associated with the agency’s current portfolio. The model calculates a loss rate (the percentage loss that Ex-Im can expect for each dollar of export financing) for each Ex-Im risk rating and product type. The loss rates produced by the model are then used to estimate future cash flows (repayments, fees, recoveries, and claims) for the business Ex-Im expects in the upcoming year. As of December 31, 2012, Ex-Im reported a default rate for its active portfolio of 0.34 percent. Ex-Im uses OMB’s credit subsidy calculator to determine the credit subsidy costs for existing transactions in its portfolio and projected future transactions based on its estimated future cash flows. These credit subsidy estimates are reported in the President’s budget. Ex-Im also uses the estimated future cash flows to calculate the loss reserves or allowances—financial reporting accounts for estimated losses—it needs for each new authorized transaction. Each year, Ex-Im adjusts this loss reserve or allowance amount for each transaction using updated estimates of future cash flows. In addition to these existing procedures, in January 2013, Ex-Im completed a comprehensive revision of its policies and procedures manual that covers each stage of risk management. 
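At its core, the reserve calculation described above pairs each transaction's financing amount with the loss rate the model produces for its risk rating and product type. The sketch below illustrates only that multiplication; the 2 percent loss rate is a made-up figure for illustration, not an actual output of Ex-Im's model, and `loss_reserve` is a hypothetical name.

```python
def loss_reserve(financing_amount, loss_rate):
    """Estimated loss reserve: the expected loss per dollar of export
    financing (the model's loss rate) times the amount financed."""
    return financing_amount * loss_rate

# Hypothetical example: a $100 million guarantee with a 2% loss rate
# would carry a reserve of about $2 million.
reserve = loss_reserve(100_000_000, 0.02)
```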
According to Ex-Im officials, Ex-Im also has been reviewing and responding to several recommendations on risk management from internal and external auditors, OMB, Ex-Im’s Inspector General, and GAO. Inspector General and GAO recommendations include performing and reporting of stress testing, retaining point-in-time historical data on credit performance, setting soft portfolio sublimits (informal thresholds for the portion of total exposure within different segments of the portfolio), and establishing a chief risk officer position. The Ex-Im Business Plan concluded that the exposure limits Congress placed on the bank in the Reauthorization Act were appropriate, but the exposure forecast model Ex-Im used to justify its conclusion relied on authorization forecasts and assumptions about repayments that have a degree of uncertainty that was not accounted for in Ex-Im’s forecast. Based on its estimates of authorizations and repayments, Ex-Im projects its exposure to rise to within $5.1 billion of its $140 billion limit by the end of 2014. Although this exposure is closer to its exposure limit than it has been at year-end in recent years, it supports Ex-Im’s conclusion that the congressional limits are appropriate. However, in developing its estimated authorizations for the Business Plan, Ex-Im used the same forecasting process it used for its recent budget estimates, which were between 11 and 42 percent below actual authorizations. Ex-Im used the same assumptions about repayments as it used in previous years, but did not check these key assumptions against previous experience or report the sensitivity of the model to its assumptions. Alternative forecasts using authorizations and repayments estimated based on previous Ex-Im results produce exposure estimates that would be higher than Ex-Im’s limit for 2014, raising concerns about Ex-Im’s conclusion that its limits are appropriate. 
Ex-Im’s Business Plan stated that the exposure limits for 2012, 2013, and 2014 were appropriate and sufficient for the bank to satisfy anticipated demand for Ex-Im financing under current market conditions. Ex-Im forecast that its exposure in 2013 and 2014 would be below its limits by $9.8 and $5.1 billion, respectively, preserving a small buffer for Ex-Im to respond to market changes and unforeseen increases in demand, allow for variance in its estimates, and signal to U.S. exporters and foreign buyers that Ex-Im support would be available for credit-worthy projects. Ex-Im forecast that its year-end exposure would be $105.8 billion in 2012, $120.2 billion in 2013, and $134.9 billion in 2014, below the congressionally determined exposure limits of $120 billion, $130 billion, and $140 billion, respectively (see fig. 3). The buffer between actual exposure and the exposure limit that Ex-Im’s Business Plan forecast for 2012, 2013, and 2014 is small in comparison with recent historical experience. Between 2003 and 2008, Ex-Im’s exposure hovered around $60 billion, well below its exposure limit. During the financial crisis in 2009, Ex-Im’s exposure began an upward trend, reducing the buffer between actual exposure and the exposure limit. By the end of 2011, Ex-Im’s exposure rose to 89 percent of its limit. At the end of 2012 Ex-Im’s exposure limit had increased to $120 billion, but Ex-Im’s exposure also increased and remained at 89 percent of the limit. Ex-Im’s Business Plan forecasts that further increases will bring exposure to 92 percent of its limit at the end of 2013 and 96 percent at the end of 2014. In dollars, Ex-Im forecasts that it will be $5.1 billion below its $140 billion exposure limit at the end of 2014.
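The utilization percentages quoted above follow directly from the exposure and limit amounts. A quick check, with `utilization_pct` as an illustrative helper name:

```python
def utilization_pct(exposure, limit):
    """Exposure as a percentage of the congressional exposure limit."""
    return 100 * exposure / limit

# Year-end figures in $ billions, from this report:
u_2012 = utilization_pct(106.6, 120)   # actual 2012: about 89 percent
u_2013 = utilization_pct(120.2, 130)   # forecast 2013: about 92 percent
u_2014 = utilization_pct(134.9, 140)   # forecast 2014: about 96 percent
```

The 2014 buffer in dollars is simply the difference between the $140 billion limit and the $134.9 billion forecast exposure, or $5.1 billion.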
According to Ex-Im, at the time of the exposure limit increase from $100 billion to $120 billion (on May 30, 2012), Ex-Im was approaching its maximum permitted exposure and was monitoring its authorizations and repayments but not delaying any authorizations. Although Ex-Im did not have to take such measures at that time, if Ex-Im were to approach its exposure limit in the future, it might need to take actions such as delaying authorizations to prevent exceeding its exposure limit. The accuracy of Ex-Im’s 2013 and 2014 exposure forecasts is uncertain, but the plan’s forecast underestimated Ex-Im’s 2012 exposure by about $900 million for the 2 months of 2012 remaining at the time it prepared the plan. Ex-Im prepared the plan’s 2012 year-end exposure estimate in August 2012. At that time, Ex-Im took its known exposure at the end of July 2012, $99 billion, and estimated the authorizations, repayments, and cancellations that would occur in August and September to determine the year-end 2012 exposure. Ex-Im forecast that $10 billion in additional authorizations in those months would be offset by $3.3 billion in repayments and cancellations—to result in an additional $6.7 billion in exposure in the next 2 months. However, by the end of September, Ex-Im’s actual exposure had increased by $7.6 billion, 13 percent more than forecast. Ex-Im’s authorization forecast for August and September was within 0.3 percent of the actual authorizations in those 2 months, suggesting that the forecast error resulted from an overestimate of the repayments and cancellations that reduce exposure. Ex-Im’s Business Plan forecast $38.4 billion in authorizations in 2013 and $42.7 billion in 2014, with 77 percent of the value of forecast authorizations consisting of long-term transactions including transportation and project and structured finance. 
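The arithmetic behind the plan's August 2012 year-end estimate, and the size of the miss, can be reproduced directly from the figures above (a sketch; the rounded inputs sum to $105.7 billion, within rounding of the plan's $105.8 billion figure):

```python
# Components of the plan's 2012 year-end exposure estimate, as described
# in the report (all figures in billions of dollars).
july_exposure = 99.0          # known exposure at the end of July 2012
forecast_auths = 10.0         # forecast Aug-Sep authorizations
forecast_reductions = 3.3     # forecast repayments and cancellations

forecast_increase = forecast_auths - forecast_reductions    # $6.7 billion
forecast_year_end = july_exposure + forecast_increase       # ~$105.7 billion

actual_increase = 7.6         # actual Aug-Sep increase in exposure
error_pct = (actual_increase - forecast_increase) / forecast_increase * 100

print(f"forecast year-end exposure: ${forecast_year_end:.1f} billion")
print(f"actual increase exceeded forecast by {error_pct:.0f} percent")
```

Because the authorization forecast was nearly exact, the $0.9 billion gap falls almost entirely on the repayment and cancellation estimate, which is the inference the report draws.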
According to Ex-Im’s Office of the Chief Financial Officer, Ex-Im used the same process to estimate authorizations for the Business Plan that it had used in previous years to estimate authorizations for its annual budget estimates. Ex-Im estimated long-term authorizations in the plan based on an analysis of its pipeline of in-house applications and expected applications, in which customers are in consultation with Ex-Im. For example, Ex-Im reviews aircraft production and delivery schedules to determine when financing for new aircraft is expected to be needed. Long-term transactions have a consultation and application period of between 6 months and 3 years. According to Ex-Im officials, the lead time for the largest project and structured finance transactions is generally at the upper end of this range, giving Ex-Im a more specific basis for its estimates within that time horizon. Ex-Im forecast the average size for individual long-term structured finance transactions in 2013 at $389 million, and $478 million in 2014. Individual transportation authorizations for aircraft included in the 2013 and 2014 forecasts average approximately $266 and $203 million, respectively. The remaining 23 percent of Ex-Im’s forecast authorizations are short- and medium-term. Ex-Im estimated these based on information gathered from Ex-Im partner banks—as well as Ex-Im officials’ own sense of overall market trends. Ex-Im short- and medium-term transactions averaged approximately $2.2 million in 2012. Ex-Im’s Business Plan asserts that the pipeline approach has been demonstrated to be the most effective forecasting methodology, but also notes that large swings in the amount of transportation and project and structured finance authorizations may occur due to fluctuations in overall market conditions or situations unique to the transaction. 
According to Ex-Im, it is less likely that authorizations for aircraft or larger project and structured finance authorizations would appear unexpectedly or not occur, but these transactions may be delayed and their amount may fluctuate. Smaller project and structured finance transactions and nonaircraft transportation authorizations may have shorter lead times of several months. Thus, they can be presented to Ex-Im and authorized within 2013 or 2014 without Ex-Im having been aware of them in August 2012, when it prepared the Business Plan. Ex-Im’s short- and medium-term transactions generally have shorter lead times than long-term transactions, increasing the uncertainty of Ex-Im’s forecast for these transactions in future years. However, because of their generally smaller size, it would take far more change in the number or size of these transactions to affect Ex-Im’s overall authorization or exposure estimates. Since the submittal of the plan in September 2012, the size of some Ex-Im forecast authorizations has fluctuated, as the plan noted could occur. Approximately 6 months after preparing the plan, in February 2013, Ex-Im management reviewed its 2013 authorization forecasts as part of its internal planning. As of March 28, 2013, Ex-Im reduced its 2013 estimate by $2.6 billion (6.9 percent) to $35.8 billion. Ex-Im reduced its 2013 transportation and structured finance authorizations but did not change other 2013 forecasts. Changes in Ex-Im’s forecast resulted from transactions no longer expected to be completed in 2013 (decrease of $5.7 billion), changes in the size of specific authorizations still forecast to occur (increase of $845 million), and new transactions not anticipated at the time of the August 2012 Business Plan forecast (increase of $2.2 billion). The forecast change in the total amount of authorizations in turn would affect Ex-Im’s forecast calculation of exposure. 
Using Ex-Im’s revised authorization estimate, the same model Ex-Im used to support its Business Plan forecast would now predict a reduction of $2.6 billion in exposure in 2013 and $1.6 billion in 2014. Ex-Im’s data on previous authorizations show that Ex-Im’s recent budget forecasts underestimated Ex-Im’s authorizations. Ex-Im’s 2012 budget estimate, submitted to Congress approximately 16 months before the end of that year, was 11 percent below the actual authorization figure. The 2012 estimate was closer to the actual authorization figures than Ex-Im’s forecasts in 2009, 2010, and 2011, which were between 33 and 42 percent below actual authorizations (see fig. 4). Ex-Im’s Business Plan notes that few could have predicted the financial crisis of 2007-2009, which led to a significant contraction in commercial lending and a sharp increase in demand for Ex-Im financing. Likewise, the European sovereign debt crisis led in 2011 to a continued need for Ex-Im financing at levels higher than originally estimated. Ex-Im officials asserted that their improved 2012 forecast shows they have begun to better account for the changed economic environment. However, any difference in the amount of authorizations also would affect the forecast amount of Ex-Im’s exposure. For example, Ex-Im’s 2013 and 2014 forecasts of exposure would increase if forecast authorizations were underestimated by the same 11 percent as for 2012. The same forecasting model Ex-Im used to support its Business Plan forecast would now predict Ex-Im’s exposure to be $2.2 billion higher at the end of 2013, and $5.9 billion higher at the end of 2014. The estimated total exposure at the end of 2014 would be $140.8 billion, greater than Ex-Im’s $140 billion exposure limit for 2014. Ex-Im prepared the Business Plan exposure forecast in August 2012 using the same model and assumptions about repayments that it had used in previous years. 
However, the model is sensitive to repayment assumptions and Ex-Im’s data no longer support the model’s assumption about the percentage of the portfolio that is short-term. To estimate the amount of repayments and cancellations that reduce Ex-Im exposure, Ex-Im made two key assumptions. Ex-Im assumed that 30 percent of authorizations each year were for short-term products that would be repaid within the year. Ex-Im assumed that the remaining nonshort-term authorizations would be repaid 10 percent at a time over 10 years. According to the Ex-Im staff who prepared the analysis, the 30 percent and 10-year assumptions were used in previous years and not revised for the Business Plan forecast. However, from 2002 through 2012, the actual percentage of Ex-Im authorizations that were short-term ranged from 24 to 37 percent, averaging 32 percent. These data were available to Ex-Im, but Ex-Im did not use them in its calculations. Furthermore, the percentage of Ex-Im’s portfolio that was short-term rapidly decreased in recent years—from 37 percent in 2010 to 31 percent in 2011 and to 25 percent in 2012. The data included in Ex-Im’s authorization forecast spreadsheet indicate that Ex-Im would calculate short-term percentages of 22 percent in 2013 and 23 percent in 2014. Using Ex-Im’s actual and forecast percentages of short-term authorizations in Ex-Im’s model results in a forecast of $123 billion in exposure for 2013 and $142 billion—in excess of the $140 billion exposure limit—for 2014. While Ex-Im assumes that nonshort-term exposures would be repaid over 10 years, the repayment terms for Ex-Im’s long-term products range from 7 to 18 years. Assuming a 9-year average repayment term decreases Ex-Im’s exposure by approximately $1 billion at the end of 2014. Assuming an 11-year average repayment term increases the estimate by approximately $1 billion. 
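To make the two repayment assumptions concrete, they can be sketched as a simple cohort calculation: a fixed share of each year's authorizations is short-term and repaid within the year, and each year's remaining authorizations are repaid 10 percent at a time over 10 years. This is a minimal illustration of the stated assumptions, not Ex-Im's actual spreadsheet model, and treating the starting exposure as a single amortizing cohort is an added simplification:

```python
def forecast_exposure(start_exposure, new_auths, short_share=0.30, term=10):
    """Year-end exposure for each forecast year (all figures in $ billions).

    start_exposure -- exposure outstanding at the end of the base year
    new_auths      -- forecast authorizations for each coming year
    short_share    -- share of authorizations assumed repaid within the year
    term           -- straight-line repayment term for the remainder (years)
    """
    balances = [start_exposure]   # remaining balance of each long-term cohort
    originals = [start_exposure]  # original size of each cohort
    results = []
    for auth in new_auths:
        long_term = auth * (1 - short_share)  # short-term nets out in-year
        balances.append(long_term)
        originals.append(long_term)
        # Each cohort repays 1/term of its original amount per year.
        balances = [max(0.0, b - o / term) for b, o in zip(balances, originals)]
        results.append(round(sum(balances), 1))
    return results
```

Lowering short_share from 0.30 toward the roughly 22 to 23 percent implied by Ex-Im's own data leaves more long-term exposure on the books each year, which is the direction of the $142 billion alternative estimate discussed in the report.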
In combination, varying the model’s assumptions about the percentage of short-term authorizations in Ex-Im’s portfolio (using a 30 percent assumption or actual historical data) and average repayment terms (9 or 11 years) results in a range of 2014 exposure estimates between $132 billion and $144 billion (see fig. 5). Although the authorization forecast is uncertain and key assumptions about repayments affect the results, Ex-Im did not conduct sensitivity analyses to assess and report the range of possible outcomes. In addition, Ex-Im did not update its model or reassess its process for estimating authorizations in light of previous underestimates. GAO guidance for estimating costs states that assumptions should be realistic, valid, and backed up by historical data to minimize uncertainty and risk. Further, forecast models should be assessed against historical experience to check their validity. In addition, a sensitivity assessment should be conducted for all estimates to examine the effect of changing assumptions, and this assessment should be documented and presented to management. Because Ex-Im did not address the uncertainty of its authorization estimates and the assumptions in its forecast model, the range of uncertainty of its exposure forecast shows that Ex-Im could have to take actions such as postponing planned authorizations to avoid exceeding its exposure limit. Ex-Im’s support for its evaluation of risk of loss was limited in the Business Plan, with some forecast data not provided in the plan pending approval of key analyses by OMB. While Ex-Im concluded there would be no change to its risk of loss for its subportfolios by product type or relating to the small business, sub-Saharan Africa, and renewable energy mandates, it did not provide conclusions on the overall risk of loss or the risk of loss by industry or key market. 
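The two-way sensitivity described above can be illustrated with a small parameter sweep over a simplified cohort model of the stated repayment assumptions. This is an approximation for illustration, not Ex-Im's actual model; the $106.6 billion starting point is the 2012 year-end exposure implied by the report's figures ($99 billion at end of July plus the $7.6 billion actual increase), and $38.4 billion and $42.7 billion are the plan's authorization forecasts:

```python
from itertools import product

def year_end_exposure(start, auths, short_share, term):
    # Short-term share of each year's authorizations is repaid in-year;
    # each long-term cohort repays 1/term of its original amount per year.
    balances, originals = [start], [start]
    for a in auths:
        lt = a * (1 - short_share)
        balances.append(lt)
        originals.append(lt)
        balances = [max(0.0, b - o / term) for b, o in zip(balances, originals)]
    return sum(balances)

# Sweep the two assumptions GAO varies: the short-term share of
# authorizations and the average repayment term.
for share, term in product([0.22, 0.30], [9, 10, 11]):
    e = year_end_exposure(106.6, [38.4, 42.7], share, term)
    print(f"short-term {share:.0%}, term {term}y: ${e:.1f} billion at end of 2014")
```

Even this rough sketch shows why a documented sensitivity assessment matters: plausible combinations of the two assumptions move the 2014 estimate by roughly ten billion dollars, enough to place it on either side of the $140 billion limit.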
Ex-Im also did not present data on historical performance in the Business Plan, although it reported performance data such as default rates in other reports. Additionally, Ex-Im does not routinely report the performance of its subportfolios relating to the small business, sub-Saharan Africa, and renewable energy mandates, although these mandates encourage Ex-Im to undertake transactions in these subportfolios and their performance differs from the overall Ex-Im portfolio. According to Ex-Im, the deadline for the Business Plan limited its ability to provide more detailed information on its projected risk of loss. The loss rates Ex-Im annually updates are key to its estimation of its risk of loss. OMB did not approve Ex-Im’s model that calculates these loss rates until September 24, 2012, 6 days before the plan’s mandated completion date of September 30, 2012. Instead of providing detailed information on its projected risk of loss, Ex-Im’s Business Plan described the components of its risk-management program (underwriting, monitoring, claims, recovery, and loss reserves) and discussed the two elements it used to assess risks (risk ratings and portfolio concentration). Ex-Im’s Business Plan stated that the risk rating element includes (1) the distribution of risks among transactions such as how many are low-, medium-, or high-risk; and (2) the individual transactions’ risk rating, which is the most relevant factor in predicting losses, according to the plan. Ex-Im’s Business Plan included four portfolio concentration measures—(1) the portfolio share of its top 10 countries, (2) the portfolio share of its top 10 obligors, (3) the distribution of its portfolio by geographic region, and (4) the distribution of its portfolio by industry. Ex-Im’s risk analysis in its Business Plan was limited because it did not provide a conclusion on the overall risk of loss, or risk of loss by industry or key market under the new exposure limit. 
While the plan provided historical data on overall risk rating and portfolio concentration in 2008 and 2012, such data did not reflect the projected changes of composition or the risks of Ex-Im’s subportfolios. Specifically, Ex-Im did not project the overall risk of loss under the new exposure limit in future years, but instead referred to historical data showing that the overall portfolio risk rating improved between 2008 and 2012. For example, the overall risk rating improved from 4.23 in 2008 to 3.85 in the third quarter of 2012 (on Ex-Im’s scale of 1-11, 1 is the least risky). Ex-Im did not project changes in industry concentration or provide a conclusion on how such changes would affect its risk of loss. Instead, Ex-Im presented a comparison of the industry distribution of Ex-Im’s portfolio in 2008 and 2012 and stated that the concentration in some industries increased from 2008 to 2012 while others decreased. For example, the aircraft industry marginally increased its share of the portfolio. Ex-Im also asserted that its loss estimation model accounted for such changes to determine the appropriate amount of loss reserves. Ex-Im did not provide information in the plan on projected changes in exposure composition by key market or a conclusion on how such changes would affect risk of loss. Instead, the plan discussed changes in portfolio concentration by regions, top 10 countries, and top 10 obligors between 2008 and 2012. The plan also compared Ex-Im’s portfolio distribution by region in 2008 and 2012, rather than by countries Ex-Im identified as key markets. Ex-Im did conclude in the Business Plan that it expected a favorable impact on risk of loss from changes in product mix as it expected its portfolio to shift towards long-term products, which have the lowest loss rates, according to the plan. However, Ex-Im did not provide information on the composition of exposure by product after this shift. 
Ex-Im concluded that its risk of loss associated with complying with the small business, sub-Saharan Africa, and renewable energy mandates under the new exposure limits would not increase. Specifically, Ex-Im concluded that there would be no increase to its risk of loss associated with complying with the small business mandate under the new exposure limit because a large share of Ex-Im’s small business transactions are short-term and highly diversified across industry sectors and geographic areas. In addition, Ex-Im shares the risks of some of these transactions with the originating banks and obtains collateral to secure the transactions. Ex-Im concluded that there would be no increase to its risk of loss associated with complying with the sub-Saharan Africa mandate under the new exposure limit. Ex-Im’s rationale was that it primarily engages with profitable companies in growing sectors and well-managed African governments. Ex-Im concluded that there would be no change to its risk of loss associated with complying with the renewable energy mandate. Ex-Im’s rationale was that its renewable energy transactions have default rates comparable to its long-term transactions, which have the lowest default rates, according to the plan. While Ex-Im’s strategic plan states that the bank uses default rates to measure risk of loss, the Business Plan did not present any historical default rate data on Ex-Im’s subportfolios. Again limited by its lack of final projected loss rates at the time of the Business Plan, Ex-Im did not present any projected loss data in the Business Plan—for example, the estimated credit subsidy costs of its portfolio in the future years—to support its conclusions. However, Ex-Im does report some financial data on historical performance in some of its existing reports, which provide some insight into potential losses. 
These data include default rates by subportfolio of product, key market, and industry; loss reserves and allowances; and overall weighted-average risk ratings. Examples of such reports include Ex-Im’s annual reports, audited financial statements, default rate reports, and internal portfolio status reports. To provide context for the Business Plan’s conclusions on risk of loss, we reviewed fiscal year-end financial data from Ex-Im’s active portfolio for 2008 and 2012. Using Ex-Im’s default rate methodology, we calculated the average default rates for 2008 and 2012 based on subportfolio-level data Ex-Im compiled at our request. Table 1 shows that the default rates of the subportfolios were generally lower than the overall default rate as of September 30, 2012, with the exception of the subportfolios of medium-term products and transactions with only small business participants. While Ex-Im’s average default rates overall and by subportfolio generally declined from 2008 to 2012, the declining trend may not be conclusive because Ex-Im’s portfolio at the end of 2012 contained a large volume of recent transactions that have not reached their peak default periods, as we recently reported. Recent transactions have had limited time to default and may not default until they are more seasoned. Further, Ex-Im does not retain point-in-time historical data on credit performance to allow it to compare defaults of recent and seasoned transactions at comparable points in time. We recently made a recommendation to address this weakness so that Ex-Im can conduct future analyses comparing the performance of its portfolio between years. Ex-Im concurred with this recommendation. While Ex-Im included an assessment of the risk of loss associated with implementing the three congressional mandates in its Business Plan as required by the Reauthorization Act, Ex-Im missed the opportunity to present any risk rating data to support its risk evaluations, though this was not required. 
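As context for the subportfolio comparison in table 1, a default-rate calculation of the general form Ex-Im reports, amounts in default divided by amounts disbursed, can be sketched as follows. The subportfolio figures below are invented for illustration and are not Ex-Im data:

```python
# Hedged sketch of overall and subportfolio default rates, computed as
# the amount in default divided by the amount disbursed, in percent.
# All dollar figures (in billions) are illustrative, not Ex-Im data.
subportfolios = {
    "long-term":   {"in_default": 0.2, "disbursed": 60.0},
    "medium-term": {"in_default": 0.4, "disbursed": 5.0},
    "short-term":  {"in_default": 0.1, "disbursed": 20.0},
}

def default_rate(p):
    return p["in_default"] / p["disbursed"] * 100

overall = (sum(p["in_default"] for p in subportfolios.values())
           / sum(p["disbursed"] for p in subportfolios.values()) * 100)

for name, p in subportfolios.items():
    print(f"{name}: {default_rate(p):.2f}%")
print(f"overall: {overall:.2f}%")
```

The sketch also shows why a subportfolio can exceed the overall rate even when the portfolio looks healthy in aggregate: a small, riskier book (here, medium-term) is diluted by larger, better-performing segments.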
Again limited by its lack of final projected loss rates—which are calculated using risk ratings of transactions as a key variable—at the time of the Business Plan, Ex-Im did not present any projected risk rating data in the plan. While the Business Plan did not include any risk rating data related to the three congressional mandates, to further examine Ex-Im’s conclusions on risk of loss associated with complying with the three mandates, we analyzed the weighted-average risk ratings for 2008 and 2012 related to these mandates as compiled by Ex-Im (see table 2). Our analysis shows that Ex-Im’s overall weighted-average risk rating declined between 2008 and 2012. However, transactions related to these three mandates generally had higher weighted-average risk ratings than the overall weighted-average risk ratings for both years, except for transactions that partially support small businesses. Ex-Im did not include risk ratings of transactions supporting the small business, sub-Saharan Africa, and renewable energy mandates in the Business Plan, and has not routinely reported the mandates’ performance (for example, default rates) at the subportfolio level. Ex-Im’s most recent strategic plan indicates that Ex-Im uses default rates as one of the metrics to measure risk performance. In addition, Ex-Im monitors default rates both internally and in quarterly default rate reports to Congress; however, Ex-Im does not include the default rates for transactions supporting these three congressional mandates in its reports. Ex-Im’s annual report documents the weighted-average risk rating of its overall portfolio, but does not provide further breakdown of the risk rating at the subportfolio level. Congress requires Ex-Im’s default rate reports to include default rates of its overall portfolio and by subportfolios of product type, industry sector, and key market. However, Ex-Im can analyze additional information about its subportfolios related to the three mandates. 
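The weighted-average risk ratings in table 2 weight each transaction's rating by its exposure. A minimal sketch with invented transaction data follows (Ex-Im's scale runs 1 to 11, with 1 least risky):

```python
# Exposure-weighted average risk rating. Transaction exposures (in
# billions) and ratings below are illustrative, not Ex-Im data.
transactions = [
    {"exposure": 50.0, "rating": 3},   # e.g., large long-term deals
    {"exposure": 30.0, "rating": 4},
    {"exposure": 20.0, "rating": 6},   # e.g., a riskier mandate subportfolio
]

total = sum(t["exposure"] for t in transactions)
weighted_avg = sum(t["exposure"] * t["rating"] for t in transactions) / total
print(f"weighted-average risk rating: {weighted_avg:.2f}")
```

Because the average is exposure-weighted, a mandate subportfolio with a worse (higher) rating but a small exposure share moves the overall average only slightly, which is why subportfolio-level reporting reveals risk that the overall figure masks.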
For example, according to Ex-Im, although it does not separately track the performance of the small business subportfolio, it tracks the performance of the working capital guarantee and short-term multibuyer insurance subportfolios, which are largely small business products and therefore serve as its proxy of the small business subportfolio. Similarly, Ex-Im does not track the performance of renewable energy transactions but has included them in the overall product category. Additionally, Ex-Im’s default rate report includes default rates broken out for countries in Africa, which can be used as a proxy for sub-Saharan Africa transactions. Our analysis indicates that the performance of the subportfolios related to the three congressional mandates can vary from that of the overall portfolio. For instance, the higher risk ratings of the subportfolios suggest these transactions generally are more risky than Ex-Im’s overall portfolio. Although it is not required by Congress, Ex-Im is able to report financial performance information on subportfolios supporting the three mandates, such as default rates and risk ratings. Because Ex-Im does not currently report financial performance data related to these mandates, Ex-Im officials explained that the agency specifically developed new analyses to address our data requests for default rates and weighted-average risk ratings at the subportfolio level. Congress directs that Ex-Im engage in transactions that support business activities fulfilling these three mandates while maintaining reasonable assurance of repayment. In addition, OMB guidance indicates that agencies should use comprehensive reports on the status of the credit financing portfolios to evaluate effectiveness and collect data for program performance measures such as default rates. 
Furthermore, federal banking regulator guidance suggests that banks should provide financial performance information by portfolio and specific product type to allow management to properly evaluate lending activities. For example, guidance from the Office of the Comptroller of the Currency and interagency guidance from federal banking regulators suggest that banks and other financial institutions should report performance information, such as default rates, loss severity, and delinquencies, and compare their performance with expected performance on an overall and subportfolio level. Financial performance information on Ex-Im’s subportfolios can help inform Ex-Im’s risk evaluation and risk-management activities. Moreover, reporting financial performance information would be consistent with federal internal control standards, which indicate that communications with external parties, including Congress, should provide information that helps them better understand the risks facing the agency. By not routinely analyzing and reporting performance information on these congressionally mandated transactions, Ex-Im limits its ability to internally evaluate the performance and default rates of transactions it is specifically mandated to maintain, which in turn hinders reporting of such performance to Congress. In the Business Plan, Ex-Im’s response to the reauthorization requirement to assess its resources was limited and further details were not included pending OMB review of Ex-Im’s 2014 budget request. From 2008 through 2012, Ex-Im experienced rapid growth in authorizations while its staff and administrative budget level remained relatively flat. The Business Plan reports that Ex-Im’s resources are strained and cannot sustain the bank’s current level of activity or meet expected demand in coming years. 
Although the Business Plan does not give specific details about the resources needed to manage Ex-Im’s growing authorizations, other bank documents outline estimated resource requirements in more detail. While Ex-Im’s support for small business has grown and Ex-Im forecasts continuing increases, Ex-Im’s mandated target will require it to increase small business authorizations by $2.4 billion (39 percent) between 2012 and 2014. The Business Plan reports that Ex-Im expects administrative resource constraints may prevent the bank from meeting its congressionally mandated target for small business export financing and lack of demand may prevent meeting the target for renewable energy export financing. The Business Plan states that recent growth has strained Ex-Im’s resources, particularly its underwriting and monitoring staff. Although the bank has been able to manage the growth through increased operating efficiencies, its current resources cannot sustain the level of activity expected in coming years. According to Ex-Im officials, although additional information was available, Ex-Im’s response regarding its resource needs was limited in the Business Plan because Ex-Im’s 2014 budget request had not yet been cleared by OMB at the time the plan was due to Congress. Ex-Im data presented in other documents demonstrate that while authorizations and exposure grew, its administrative budget and staff level remained relatively flat. From 2008 through 2012, Ex-Im’s annual authorizations grew nearly 150 percent. Its administrative budget increased 15 percent, from $78 million in 2008 to $90 million in 2012 (see fig. 6). Over the same period, Ex-Im’s staff level, as measured by full-time equivalents (FTE), increased less than 11 percent, from 352 in 2008 to 390 in 2012. In 2008, the ratio of authorizations to Ex-Im staff was $40.1 million per employee. In 2012, the ratio was $90.9 million per employee. 
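The workload figures above can be cross-checked with simple arithmetic; this back-of-the-envelope sketch uses only the reported FTE counts and per-employee authorization ratios:

```python
# Implied annual authorizations from the reported FTE counts and
# authorizations-per-employee ratios (per-FTE figures in $ millions).
fte_2008, per_fte_2008 = 352, 40.1
fte_2012, per_fte_2012 = 390, 90.9

auth_2008 = fte_2008 * per_fte_2008 / 1000   # ~$14.1 billion
auth_2012 = fte_2012 * per_fte_2012 / 1000   # ~$35.5 billion
growth = (auth_2012 / auth_2008 - 1) * 100   # roughly 150 percent

print(f"${auth_2008:.1f}B -> ${auth_2012:.1f}B ({growth:.0f}% growth)")
```

The implied growth of about 151 percent is consistent with the report's statement that annual authorizations grew nearly 150 percent while staff grew less than 11 percent.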
Ex-Im has requested additional administrative funds in recent years, but has not received the full amount of its requests. According to Ex-Im officials, initially the increased business primarily affected Ex-Im’s underwriting function. However, as transactions complete the underwriting phase officials expect workloads to increase significantly in other areas, such as legal and monitoring. In March 2013, we reported that Ex-Im had taken steps to address workload challenges, but had not developed benchmarks for the level of business it can properly support with a given level of resources. We recommended that Ex-Im develop workload benchmarks, monitor workloads against these benchmarks, and develop controls to mitigate risk when workloads approach or exceed these benchmarks. Ex-Im concurred with our recommendation. Ex-Im does not track the time employees spend on particular tasks. Some Ex-Im divisions are primarily focused on specific transactions—such as small business or transportation—enabling Ex-Im to use the staff and administrative funds allotted to these divisions as a proxy indicator of the resources invested in these transactions. However, other Ex-Im divisions also devote resources to these transactions. For example, Ex-Im staff may spend time underwriting or monitoring various types of transactions in different portfolios. According to Ex-Im officials, systems that track costs more precisely are expensive to develop and require time-intensive data capture. Ex-Im was able to provide the number of direct FTEs that support some of its mandated activities, but did not quantify the FTEs supporting bankwide activities that also support the individual mandates. The Business Plan did not discuss the bank’s ability to conduct economic impact assessments, as specifically mentioned in the reauthorization requirement. 
According to Ex-Im officials, details of the resources required for economic impact assessments were not included in the plan because Ex-Im was reviewing its economic impact methodology and drafting new guidelines and procedures at the time the plan was issued. However, Ex-Im officials stated that they considered the resources needed to conduct these assessments in the Business Plan’s assessment of resource needs, particularly for underwriting. Congress requires Ex-Im to consider the economic impact of its work and not to fund activities that will adversely affect U.S. industry. Ex-Im tests for adverse effects by performing an economic impact analysis. As we previously reported, Ex-Im uses a screening process to identify projects with the most potential to have an adverse economic impact, and then subjects the identified projects to a detailed analysis. According to Ex-Im officials, the bank currently has three staff members conducting economic impact analyses and plans to hire an additional employee to assist with these analyses because Ex-Im expects to conduct more large transactions that will likely require more economic impact assessments. The Business Plan describes Ex-Im’s information technology (IT) systems as antiquated and inflexible, noting that some systems are more than 30 years old. The plan also states that Ex-Im has begun a Total Enterprise Modernization project to address its IT issues, but notes that continued progress is contingent upon adequate funding. In January 2012, Ex-Im’s Inspector General found that Ex-Im’s IT infrastructure made it difficult for the bank to provide timely service, effectively manage and track its programs, measure progress, and increase productivity. The Inspector General also found that Ex-Im did not have practices to effectively manage its strategic planning, coordinate initiatives, and determine the best use of funds for improving IT support of its mission. 
Ex-Im has been addressing the IT issues identified by the Inspector General. According to initial responses to the Inspector General, dated January 10, 2012, a series of processing system projects were underway. In addition, Ex-Im hired a contractor to evaluate its IT systems and provide recommendations. The contractor’s major recommendation was to replace Ex-Im’s financial management system. Ex-Im officials expect the new financial system to be ready in October 2014. Ex-Im also has been consolidating different forms into a simplified online form that will guide applicants through the application process and allow them to sign forms, submit documents, and pay fees online. According to Ex-Im, a pilot form was demonstrated at Ex-Im’s annual conference in April 2013, but this project requires OMB approval, which Ex-Im expects by September 2013. Finally, Ex-Im has been updating its systems to assign each customer a unique identifier recognized across all systems. In its September 2012 update to the Inspector General on the status of IT improvements, Ex-Im projected full implementation by January 2013. However, in March 2013 Ex-Im told us that this upgrade was being tested and was expected to go into operation by September 2013. Congress has given Ex-Im explicit policy goals—which include specific targets for small business and environmentally beneficial exports—in addition to its general mandate to support domestic exports. Since the 1980s, Congress has required that Ex-Im make available a certain percentage of its export financing for small business. In 2002, Congress established several new requirements for Ex-Im relating to small business, including increasing the small business financing requirement from 10 to 20 percent of the total dollar value of Ex-Im’s annual authorizations. Related congressional directives have included requirements to create a small business division and define standards to measure the bank’s success in financing small businesses. 
Ex-Im’s support for small businesses has increased 92 percent over the past 5 years, from $3.2 billion in 2008 to $6.1 billion in 2012. However, these recent increases have not kept pace with the rising amount—caused by the increase in Ex-Im’s overall authorizations—needed to meet the 20 percent mandate. Ex-Im projects in its Business Plan that it will be challenged to meet the 20 percent mandate in 2013 or 2014 because the dollar amount of its overall growth will continue outpacing its small business activity. The 20 percent target equaled $4.9 billion in small business authorizations in 2010, the last year in which Ex-Im met the requirement. Based on Ex-Im’s projected authorizations, the 20 percent target will equal $8.5 billion in 2014. Therefore, to meet this mandate, Ex-Im will need to increase small business authorizations even further, by $3.6 billion (73 percent) in 4 years. This is also an increase of $2.4 billion (39 percent) from its 2012 small business authorizations (see fig. 7).

Small business authorizations accounted for less than 20 percent of the dollar amount of Ex-Im’s total authorizations in 2011 and 2012. However, measured in number of transactions, 87 percent of all authorizations approved by Ex-Im since 2008 directly supported small business exports. Ex-Im expects to increase its small business authorizations by $1.4 billion (22 percent) to approximately $7.7 billion between 2013 and 2014. Ex-Im achieved a similar increase in 2011, but saw a more modest increase of 1.4 percent in 2012 and projects a 2.5 percent increase in 2013. According to the Business Plan’s forecast, Ex-Im expects its total authorizations to exceed $42 billion in 2014, which would raise its small business mandate to $8.5 billion. Even if Ex-Im’s small business authorizations increase as expected in 2014, the bank still would fall short of its mandated target by more than $800 million.
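The mandate arithmetic above can be sketched in a few lines. This is an illustrative calculation only: the $42.5 billion total is an assumed figure, backed out of the report's statement that total authorizations exceeding $42 billion imply an $8.5 billion small business target.

```python
# Illustrative sketch of the 20 percent small business mandate arithmetic.
# Figures are in billions of dollars; the projected total is an assumption
# consistent with the $8.5 billion target stated in the text.
MANDATE_RATE = 0.20

projected_total_2014 = 42.5       # assumed total authorizations
projected_small_biz_2014 = 7.7    # Ex-Im's forecast small business authorizations

target_2014 = MANDATE_RATE * projected_total_2014        # mandated dollar target
shortfall_2014 = target_2014 - projected_small_biz_2014  # gap if both forecasts hold

print(f"2014 target: ${target_2014:.1f} billion; shortfall: ${shortfall_2014:.1f} billion")
```

Because the target is a fixed percentage of total authorizations, any growth in the overall portfolio raises the small business dollar target automatically, independent of small business demand.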
In addition to the rising target amount, Ex-Im officials noted that limited resources will affect its ability to meet the small business mandate. Ex-Im’s 2013 Congressional Budget Justification stated that achieving its forecast increase in small business transactions was contingent on an additional $14 million for administrative expenses. Ex-Im planned to use $7 million of the additional administrative funds it requested to support small business outreach and underwriting abilities. However, Ex-Im did not receive this increase. According to Ex-Im officials, processing small business transactions and bringing in new small business customers is resource intensive. Originating, underwriting, and servicing small business deals require more effort than other transactions because small businesses tend to have less exporting experience than larger businesses. Ex-Im’s Business Plan notes that small business transactions were approximately $1.8 million on average but required more of Ex-Im’s resources than other transactions. For each $1 billion of non-small-business authorizations—an amount sometimes achieved with a single Project Finance transaction—Ex-Im must generate $200 million in small business authorizations (about 122 transactions) to meet its small business mandate.

According to Ex-Im officials, 65 of its 390 FTEs are in the Small Business Group and directly support the bank’s efforts to meet its small business mandate target. Six additional FTEs from other divisions devote 50 percent of their time to small business transactions. Ex-Im also recently launched several new small business products and opened four new regional offices to support small business exporters. The Business Plan states that Ex-Im has about 25 field staff in 13 offices to support small businesses. Ex-Im also started a series of small business forums and webinars to assist exporters in understanding how the bank’s various products could help increase sales.
Small business transactions are also supported by dedicated IT resources. For example, Ex-Im has added a small-business portal to its website, which includes step-by-step assistance to exporters, videos, stories about the success of other exporters, and contact information for nearby Ex-Im export finance managers.

Since 1992, Congress has directed Ex-Im to report on its financing of environmentally beneficial exports. In recent years, Congress has provided a 10 percent financing target for environmentally beneficial exports, and in 2009 it directed that the target be specifically for two subcategories of environmentally beneficial exports—renewable energy or energy efficient end-use technologies. Despite a recent increase in its renewable energy authorizations, Ex-Im’s Business Plan indicates that it does not anticipate sufficient market demand to allow the bank to provide enough renewable energy authorizations to meet the target of 10 percent of its overall authorizations and still meet its requirement for reasonable assurance of repayment. Ex-Im’s support for renewable energy exports grew from $30 million in 2008 to $721 million in 2011 and is forecast to reach $1.1 billion in 2014. Although Ex-Im’s renewable energy authorizations generally increased since 2008, they have remained less than 3 percent of Ex-Im’s overall authorizations. Based on Ex-Im’s projected total authorizations for 2013 and 2014, Ex-Im would have to authorize $3.8 billion in renewable energy financing in 2013 and $4.3 billion in 2014 to meet the 10 percent target (see fig. 8). Ex-Im officials stated that additional administrative resources would not enable it to meet its renewable energy target, as its inability to meet the target results from a lack of demand for renewable energy export financing. Seven bank employees are directly involved in meeting Ex-Im’s renewable energy target, six in the Office of Renewable Energy and one in the Structured Finance Group.
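The 10 percent renewable energy target follows the same percentage-of-total logic. A minimal sketch, in which the projected totals of $38 billion and $43 billion are assumptions backed out of the $3.8 billion and $4.3 billion targets stated above:

```python
# Illustrative calculation of the 10 percent renewable energy target.
# Figures in billions of dollars; projected totals are assumptions
# inferred from the targets stated in the text.
TARGET_RATE = 0.10

projected_totals = {2013: 38.0, 2014: 43.0}  # assumed total authorizations
forecast_renewable_2014 = 1.1                # forecast renewable authorizations

targets = {year: TARGET_RATE * total for year, total in projected_totals.items()}
gap_2014 = targets[2014] - forecast_renewable_2014  # shortfall even under forecast growth

for year, target in sorted(targets.items()):
    print(f"{year} renewable energy target: ${target:.1f} billion")
print(f"2014 gap versus forecast: ${gap_2014:.1f} billion")
```

The gap dwarfs the forecast authorizations, which is why additional administrative resources alone would not close it.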
However, Ex-Im officials noted that a 2010 Department of Commerce report estimated the value of all U.S. renewable energy exports at $2 billion in 2009. Thus, if the bank had financed every U.S. renewable energy export that year, it still could not have met its renewable energy target.

For both small business and renewable energy transactions, the mandated authorization target is tied to total authorizations, which increase or decrease based on factors unrelated to Ex-Im’s performance in support of small business or renewable energy. OMB guidance directs agency leaders to set ambitious, yet realistic goals that reflect careful analysis of associated challenges and the agency’s capacity and priorities. Communicating this information to external stakeholders, such as Congress, that may have a significant impact on the agency achieving its goals is also consistent with federal internal control standards.

In addition to resources supporting renewable energy transactions, Ex-Im devotes resources to implementing its carbon policy, which was put in place in 2010 and developed in response to a lawsuit challenging Ex-Im’s compliance with provisions of the National Environmental Policy Act. The carbon policy (1) promotes renewable energy exports where carbon dioxide emission levels are very low to zero, (2) establishes a $250 million facility to promote renewable energy, and (3) calls for increased transparency in the tracking and reporting of carbon dioxide emissions. Although Ex-Im’s carbon policy was not mandated by Congress, the Business Plan notes that 2012 appropriations language requires Ex-Im to notify Congress of projects that will generate more greenhouse gases than bank-supported projects generated on average during the preceding 3 years. The Business Plan also states that Ex-Im may exceed this threshold as its level of activity increases. Ex-Im has three environmental engineers who directly support compliance with the carbon policy.
Additionally, the vice president of Ex-Im’s Environmental and Engineering Division and another employee responsible for legal policy spend 20 and 50 percent of their time, respectively, on carbon policy-related activities.

The sub-Saharan Africa mandate does not have quantifiable targets. This mandate requires Ex-Im, in consultation with the Secretary of Commerce and the Trade Promotion Coordinating Committee, to promote the expansion of its financial commitments in sub-Saharan Africa, establish an advisory committee to assist with the implementation of policies and programs to support this expansion, and report to Congress on efforts to improve relations with relevant regional institutions and coordinate with U.S. agencies pursuant to the African Growth and Opportunity Act. Two employees from Ex-Im’s Office of African Development are directly involved in meeting the requirements of the sub-Saharan Africa mandate, and half of the duties of an Ex-Im vice chairman are also related to this mandate. Ex-Im reports that it has met the requirements of this mandate and expects to continue to meet it. Ex-Im’s efforts include establishing an advisory committee to assist the Board of Directors in meeting the sub-Saharan Africa mandate and creating a $100 million Africa Initiative to make insurance available for exports to sub-Saharan African countries that otherwise would not be eligible for Ex-Im support. From 2008 to 2012, Ex-Im’s authorizations supporting the sub-Saharan Africa mandate increased from $575.5 million to $1.5 billion, and are projected to decline to about $1 billion in 2013 before increasing again to approximately $1.8 billion in 2014.

Ex-Im has experienced enormous growth in its authorizations and exposure in recent years, challenging its ability to plan for and manage its portfolio.
While Ex-Im may not have been able to anticipate the effect of events like the 2007-2009 financial crisis on its portfolio, the bank also has not reacted to the changed environment by taking steps to account for the uncertainty of its authorization forecasts and reassessing its exposure forecast model and assumptions. These assumptions and forecasts should be supported by historical data and experience. In addition, a sensitivity assessment of the effect of these assumptions should be presented to management.

Furthermore, Ex-Im is a demand-driven institution, but Congress has placed specific requirements on the bank’s portfolio to support small business, sub-Saharan Africa, and renewable energy. The risk profile of transactions supporting the three mandates differs from the bank’s overall risk profile, but Ex-Im has not routinely documented the risk effect of these mandates for its own management or for Congress. Reporting such information would be consistent with OMB and federal banking regulator guidance as well as federal internal control standards. In addition, the Reauthorization Act and appropriations language reflect important national priorities and congressional interest in supporting small businesses and promoting renewable energy. However, because these requirements are linked directly to the bank’s total authorizations, the targets are volatile—subject to fluctuation caused by changes in overall demand for export financing. Recently, the bank’s growth has produced steadily rising targets that could lead the bank to devote an increasing portion of its limited staff and resources to activities that are particularly time- and resource-intensive, such as small business authorizations, or to set goals that may not be achievable in the current market, such as providing a set amount of renewable energy financing that exceeds market demand.
OMB criteria indicate that agency targets should be ambitious, yet realistic, and reflect careful analysis, factors affecting outcomes, and agency capacity and priorities. It is important to communicate to external stakeholders, such as Congress, the effect of these mandated targets on Ex-Im operations and the potential impacts percentage-based targets may have on the agency’s resources and ability to achieve its goals.

To provide Congress with the appropriate information necessary to make decisions on Ex-Im’s exposure limits and targets, we recommend that the Chairman of the Export-Import Bank of the United States take the following four actions:

To improve the accuracy of its forecasts of exposure and authorizations, Ex-Im should (1) compare previous forecasts and key assumptions to actual results and adjust its forecast models to incorporate previous experience; and (2) assess the sensitivity of the exposure forecast model to key assumptions and authorization estimates and identify and report the range of forecasts based on this analysis.

To help Congress and Ex-Im management understand the performance and risk associated with its subportfolios of transactions supporting the small business, sub-Saharan Africa, and renewable energy mandates, Ex-Im should (3) routinely report financial performance information, including the default rate and risk rating, of these transactions at the subportfolio level.

To better inform Congress of the issues associated with meeting each of the bank’s percentage-based mandated targets, Ex-Im should (4) provide Congress with additional information on the resources associated with meeting the mandated targets.

We provided a draft of this report to Ex-Im for comment. Ex-Im concurred with all of our recommendations, and stated that it would incorporate our recommendations into preparation of subsequent reports for Congress. Ex-Im further clarified that it would never exceed the exposure limit set by Congress.
Ex-Im stated that it monitors exposure on a monthly basis and, if necessary, on a daily basis and would put in place the necessary processes and procedures to prevent exceeding the limit. We did not intend to imply that Ex-Im would exceed its limit, but rather that not accounting for forecast uncertainty could lead to Ex-Im having to take such steps to avoid exceeding the limit. We slightly modified the language in the summary of our key findings to clarify this point.

We are sending copies of this report to appropriate congressional committees and the Chairman of the U.S. Export-Import Bank. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4802 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objectives were to examine the extent to which the Export-Import Bank’s (Ex-Im) Business Plan and analyses (1) justify bank exposure limits; (2) evaluate Ex-Im’s risk of loss associated with the increased exposure limit, the changing composition of exposure, and compliance with congressional mandates; and (3) analyze the adequacy of Ex-Im resources to manage authorizations and comply with congressional mandates under the proposed exposure limits. For all objectives, we reviewed and analyzed Ex-Im’s response in the Business Plan.

To assess the extent to which Ex-Im’s Business Plan and analyses justify exposure limits, we reviewed the spreadsheet model Ex-Im used to forecast exposure, and the source data on authorizations and repayments Ex-Im entered into the model. We met initially with Ex-Im staff who prepared the spreadsheet model to review the Ex-Im spreadsheet to understand its structure and formulas. We then received a copy of the model and reviewed it independently.
Following our independent review, we met a second time to discuss more detailed questions about the structure, data, and assumptions contained in the model. To assess the reliability of the exposure model, we compared its August 2012 projections of what exposure would be at the end of September 2012 with the actual results in Ex-Im’s annual report. To understand the development of the source data on authorizations used in the model, we met individually with Ex-Im officials from its various business units who prepared the estimates. To assess Ex-Im’s methods and data in follow-up to these meetings, we requested and reviewed additional written detail on the methodology used for the authorization estimates and source data for individual estimates of long-term authorizations. We reviewed these source data to determine the forecast timing and average size of the estimates, and checked the forecast authorization size against the actual authorization size for authorizations that occurred through March 2013. To assess the performance of Ex-Im’s authorization forecast procedures, we compared previous years’ projections with actual results. We additionally reviewed Ex-Im’s revised authorization estimates, compared the original and revised estimates, and assessed the effect of the revised estimates on Ex-Im’s exposure projection by inputting the revised authorization estimates into Ex-Im’s spreadsheet model. To assess Ex-Im’s forecast of repayments, we compared the assumption Ex-Im used in the spreadsheet to previous data on the short-term percentage of the Ex-Im portfolio. We then calculated Ex-Im’s exposure under alternative scenarios based on these previous actual percentages and alternative assumptions about repayment terms. Finally, we assessed the procedures and assumptions Ex-Im used in its Business Plan forecast of exposure against GAO criteria for developing estimates.
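The alternative-scenario exposure calculation described above can be illustrated with a simple sketch. This is not Ex-Im's actual spreadsheet model; all dollar figures and repayment rates below are hypothetical, chosen only to show how varying the repayment assumption produces a range of year-end exposure projections rather than a single point estimate.

```python
# Hypothetical sensitivity sketch: project year-end exposure under
# alternative assumptions about the share of the portfolio repaid
# during the year. All figures are invented, in billions of dollars.
start_exposure = 100.0      # exposure at the start of the year
new_authorizations = 40.0   # forecast authorizations during the year
exposure_limit = 140.0      # statutory exposure limit

# Alternative repayment-rate assumptions (fraction of exposure repaid per year).
repayment_scenarios = {"high repayment": 0.30, "baseline": 0.25, "low repayment": 0.15}

projections = {}
for name, rate in repayment_scenarios.items():
    repayments = rate * start_exposure
    projections[name] = start_exposure + new_authorizations - repayments

for name, exposure in sorted(projections.items(), key=lambda kv: kv[1]):
    status = "exceeds limit" if exposure > exposure_limit else "within limit"
    print(f"{name}: ${exposure:.0f} billion ({status})")
```

Reporting the full range of projections, rather than the baseline alone, is the kind of sensitivity assessment GAO's estimating criteria call for.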
To assess the extent to which Ex-Im’s Business Plan and analyses evaluate the risk of loss associated with Ex-Im’s increased exposure limit, the changing composition of exposure, and compliance with congressional mandates, we reviewed agency data and documentation—including Ex-Im’s financial performance data, annual reports, and quarterly default rate reports. We also reviewed relevant GAO and Ex-Im Inspector General reports and interviewed Ex-Im officials responsible for risk evaluation. To further examine Ex-Im’s risk of loss evaluation in the Business Plan, we examined weighted-average risk ratings from fiscal years 2008 to 2012 that Ex-Im compiled at our request for subportfolios supporting congressional small business, sub-Saharan Africa, and renewable energy mandates. We compared these subportfolio risk ratings to Ex-Im’s overall portfolio risk ratings for 2008 and 2012. In addition, we examined default rate data compiled at our request by Ex-Im for these subportfolios and calculated fiscal year-end default rates for Ex-Im’s subportfolios for 2008 and 2012. We compared these default rate data to Ex-Im’s overall portfolio default rate for 2008 and 2012. To assess the reliability of these data, we reviewed and checked them against previous Ex-Im reporting. Additionally, we consulted the data review prepared for another recent GAO report on Ex-Im. We found the data to be sufficiently reliable for the purposes of providing context for the financial performance of the overall portfolio and subportfolios in each fiscal year. To evaluate Ex-Im’s risk management, we compared its risk management and analysis practices against federal banking regulator guidance on financial performance reporting, Office of Management and Budget guidance on federal credit programs, and our standards for internal control.
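The subportfolio comparison described above amounts to computing dollar-weighted metrics per mandate. A minimal sketch follows; the subportfolio names track the mandates discussed in this report, but every amount, risk rating, and default flag is invented for illustration.

```python
# Hypothetical sketch of subportfolio-level reporting: a dollar-weighted
# average risk rating and a dollar-weighted default rate per mandate
# subportfolio. All transaction rows are invented for illustration.
transactions = [
    # (subportfolio, amount in $ millions, risk rating (1 best - 11 worst), defaulted?)
    ("small_business", 2.0, 5, False),
    ("small_business", 1.5, 6, True),
    ("renewable_energy", 50.0, 4, False),
    ("sub_saharan_africa", 30.0, 7, False),
    ("sub_saharan_africa", 20.0, 8, True),
]

def subportfolio_metrics(rows):
    totals = {}
    for sub, amount, rating, defaulted in rows:
        t = totals.setdefault(sub, {"amount": 0.0, "weighted_rating": 0.0, "defaulted": 0.0})
        t["amount"] += amount
        t["weighted_rating"] += rating * amount
        if defaulted:
            t["defaulted"] += amount
    return {
        sub: {
            "avg_rating": t["weighted_rating"] / t["amount"],
            "default_rate": t["defaulted"] / t["amount"],
        }
        for sub, t in totals.items()
    }

report = subportfolio_metrics(transactions)
for sub, m in sorted(report.items()):
    print(f"{sub}: weighted-average rating {m['avg_rating']:.1f}, "
          f"default rate {m['default_rate']:.0%}")
```

Comparing each subportfolio's figures against the overall portfolio's is what makes differing risk profiles, such as those GAO observed for the mandate transactions, visible.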
To assess the extent to which Ex-Im’s Business Plan and analyses analyze the adequacy of Ex-Im resources to manage authorizations and comply with congressional mandates under the proposed exposure limits, we reviewed Ex-Im responses to previous GAO and Inspector General audit reports. We also reviewed relevant Ex-Im documents, including the Ex-Im Charter, 2010-2015 Strategic Plan, Small Business Reports, Government Performance and Results Act Performance Reports, Ex-Im’s carbon policy and environmental procedures, Ex-Im’s economic impact procedures and methodological guidelines, Congressional Budget Justifications, annual reports, 2009-2012 Human Capital Plan, draft 2013-2015 Human Capital Plan, and Ex-Im’s workforce and full-time equivalent data. To assess the reliability of these data, we reviewed and checked them against previous Ex-Im reporting. Additionally, we consulted the data review prepared for another recent GAO report on Ex-Im. We found these data to be sufficiently reliable for the purposes of describing the growth of Ex-Im’s business, the size of its workforce, and the amount of administrative funds requested and received from Congress. We also reviewed relevant GAO, Congressional Research Service, and Ex-Im Inspector General reports and met with officials from Ex-Im and Ex-Im’s Office of Inspector General. We compared Ex-Im’s planning documents against criteria established by GAO, the Office of Personnel Management, and the Office of Management and Budget.

We conducted this performance audit from November 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, Juan Gobel, Assistant Director; Joshua Akery; Anna Chung; Martin De Alteriis; Risto Laboski; Grace Lui; Yesook Merrill; Barbara Roesmann; and Michael Simon made key contributions to this report. Jena Sinkfield provided technical assistance.

Ex-Im helps U.S. firms export goods and services by providing a range of financial products. Following the 2007-2009 financial crisis, increased demand resulted in rapid increases in Ex-Im's portfolio and exposure. The Export-Import Bank Reauthorization Act of 2012 reauthorized Ex-Im through fiscal year 2014 and, as a condition of raising Ex-Im's exposure limit in 2013, required Ex-Im to prepare a report with a business plan and analyses of key operational elements. The act also directed GAO to analyze the Business Plan. This report discusses the extent to which Ex-Im's Business Plan and analyses (1) justify bank exposure limits; (2) evaluate the risk of loss associated with the increased exposure limit, changing composition of exposure, and compliance with congressional mandates; and (3) analyze the adequacy of Ex-Im resources to manage authorizations and comply with congressional mandates. GAO reviewed Ex-Im's Business Plan, analyses, and other reports, and interviewed Ex-Im officials.

While the Export-Import Bank (Ex-Im) Business Plan reported that Ex-Im's exposure limits were appropriate, the forecasting process used to reach this conclusion has weaknesses. Congress increased the Ex-Im exposure limit--the limit on Ex-Im's total aggregate outstanding amount of financing--to $120 billion in 2012, with provisions for additional increases to $130 billion in 2013 and $140 billion in 2014. Although Ex-Im's forecast model is sensitive to key assumptions, GAO found that Ex-Im did not reassess these assumptions to reflect changing conditions or conduct sensitivity analyses to assess and report the range of potential outcomes.
GAO used historical data in lieu of these assumptions and found that Ex-Im's forecast of exposure could be higher than the limit set by Congress for 2014. GAO's cost guidance calls for agencies' assumptions and forecasts to be supported by historical data and experience, and a sensitivity analysis, which can assess the effect of changes in assumptions. Because Ex-Im has not taken these steps, the reliability of its forecasts is diminished. This is of particular concern because Ex-Im projects that its outstanding financing in the future will be closer to its exposure limit than it has been historically. Consequently, any forecast errors could result in the bank having to take actions, such as delaying financing for creditworthy projects, to avoid exceeding its limit.

The Business Plan provided limited analysis of Ex-Im's risk of loss. First, Ex-Im did not provide some forecast data because of pending Office of Management and Budget (OMB) approval of key analyses. For example, Ex-Im did not include conclusions on Ex-Im's overall risk of loss and risk by industry. Second, Ex-Im included only limited analysis to support its conclusions that changes in its portfolio--including subportfolios of transactions supporting congressional mandates for small business, sub-Saharan Africa, and renewable energy--would not affect its risk of loss. In addition, Ex-Im has not routinely analyzed or reported the risk rating and default rate of subportfolios that respond to these mandates, although their performance may differ from the overall portfolio. OMB and banking regulator guidance call for entities, including federal agencies, to be able to provide comprehensive information by subportfolio, product, and other financial performance metrics. By not routinely analyzing and reporting financial performance for mandated transactions, Ex-Im decreases its ability to evaluate such performance at the subportfolio level and inform Congress of related risks.
The Business Plan provided limited analysis of the adequacy of Ex-Im's resources and ability to meet congressional mandates. From 2008 through 2012, Ex-Im's administrative resources remained relatively flat as its portfolio grew. Ex-Im does not expect to meet its small business or renewable energy mandate targets in 2013 or 2014. These mandate targets are fixed to a percentage of the dollar value of Ex-Im's total authorizations. Although Ex-Im has dedicated resources to support these mandates, as Ex-Im authorizations have grown, the growth in mandate targets has outpaced Ex-Im's increasing support. Ex-Im projects that the targets will continue to outpace its growth in support through 2014. Mandate transactions also are resource-intensive and Ex-Im's ability to expand its renewable energy portfolio may be constrained by the size of the overall market. Communicating the effect of percentage-based targets on Ex-Im's resources and ability to achieve its goals to external stakeholders, such as Congress, is consistent with federal internal control standards.

Ex-Im should (1) adjust its forecasting model based on previous experience, (2) assess and report the sensitivity of the exposure forecast model to key assumptions and estimates, (3) routinely report the financial performance of subportfolios supporting congressional mandates, and (4) provide Congress with additional information on the resources associated with meeting mandated targets. Ex-Im concurred with our recommendations.
Section 108 of ATSA (Pub. L. No. 107-71) required TSA to establish a program permitting the more than 400 commercial airports using federal passenger and checked baggage screeners to apply to use private, rather than federal, screeners. Beginning on November 19, 2004, all commercial airports with federal security screening will be eligible to apply to opt out of using federal screeners. In addition to assessing airport applications for using private screeners, TSA will select qualified private screening contractors (including airports that seek to apply to be private contractor screening companies) that meet ATSA’s and TSA’s requirements to conduct screening. Nongovernmental employees, such as airport directors or their representatives, would be able to participate as advisors in the selection process, as long as the airport is not participating as a qualified screening company. ATSA requires private screeners selected to handle screening to meet the same hiring and training qualifications as federal screeners. ATSA also mandated that the private screeners’ pay and benefit levels be not less than those of their federal counterparts. Furthermore, ATSA required that the contractor companies be owned and controlled by U.S. citizens.

ATSA also required TSA to establish a 2-year pilot program using qualified private contractors to screen passengers and checked baggage at not more than one airport from each security category. TSA selected five airports to participate in the pilot program. The private passenger and checked baggage screening contractors selected for the pilot program had to comply with federal passenger and checked baggage screening standard operating procedures. In order to obtain an objective evaluation of the pilot program, TSA retained the services of an independent consulting firm.
The consultants were charged with developing an evaluation methodology; conducting performance evaluations and comparisons between the five participating pilot program airports and federally screened airports; and developing a process to help TSA determine if a private screening company can meet ATSA’s performance standard for the opt-out program, which is to provide a level of screening services and protection equal to or better than that provided by federal screeners. The consultants found that, in general, the private screening contractors met ATSA’s performance standard. However, the consultants cautioned that the findings must be viewed in light of several key factors. For example, the consultants reported that the small number of pilot airports seriously limited the program’s usefulness as a true scientific pilot and the generalizability of the findings to future privately screened airports. Additionally, the performance data available for review and analysis were limited, according to the consultants.

Outside of the pilot program, any commercial airport may apply to opt out beginning on November 19, 2004. In accordance with ATSA, an airport operator may submit to TSA an application to have the screening of passengers and checked baggage at an airport carried out by the screening personnel of a qualified private screening company, under a contract entered into with TSA. TSA will make the final approval of any application submitted and reserves the right to consider airport-specific threat intelligence and an airport’s record of compliance with security regulations and security requirements to determine the timing of the transition to private screening. TSA may also delay an airport’s transition to private screening based on such factors as peak travel season and the total cost of providing screening services at an airport.
The five airports selected to participate in the pilot program have decided to continue using private screeners and will not have to apply for the opt-out program beginning in November 2004.

TSA created a program office in October 2004 to provide financial oversight, ongoing operational support, communications, and transition planning for airports that apply to opt out of using federal screeners. This office was allocated 12 full-time-equivalent staff and, as of November 4, 2004, had 10 full-time staff on board. TSA plans to fund the opt-out program from the same budget line items as federal screening operations in order to provide flexibility on the number of airports that can participate in the program. The costs of contracts with private screening companies are to be covered by the funds that would otherwise pay for the displaced federal screening operations.

The conference report accompanying the fiscal year 2005 Department of Homeland Security Appropriations Act (H.R. Conf. Rep. No. 108-774) allocated appropriations of $2.424 billion to cover personnel, compensation, and benefits for passenger and checked baggage screeners and to cover screeners at the five pilot airports. This allocation represents an increase of more than $200 million over the fiscal year 2004 enacted level of $2.2 billion. According to TSA, this increase is necessary to fully fund compensation and benefits at the identified staffing levels. The requested level of funding will support screener salaries and management at all commercial airports, whether federalized or privatized. Also, the conference report allocated just under $130 million for the five pilot airports, an increase from $119 million in fiscal year 2004. This funding is based on an estimate of resources necessary to maintain screening at the current five pilot airports.
TSA issued its first written guidance for the opt-out program in June 2004, in an effort to provide airport operators and the aviation community with information to gauge their level of interest in applying to participate in the program. The guidance, posted on TSA’s Web site and distributed to airport associations, provides information in three broad areas: legislative requirements, program planning approach, and guidance on key issues. A summary of the guidance follows. TSA’s guidance addresses ATSA’s legislative requirements, explaining how applications from qualified private screening contractors would be approved by applying the act’s statutory standards, including the level of screening services and protection provided at the airport and the requirement that contractor companies be owned and controlled by U.S. citizens. TSA’s guidance also includes information for airports on how TSA’s screening pilot program was conducted, how the pilot program was evaluated by a private consulting organization, and the program improvements suggested by the consultants. The guidance discusses program structure, including costs. The guidance notes, for instance, that federal and contract screeners would be funded from the same pool of money and that costs for airports enrolled in the program would be determined based on current federal screener operations and TSA’s activity-based cost studies that estimate the costs of running an airport’s screening operations. TSA’s guidance states that these cost studies would ensure that costs proposed by potential contractors are in line with federal estimates. The guidance also states that airport security screeners do not have the right to strike, whether TSA or a private contractor employs them. ATSA directs TSA to receive applications from airports intending to apply to opt out beginning November 19, 2004.
The guidance states that TSA intends to close the application period within 3 weeks of that time and will reopen the application cycle in November 2005. It also states that because ATSA does not identify specific criteria by which TSA is to evaluate airport applications, TSA is currently developing and reviewing potential criteria for determining which airports will be approved and the sequence of their transition from federal to private screeners. The guidance notes that TSA reserves the right to consider participation in the program in light of an airport’s record of compliance with security regulations and requirements. TSA outlines three steps related to selecting contractors to perform screening services: (1) TSA submits requests for information to the aviation industry requesting input on acquisition issues, qualification criteria, and information contractors would need as part of a proposal process; (2) TSA develops a qualified vendors list to facilitate the vendor selection process; and (3) TSA selects a private contractor to provide screening services in airports selected for the screening partnership program. The guidance describes the roles and responsibilities of all major stakeholders, including airport directors and federal security directors (FSDs). TSA envisions both FSDs and airport authorities as having “important roles” in the selection of the private contractor. The technical aspects of each private contractor’s screener contract will be managed locally by the FSDs. TSA will set performance measurement standards, and each contractor will implement them. TSA seeks to give private contractors “a significant amount” of operational control in such areas as assessment and screener technical training, scheduling, recurrent training, and administrative functions.
TSA said it would take “necessary steps” to enable FSDs and private contractors to implement operational flexibilities, including conducting recruiting, assessment, and screener technical training at the local level while ensuring that national standards are met and within TSA’s parameters. TSA is also in the process of developing a performance measurement approach for the opt-out program and contractors. TSA is considering several types of performance measures related to security effectiveness, customer service, and cost. Specific measures and baselines have not been finalized. In late October 2004, TSA finalized and publicly released guidelines providing criteria for determining how and when private screening contractors will be evaluated and selected to participate in the opt-out program. Any contractor that meets TSA and ATSA criteria (such as being owned and controlled by U.S. citizens) may apply to the program. While TSA does not require companies seeking to become screening contractors to have prior experience in the business, such experience is preferred. Because TSA does not know how many airports will apply to participate in the opt-out program in 2004, the agency cannot yet determine how many private contractors may be hired to perform screening services. The contractor-selection process involves three phases, to be completed between November 2004 and May 2005. The phases and key milestones included in the guidelines are as follows:

Phase I: Develop qualified offeror list (November 2004-January 2005). Offerors seeking to be pre-qualified by TSA as potential contractors must meet TSA and ATSA requirements, including being owned and controlled by U.S. citizens and being able to provide screening services at a level equal to or greater than that provided by the federal government. Offerors must agree to provide compensation and benefits at a level not less than that provided by the federal government to the federal screener workforce and to abide by TSA’s workforce transition rules, including compliance with priority employment rules for TSA’s federal employees displaced by privatization.

Phase II: Develop qualified vendor list (February 2005). TSA issues a request for proposal to contractors pre-qualified under Phase I and will develop a qualified vendor list from this population. Offerors will be required to present technical and cost capabilities, and qualified offerors will be selected based on their ability to provide service in a given geographical region. Contractor proposals will not be evaluated until TSA determines how many airports have applied to the opt-out program.

Phase III: TSA awards contracts to private screening contractors (May 2005). TSA will award competitive task orders, or contracts, based on cost/price analysis, among other things. TSA reserves the right to proceed with other alternatives for contractor selection as appropriate.

TSA initiated Phase I on November 5, 2004, by posting a presolicitation notice on www.fedbizopps.gov, in accordance with standard government contracting practices. Contractors have until November 29, 2004, to provide the information, such as financial capabilities, requested in the presolicitation. Concurrently, TSA publicly released a presolicitation synopsis for the opt-out program that supplements the October guidelines on contractor criteria. This document provides additional evaluation criteria on screener compensation and benefits, hiring preferences for displaced government employees, financial capabilities of contractors, and other areas. Five of the six private screening contractors we interviewed prior to TSA’s release of the presolicitation synopsis said they wanted more information on workforce transition rules, which govern how federal screeners displaced by private screening contractors will be treated.
The October guidelines also state that contractors must abide by TSA’s workforce transition rules—but do not specify what those rules are. The presolicitation synopsis states that federal screeners must be given hiring preference, but no additional information is provided. All six private screening contractors said that they did not know how TSA will determine whether the level of screening services and protection provided at the airport under the contract will be equal to or greater than the level that would be provided at the airport by the federal government. TSA officials said they would use the information provided by contractors in response to the presolicitation notice to make this determination. In addition to the guidelines on criteria and the presolicitation synopsis, TSA prepared a draft statement of work for the contractors that participated in the opt-out pilot program and are continuing to provide screening services. This document describes technical requirements that private screening contractors must meet for performing screening operations, support, and administration. According to TSA, the draft statement of work is meant to give potential private contractors an idea of the service requirements in the private screening pilot program, which are likely to be similar to those for the opt-out program. TSA posted the document on its opt-out Web site in early November 2004. The contractors we interviewed also sought information on whether contractors will be able to use the same companies TSA has relied on to assess screener candidates and conduct initial screener training. The draft statement of work prepared for contractors that participated in the pilot program addresses this issue and has been posted on TSA’s Web site for all interested parties to review. Finally, private screening contractors told us they do not have an industry trade association through which TSA can channel information about its program activities.
Five of six private screening contractors we interviewed cited a need for a more direct line of communication between TSA and their organizations beyond TSA’s Web site. For example, some contractors suggested that TSA sponsor a forum specifically to address contractors’ issues about the opt-out program. In addition, it was suggested that TSA appoint a liaison, or point of contact, to help ensure that information is communicated to contractors in a timely fashion. TSA developed draft procedures to document how opt-out program applications will be processed. These draft procedures—for internal use only—include opt-out program parameters, a narrative of the application process, a matrix for each step in the process, an application template, a sample notification letter template, an application checklist, and an application processing system template. TSA officials said that this approach is under review and that they expect the application procedures to be finalized in November 2004. TSA developed a draft activity-based cost study, which is also an internal document, not intended for public release. TSA plans to use the results of the study to determine the unit cost of screening passengers in terms of the activities performed, to review screening costs at both privately screened and federally screened airports in order to identify key cost drivers and best practices, and to develop an efficient and repeatable data collection method for future studies. The study is currently under review within TSA, which expects to finalize it later in November 2004. In addition, TSA developed a draft transition plan, which will serve as internal guidance for the agency and is not intended for public release. This is to be an operations plan designed to support TSA’s efforts to transition airports from federal to private screeners.
The plan is to address, for example, TSA’s approach to giving federal screeners priority for employment with private screening contractors. In addition, this plan is to describe the roles and responsibilities of the TSA opt-out program office, FSD staff, and private screening contractors. Activities to be addressed in the plan include human resources, training, communication, logistics, performance measurement, and field support. TSA expects to refine the draft plan in December 2004 and to revise it on an ongoing basis as it gains more experience with the transition to private screening. A draft communications plan is also under development. The purpose of this internal document is to set out a strategy for communicating key program events and developments to both internal and external stakeholders. TSA told us the plan will contain information on roles and responsibilities of key stakeholders, among other things. The draft is undergoing final review within TSA. TSA has not set a date for finalizing this plan. In addition to the June 2004 initial guidance, the statement of work for the pilot program contractors, and the presolicitation synopsis, TSA has finalized other informational guidance for airport operators and private screening contractors. In October 2004, TSA posted the final version of its opt-out application form for airports seeking to apply. TSA will use the application to collect information on potential airports’ intentions regarding opting out. For instance, in addition to asking airports to provide basic information, such as a point of contact, TSA seeks to learn whether airports want to be the qualifying contractor performing the screening services.
TSA also asks airport officials to identify the airport authority’s primary reason for wanting to participate in the opt-out program, to indicate whether the airport has a preferred timeline for the transition to private screening, and to list scheduled activities that could interfere with the transition, such as peak travel season and major construction. TSA originally set a 3-week application window, from November 19, 2004, to December 10, 2004, for accepting applications from airports interested in opting out of using federal screeners. However, based on input from stakeholders and less than 1 month before the application cycle was to begin, TSA decided to extend the application deadline. As of November 15, 2004, TSA had not established a final deadline. Also, in October, TSA created an e-mail address to enable interested parties to submit questions and request additional information about how the opt-out program would be implemented. The goal of this effort was to provide supplemental guidance for stakeholders that reflected issues they wanted to know more about. A TSA official said the agency received approximately 100 e-mails between late August and early November. Based in part on these e-mails, in early September 2004, TSA developed and posted on its Web site an initial list of responses to frequently asked questions (FAQ). In early November 2004, TSA updated the FAQs to reflect three separate topics: questions about the overall program, questions about the airport application process, and questions about the contracting process. TSA plans to continue to update these lists as needed. Some of the information contained in the FAQs is new—that is, it is not addressed in the June 2004 written guidance. Some of TSA’s FAQ responses, however, restate the information in the written guidance, without additional elaboration.
While these guidance documents have provided airport operators with information on the basic parameters and legislative requirements of the opt-out program, some airport operators, private screening contractors, and airport industry representatives told us TSA has not yet addressed all of their questions and concerns. The information that stakeholders told TSA and us they needed falls into three categories: operational flexibility with respect to how much leeway airports and private screening contractors would have to manage the program; liability protection in the event that a screener fails to detect a threat object; and costs related to participating in the opt-out program. Eight of the 20 airport operators who told us they would not apply to opt out of using federal screeners in 2004 said they needed additional information about the range of flexibility private screening contractors would be provided in terms of, for instance, their ability to deploy screeners where they are needed most at a given airport. Some airport industry representatives we spoke with raised operational flexibility as a concern as well. All of the private screening contractors we interviewed also said that they needed additional information on operational flexibilities, including whether contractors could collaborate with airport management without having to involve TSA directly; whether they would have the flexibility to determine appropriate screener staffing levels at their airports and to assess screener candidates and hire screeners on an as-needed basis; and whether they could develop and/or deliver screener training. Some of the questions e-mailed directly to TSA address operational flexibilities, such as screener staffing levels. For example, one of the questions pertained to whether an airport would be provided the same number of contract screeners as are currently authorized under the federal screener program.
TSA responded in its FAQs that a qualified private contractor will determine the number of contract screeners needed and that TSA will provide guidelines for the contractors. TSA further noted that it is seeking to provide flexibility to the contractors to manage the operations as efficiently as possible and will look to them to identify possible efficiencies in areas such as scheduling and use of part-time employees as appropriate for the local airport. The independent consulting firm under contract to TSA suggested that TSA allow contractors serving the opt-out pilot program airports to assess screener candidates and conduct screener training as a means of providing greater operational flexibility. The consultants stated that greater operational flexibilities would enable a more robust comparison of private and federal screening operations. TSA officials said that they would permit pilot program airports to pursue both options beginning in November 2004. A second issue raised by stakeholders, including airport industry representatives, pertained to liability—whether and to what extent airports and private screening contractors would be liable in the event that a privately contracted screener should fail to detect a threat object, leading to a terrorist incident or the use of a weapon on board. Thirteen airport operators we interviewed cited concerns about liability. In addition, all of the private screening contractors we interviewed cited the need for additional guidance on liability protection. To address airport and industry officials’ questions about liability issues, the FAQs direct site visitors to a Department of Homeland Security (DHS) Web site for information on how to apply for liability protection under the SAFETY Act of 2002. The Web site, however, does not provide information on whether the SAFETY Act would cover private screening contractors that apply to the opt-out program.
According to DHS officials, DHS has determined that the SAFETY Act will apply to private screening contractors in the opt-out program. DHS’s Office of SAFETY Act Implementation has been working with TSA to develop an expedited review process for SAFETY Act applications from private screening contractors and plans to post specific instructions for applicants on the SAFETY Act Web site. As of November 15, 2004, this information had not been posted. Five of the screening contractors we interviewed said the issue of whether they would receive liability protection was important and would greatly affect whether they would participate in the program, if selected as a qualified contractor. Two of the contractors said that without liability protection, they would not participate in the program because it would be too risky, exposing them to potential lawsuits. TSA officials said that commercial liability insurance is available and that under the current screening pilot program, each of the contract screening companies had procured some amount of liability insurance for terrorist activities. A third issue pertained to the costs of participating in the program. Ten of the 20 airport officials we spoke with who have decided not to apply to opt out in 2004 cited questions about the costs of participating in the program, and in at least one case said they did not have enough cost-related information. Representatives from two airport industry associations also mentioned this issue. The FAQs address cost issues in the context of budgets for federal and private screeners. For example, one question asks whether, if federal budget appropriations are not made in a timely manner, TSA will still be able to fund the private screening contractor during that period of time. TSA responded that it would fund screeners through a continuing resolution passed by Congress.
In response to a question about whether budget limitations will apply to either federal or contracted screeners, TSA reiterates what is already stated in the written guidance—that all funding for the opt-out program will come from the same budget line items as federal screening operations. No additional information is provided. Three private screening contractors we interviewed also cited a need for additional information on the requirement to provide at least equivalent compensation and benefits to screeners transitioning from federal to private screening contractors, as required by ATSA. The contractors said they did not know whether they would be required to offer dollar-for-dollar parity for salaries and benefits or whether contractors must offer the same health care and other benefits that federal screeners receive. The FAQs address this question in general terms, noting that private contractors will have “some flexibility in fashioning their compensation and benefits packages,” but do not elaborate further. TSA’s presolicitation includes a compensation and benefits certification form that private screening contractors applying to be on the qualified offeror list must complete. The applicant has to certify that it will propose and pay at least the minimum labor rate that is paid to screeners for every $1 of direct labor and that benefits will be not less than 44.75 percent—the current fringe benefits percentage as computed by TSA. In addition, TSA has interpreted ATSA to require qualified private screening companies to provide pay and benefits at a loaded cost (direct hour plus percentage cost of fringe benefits) that equals or exceeds the loaded cost of the pay and benefits provided by the federal government.
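The loaded-cost comparison described above is, at bottom, a simple arithmetic check. The sketch below is purely illustrative and is not TSA's actual evaluation methodology: the function names and the dollar figures are hypothetical, and only the 44.75 percent fringe rate comes from the presolicitation.

```python
# Illustrative sketch (not TSA's methodology) of the loaded-cost parity
# check: loaded cost = direct labor rate plus fringe benefits expressed
# as a percentage of direct labor. The 44.75 percent fringe rate is
# TSA's figure; all dollar amounts below are made-up examples.

FEDERAL_FRINGE_RATE = 0.4475  # fringe benefits percentage computed by TSA


def loaded_cost(direct_hourly_rate: float, fringe_rate: float) -> float:
    """Direct labor rate plus fringe benefits as a share of direct labor."""
    return direct_hourly_rate * (1 + fringe_rate)


def meets_parity(contractor_rate: float, contractor_fringe: float,
                 federal_rate: float) -> bool:
    """Parity holds if the contractor's fringe rate is at least the
    federal rate and its loaded cost equals or exceeds the federal
    loaded cost."""
    federal_loaded = loaded_cost(federal_rate, FEDERAL_FRINGE_RATE)
    contractor_loaded = loaded_cost(contractor_rate, contractor_fringe)
    return (contractor_fringe >= FEDERAL_FRINGE_RATE
            and contractor_loaded >= federal_loaded)


# Hypothetical federal direct rate of $15.00/hour:
print(loaded_cost(15.00, FEDERAL_FRINGE_RATE))  # about 21.71
print(meets_parity(15.50, 0.4475, 15.00))       # higher pay, same fringe
print(meets_parity(14.00, 0.50, 15.00))         # richer fringe, lower loaded cost
```

Under this reading, a contractor could, for example, raise pay while holding fringe benefits at the federal percentage, but could not offset a lower loaded cost with a richer benefits mix.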
According to TSA, this approach provides the contractor with flexibility to trade additional pay against other benefits, or to enhance certain benefits and reduce others, and enables the contractor to determine and provide the best package for recruiting and retaining quality screeners. TSA is developing measures to assess the screening performance of airports that will participate in the opt-out program and of individual contractors performing the screening services, but specific performance measures have not been finalized. In June 2004, TSA developed a draft of the performance measurement principles and actual measures that TSA is considering to measure the performance of the entire opt-out program and private screening operations. For example, TSA plans to measure the results of annual screener proficiency reviews, customer satisfaction and complaints, and screening costs. TSA may also evaluate the program in terms of how well screeners perform using the threat image projection (TIP) system to detect threat objects. (TIP projects images of threat objects on an x-ray screen during actual operations and records whether screeners identify them.) These measures will be similar to those used by the independent consulting firm to compare the performance of private screening contractors operating at the five pilot program airports against federal screening services. TSA expects to complete a preliminary draft of a performance measurement plan later this month and to finalize this plan by February 2005. In addition to assessing how the performance of federal and private screening services compares, TSA is working to develop performance measures for evaluating how well private screening contractors comply with the terms of their contracts.
TSA officials said that the opt-out program office is in the process of determining whether quantifiable measures are available, how to collect relevant data, and the best way to establish baseline measures. TSA expects to complete its data collection plan later this month and to complete the final plan by February 2005. The contractor-related performance measures TSA plans to develop are to be included in a quality assurance plan. This plan is an element in TSA’s draft statement of work specifically for the five airports that participated in the pilot program, which are continuing to use private screeners. The plan includes general information on how TSA will measure and assess their performance and how TSA will use the performance data to make decisions on performance awards, extension of contracts, and termination of contracts. Contractors may, for example, be evaluated—and their contracts extended—based on their screeners’ TIP scores. TSA officials said that the measures included in the draft statement of work are not as sophisticated or rigorous as those that TSA will adopt in the future. TSA expects to implement these measures in mid-2005, as contracts are being awarded. We plan to continue to collect and analyze TSA documentation on the opt-out program and to follow up with airports and private security contractors on their views of TSA’s development and implementation of the program. We provided a draft of this report to the Department of Homeland Security and the Transportation Security Administration for review and comment. The agencies generally agreed with our findings, and we incorporated their technical comments where appropriate. We are sending copies of this report to the Secretary of the Department of Homeland Security and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you have any questions about this report, or wish to discuss it further, please contact me at (202) 512-8777 or by e-mail at berrickc@gao.gov. Key contributors to this report were David Alexander, Amy Bernstein, Lisa Brown, Elizabeth Curda, Thomas Lombardi, Jobenia Odum, Lisa Shibata, Maria Strudwick, Nicole Volchko, and Nicolas Zitelli. Our preliminary observations are based on our review of documentation related to the Transportation Security Administration’s (TSA) Screening Partnership Program (opt-out program) and contract screening pilot program and interviews with various officials. We reviewed documents including: all guidance-related materials TSA had developed to date for airports; reports from an independent consulting study prepared for TSA that evaluated the contract screening pilot program and suggested improvements to the program; information from two congressional committee-sponsored roundtables on the program; testimony at congressional hearings on the program; provisions of the Aviation and Transportation Security Act; and our prior reports that addressed issues related to the opt-out program, including the performance of airport passenger and checked baggage screeners. In addition, we interviewed TSA headquarters officials; officials from two aviation associations—the American Association of Airport Executives and the Airports Council International—and TSA’s private contractor that is assisting TSA in its development of the opt-out program. We conducted semistructured telephone interviews with the operators of 26 randomly selected commercial airports nationwide. The 26 airports were selected randomly from all airports in each of the five airport categories—X, I, II, III, and IV. Category X airports generally have the largest number of enplanements, and category IV airports have the smallest number. We interviewed one official at each of 4 to 6 airports in each category.
Although the airports were selected randomly, because of the small sample size in each category the results of these interviews may not be generalized to other airports. We conducted telephone interviews with officials from 6 of 10 private security contractors selected from a listing of private security contractors that had expressed interest in the opt-out program. The listing of private security contractors that expressed interest in the opt-out program is included in Airport Security Report (Potomac, MD: Air Safety Week), September 22, 2004, Volume 11, Number 19. The views and opinions of these contractors may not be representative of those of other contractors and, therefore, should not be generalized. TSA released significant details on its opt-out program guidance after our interviews; therefore, the interviewees’ responses (airport operators’ and contractors’) were based only on the information TSA had released at that point. We conducted our work from September to November 2004 in accordance with generally accepted government auditing standards. Because our review is still ongoing, the results presented in this report are preliminary. To complete our work, we plan to continue to collect and analyze TSA documentation related to each of our three objectives and to follow up with airports and private security contractors on their views of TSA’s development and implementation of the opt-out program.

Beginning on November 19, 2004, the Transportation Security Administration (TSA) is required by law to begin allowing commercial airports to apply to use private contractors to screen passengers and checked baggage. A federal workforce has performed this work since November 2002, in response to a congressional mandate that the federal government take over screening services after the terrorist attacks of September 11, 2001.
A 2-year pilot program at five airports testing the effectiveness of private sector screening in a post-September 11 environment concluded on November 18, 2004. This report contains GAO's preliminary observations related to TSA's progress in developing a private-sector screening program that allows airports to apply to opt out of using federal screeners. GAO assessed: (1) the status of TSA's efforts to develop policies and procedures for the opt-out program, including operational plans and guidelines for selecting airports and contractors that may participate; (2) guidance about the opt-out program that TSA has provided to airport operators and other stakeholders, or plans to develop, and how the information is communicated; and (3) TSA's efforts to develop performance measures for evaluating the opt-out program and contractor performance. As of November 2004, TSA has completed or is developing key policies and procedures for the opt-out program. Specifically, TSA has completed and released guidelines for determining how and when private screening contractors will be evaluated and selected to participate in the opt-out program. TSA also has released supplemental information for evaluating potential contractors, such as their financial capabilities. TSA also has prepared a draft technical statement of work for the private screening contractors operating at five pilot program airports, which is to serve as a basis for contractors seeking to serve other airports. In addition, TSA has developed, or is currently developing, internal guidance for managing the opt-out program, such as a transition plan for helping airports to move from federal to private screeners. TSA expects to complete the remaining policies and procedures by mid-2005. TSA is taking steps to communicate with stakeholders about the opt-out program by developing informational guidance and soliciting information and suggestions from them. 
For instance, since releasing initial summary guidance about the program in June 2004, TSA has posted an opt-out program application for airport operators that asks, among other things, for the primary reason for wanting to participate in the opt-out program and the preferred timeline for transitioning to private screening operations. TSA also has posted lists of frequently asked questions and answers on its Web site, in response to questions from stakeholders about the airport application and contracting process. However, some airport operators, private screening contractors, and aviation industry representatives told GAO that they need additional information about how much leeway airports and contractors would have to manage the program, liability protection, and costs related to participating in the opt-out program. TSA is developing performance measures to assess both the screening performance of airports that will participate in the opt-out program and that of the individual contractors performing the screening services, but specific performance measures have not been finalized. TSA said measures for the opt-out program will be based on measures already developed by an independent consulting firm for the five airports participating in the opt-out pilot program. These measures include how well screeners detect test threat objects, such as guns and knives, during screening operations. TSA is also developing performance measures to evaluate how well private screening contractors comply with the terms of their contracts, which will become part of a quality assurance plan. TSA expects to implement contractor-related performance measures in mid-2005, as contracts are being awarded. A draft of this report was provided to TSA. TSA officials generally agreed with our findings and provided technical comments that have been incorporated as appropriate. |
Historically, federal transportation policy has generally focused on individual modes rather than intermodal connections between different modes. Federal transportation funding programs are overseen by different modal offices within DOT—the Federal Aviation Administration (FAA), Federal Transit Administration (FTA), Federal Railroad Administration, and Federal Highway Administration (FHWA). No specific federal funding programs have been established that target intermodal projects for either passengers or freight, although a few federal programs offer flexibilities that would allow these types of projects. Intermodal transportation refers to a system that connects the separate transportation modes—such as mass transit systems, roads, aviation, maritime, and railroads—and allows a passenger or freight to complete a journey using more than one mode. For example, an efficient intermodal capability at an airport would provide a passenger with convenient, seamless transfer between modes; the ability to connect to an extended transportation network; and high frequency of service among the different modes. As shown in figure 1, an intermodal connection at an airport might involve a passenger arriving at the airport by private shuttle service, flying to another airport, and then transferring to local rail service or a nationwide system, such as Amtrak, to reach a final destination. Just as passengers rely on convenient transfers, an intermodal freight transportation system relies on ready transport of cargo between ships and other transportation modes, particularly highway and rail. The scope and nature of intermodal passenger connections are further illustrated by ground access to airports. In 2005, we reported that most major U.S. airports have direct intermodal ground connections to either local transportation systems or nationwide bus or rail networks. 
Sixty-four of the 72 airports that we surveyed reported having direct connections to one or more local transportation systems in their area, such as local bus or rail service, with 26 airports reporting having both. The most common type of public transportation system available to and from the airport is local bus service. Sixty-four airports reported having a direct connection to a local bus service. However, the level of bus service varies depending on the airport. For example, Seattle-Tacoma International Airport has five public bus routes that serve the surrounding communities, while General Mitchell International Airport in Milwaukee has only one route that serves the airport. Twenty-seven airports reported having a direct connection to a local rail system, such as light rail, commuter rail, or subway. (See fig. 2.) While most major U.S. airports are located in metropolitan areas that have stations for nationwide transportation systems, such as Greyhound or Amtrak, 20 airports reported having direct connections to nationwide bus service or nationwide passenger rail service. Twelve of the 20 airports reported having direct connections to nationwide bus service, and 14 airports reported having a direct connection to Amtrak rail service. (See fig. 3.) All 14 airports provide shuttle service to transport passengers to Amtrak stations that serve the metropolitan area. One of the 14 airports—Newark’s Liberty International Airport—reported that passengers could also access the Amtrak station by an automated people mover. In addition, the accessibility of Amtrak to Newark airport has allowed Continental Airlines to establish a code share agreement with Amtrak, whereby passengers can purchase one ticket for a journey that includes travel by both air and rail. This agreement has allowed Continental Airlines to eliminate some short-haul flights from Newark. 
While there is no single federal funding source for rail to airport projects, we found that local governments, airports, and transit systems were able to tap and package a variety of federal funds to pay for recent rail connections to airports. These included direct appropriations, the New Starts program for fixed guideway transit systems, two federal aid highway categories—the Congestion Mitigation and Air Quality Improvement Program and the Surface Transportation Program—and passenger facility charges at airports. Appendix I describes these programs. According to transportation research, planning officials, and our prior work, a number of financing, planning, and other challenges play important roles in shaping transportation investment decisions and the development of intermodal capabilities. Significant challenges to the development of intermodal capabilities are the lack of specific national goals and funding programs. Federal funding is often tied to a single transportation mode; as a result, it may be difficult to finance projects, such as intermodal projects, that do not have a source of dedicated funding. Federal legislation and federal planning guidance both emphasize the goal of establishing a systemwide, intermodal approach to addressing transportation needs. However, the reality of the federal funding structure—which directs most surface transportation spending to highways and transit and is more oriented to passengers than freight—plays an important role in shaping local transportation investment choices. In addition to the focus on highways and transit over other investment choices, we found limited instances in which investment decisions involved direct trade-offs in choices between modes or users—such as railroad versus highway or passenger versus freight. A significant challenge to developing certain intermodal connections is the difficulty of securing funding within the mode-specific federal funding structure. 
The cost of intermodal projects can vary widely, depending on the complexity and scope of the project. In addition, the benefits of individual projects can be hard to measure and forecast, and we found only anecdotal evidence of benefits for the 16 intermodal projects we examined. The costs of rail projects are typically substantial and can include costs to construct a station, as well as track and other infrastructure to support the rail network. Table 1 provides examples of the costs of intermodal projects at airports and funding sources. We found that many intermodal projects at airports fit the funding criteria for one or more federal programs focused on surface transportation or aviation. For example, FTA’s New Starts program is a significant source of funding for intermodal capabilities at airports that are part of a rail transit system. However, the rigorous rating process and increasing demands for its limited funds make the New Starts program time-intensive and competitive in nature and have made it difficult for local transportation agencies to secure this funding, according to local officials that we spoke with. Federal funding programs, like the New Starts program, will contribute only a portion of the total project costs, subject to local matching funds, which can be derived from local agencies such as metropolitan transportation authorities, transit agencies, and airport authorities. However, local transportation officials said it can be difficult to secure local funds for intermodal projects at airports because these agencies could potentially have different funding priorities, making it difficult to build the unified local support necessary to secure funding. Additionally, intermodal capabilities at airports can be funded with passenger facility charges, commonly referred to as PFCs. Local transportation officials also described difficulties in securing the use of PFCs. 
In particular, requirements that PFC funds be used for projects on airport property, among other criteria, are seen as limiting their use for intermodal projects. Moreover, airlines support these restrictions on the use of PFC funds, believing that these funds are for airport development and capacity improvements, and not ground-access projects. However, even with this restriction, we reported in July 2005 that four airport authorities were using PFC funds to develop or contribute to intermodal projects at airports, as shown in table 2. In addition to the limits on the use of federal funds, federal transportation projects, including intermodal projects, face a number of planning challenges, including the following:
Decision makers must ensure that wide-ranging public participation is reflected in their deliberations and that their choices take into account numerous views. During the planning of an intermodal project, the lead local agency’s responsibilities include soliciting public comment regarding the most appropriate project to select for the area. This public participation can introduce considerations such as quality of life and other issues that are difficult to quantify in making transportation choices. It also puts decision makers in the position of balancing different public agendas about funding and values.
The physical constraints of an area may present a challenge to building intermodal facilities. The development of intermodal capabilities at airports provides an example of this challenge. On the one hand, our work has found that densely populated urban areas offer few alternatives for expansion or new project development. On the other hand, it is these same densely populated urban areas where rail connections to airports are more likely to generate benefits that will justify the costs, as these areas may have high levels of congestion and larger numbers of people willing to use public transportation to access airports as a result. 
For example, since the proposed light rail line into the Minneapolis/St. Paul International Airport crossed land owned by various federal agencies, the process to gain the needed right-of-way was a multiagency effort that required significant coordination, adding somewhat to the project planning time and costs.
Multijurisdictional transportation corridors present special challenges in coordinating investment decisions. Securing the cooperation of, and coordination among, officials from the different jurisdictions can make the planning and implementation of multistate and multiregional projects difficult. For example, during the planning of the Seattle light rail, Sound Transit officials noted that the alignment from downtown Seattle to the Seattle-Tacoma International Airport ran through a number of surrounding cities and required three local cities to approve permits for the construction of the project.
The effective use of passenger rail as an intermodal option along heavily traveled air and highway corridors also poses challenges due to limitations of the existing nationwide rail network. For example, Amtrak’s passenger rail network does not support air-rail service requirements because rail lines do not go near some airports, passenger train schedules in some parts of the country are not frequent enough to effectively link to airline flight schedules, and transferring from air to rail poses inconveniences that limit consumer demand. As we discussed previously, although 14 airports reported having a direct connection to Amtrak’s passenger rail service, only 1 reported that passengers could also access the station by an automated people mover—the others required boarding a shuttle. In addition, although Amtrak track lines are adjacent to the Cleveland Hopkins International Airport, Amtrak officials stated that Amtrak trains run only twice a day along this line, which is not frequent enough to establish a code share agreement with an airline. 
Furthermore, transportation industry experts and European transportation officials have pointed out that high-speed passenger rail, including connections to congested airports, has provided an alternative for air travel in short-haul markets in Europe. There has been a reduction of air service between Paris, France, and Brussels, Belgium—a popular short distance city pair for travelers—due, in part, to the high-speed train service linking Paris Charles de Gaulle Airport and downtown Paris with Brussels. In the United States, few efforts have been made to use rail service to complement air service in this manner because, in part, the cost of establishing service is not likely to justify its benefits given that some distances are too great for rail to provide an attractive alternative transportation mode. Finally, intermodal capabilities, while offering benefits to mobility, may need time for demand to develop. For example, the development and use of intermodal connections at airports can be limited by the inability of the ground connections to meet the preferences of airline passengers; therefore, the majority of passengers still use private vehicles to access airports even when transit service is available. Passenger preferences can include seamless transitions from one mode to another; a simplified process to handle baggage; transit schedules that meet consumer demands; and clear, easy-to-follow information on accessing transportation options—including signs at airports and information at hotels on accessing transit to airports. In addition, passengers, particularly those traveling with children and large amounts of luggage, may not consider using transit or rail systems to complete their travel plans due to inconvenience. Two general strategies could help public decision makers improve intermodal options. These strategies are based on a systematic framework that has the following three components:
Set national goals for the system. 
These goals, which would establish what federal participation in the system is designed to accomplish, should be specific and measurable.
Clearly define the federal role relative to the roles of state and local transportation agencies and the private sector. The federal government is one of many stakeholders involved in the development of intermodal capabilities. This component is important to help ensure that the federal role supplements and enhances the participation of other stakeholders and appropriately balances public investment when the benefits flow in part to the private sector.
Determine which funding approaches—such as alternatives to investment in new infrastructure and those approaches that reward projects that advance national/federal goals—will maximize the impact of any federal investment. This component can help expand the ability to leverage funding resources and promote shared responsibilities.
Given the current budgetary environment, and the long-range fiscal challenges confronting the country, substantial increases in funding for transportation projects will require a high level of justification. In addition, either strategy would be enhanced by a process for evaluating performance periodically to determine if the anticipated benefits from federally-funded projects are accruing as expected. In the first strategy, Congress could encourage the development of intermodal capabilities by increasing the flexibility of current federal transportation programs, which are largely focused on individual transportation modes, to support a more systemwide approach across all modes and types of travel. To promote intermodal development, the federal government could consider several alternatives for transportation planning and funding that might better focus on these outcomes and promote better coordination between jurisdictions. 
These alternatives include the following:
Increasing the flexibility of federal transportation funding programs to help break down the current funding stovepipes.
Applying different federal matching criteria for different types of expenditures in order to provide a higher level of federal matching for projects that reflect federal priorities.
Establishing performance-oriented funding or a reward-based system that would favor those entities that address the national interest and meet established intermodal goals.
Expanding support for alternative financing mechanisms—such as providing credit assistance to state and local governments for capital projects and using tax policy to provide incentives to the private sector for investing in intermodal capabilities—to access new sources of capital and stimulate additional investment in intermodal capabilities.
Aligning incentives for planning agencies to adopt best practices and to achieve expectations.
While this strategy would involve changes in federal transportation policy, it would most likely not involve a major shift in the federal role, which would continue to be focused on funding and oversight of locally determined and developed transportation projects. However, since this strategy would include the goal of establishing a more systemwide approach to transportation planning, the federal government would need to determine the scope of its involvement in encouraging such an approach. The second strategy is a fundamental shift away from federal transportation policy’s long-time encouragement of state and local decision making, increasing the role of the federal government in planning and funding intermodal projects in order to develop more integrated intermodal networks, either nationwide or along particularly congested corridors. This strategy could be similar to the strategy the federal government used in the 1950s to develop the interstate highway system. 
Under this strategy, Congress could establish national goals for the development of intermodal capacities that could include not only the development of facilities and connections, but also the development of a supporting transportation network to improve the ability of either passengers or freight companies to reach their final destination. The role of the federal government would change, with the federal government taking a more active role in setting priorities and planning of intermodal connections between the individual transportation modes. Similar to the development of the interstate highway system, the federal government’s role could include providing project-specific oversight, laying out routes, overseeing construction, and ensuring that the system is adequately maintained. For the federal government to take a more active role in developing intermodal capabilities, it might also need to take on additional funding responsibilities. For example, a federal policy might be established to develop a transportation system that promoted connections between airports and high-speed rail networks, as in Europe. To accomplish improved air-rail connections, the federal government would have to increase its funding role, given the high costs of enhancing or expanding rail service or developing high-speed rail corridors, or tap others that would benefit from such service (including the region, its airport, and businesses associated with the airport) as possible funding sources. The full costs of this policy would be dependent on how integrated and expansive such an intermodal network would be and whether it would include additional high-speed rail or be focused on conventional passenger rail service. We have shown in the past that both of these choices are costly and increased federal involvement could require the implementation of a dedicated funding source. 
However, even if a revenue source is established, this new funding would face many of the same revenue challenges that other transportation systems, such as highways, are facing now as revenue sources are eroded. Additionally, given the high costs of this strategy, benefits high enough to justify investment in intermodal facilities would likely be anticipated in a limited number of places. Increasing passenger travel and freight movement have led to growing congestion, and decision makers face the challenge of maintaining the nation’s mobility while preventing congestion from overwhelming the transportation system. Successfully addressing mobility needs in the face of growing congestion requires both strategic and intermodal approaches. However, the current system for planning and financing transportation is not well-suited to advancing intermodal transportation projects—including both passenger and freight transportation—calling for fundamental changes that use a broader, systemwide approach to transportation investment decisions. A federal strategy of encouraging a more systemwide approach to transportation planning, including alternative funding mechanisms, could encourage transportation officials to consider the development of additional intermodal connections in the context of other transportation investment decisions. At the same time, it is clear that more quantitative evaluations of the costs and benefits of intermodal capabilities could help to better inform state and local, as well as federal, decision makers as they attempt to determine which projects to develop with their limited resources. Mr. Chairman, and members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the Subcommittee might have. For information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov. 
Individuals making key contributions to this testimony are Teresa Spisak and Tim Schindler.
New Starts program: Selects worthy fixed guideway transit projects for funding by congressional appropriations. Projects can include heavy, light, and commuter rail and certain bus transit projects (such as bus rapid transit). To be eligible for funding, projects must, among other things, be justified based on a comprehensive review of mobility improvements, environmental benefits, cost effectiveness, and operating efficiencies, as well as being supported by an acceptable degree of local financial commitment. The program funding match is at most 80 percent federal and 20 percent local. In fiscal year 2006, this program was funded at $1.2 billion.
Congestion Mitigation and Air Quality Improvement Program: Funds transportation projects and programs in order to reduce transportation-related emissions in localities with poor air quality. To be eligible for funding, projects must be transportation related, in nonattainment or maintenance areas, and reduce transportation-related emissions. The program funding match is 80 percent federal and 20 percent local. In fiscal year 2006, this program was funded at $1.7 billion.
Surface Transportation Program: Provides funding to states and localities for projects on any federal-aid highway—including transit capital projects and local and nationwide bus terminals and facilities. The program funding match is 80 percent federal and 20 percent local. In fiscal year 2006, this program was funded at $6.3 billion.
Transportation Infrastructure Finance and Innovation Act (TIFIA) program: Provides federal credit assistance for surface transportation projects. Project sponsors may include public, private, state, or local entities. Projects eligible for federal assistance through existing surface transportation programs, including passenger bus and rail facilities, are eligible for credit assistance under this program. The amount of federal credit assistance may not exceed 33 percent of the reasonably anticipated project cost. In fiscal year 2006, this program was funded at $130 million. 
Airport Improvement Program: Provides grants to airports for planning and development projects. The program is funded, in part, by aviation user excise taxes, which are deposited into the Airport and Airway Trust Fund. In terms of promoting intermodal capabilities, these funds may be used for access roads that are on airport property, airport owned, and exclusively serve airport traffic. The program funding match is 75 to 90 percent federal, based on the number of enplanements at the airport, and the remainder is from local sources. In fiscal year 2006, this program was funded at $3.5 billion. We found no example of its use for intermodal projects.
Passenger facility charges: Authorizes commercial service airports to charge passengers a boarding fee—commonly called a passenger facility charge—of up to $4.50, after obtaining FAA approval. The fees are used by the airports to fund FAA-approved projects that enhance safety, security, or capacity; reduce noise; or increase air carrier competition. In calendar year 2005, $2.4 billion in fees were collected under this program. Examples of intermodal projects funded with passenger facility charges include the AirTrain automated people movers at New York’s John F. Kennedy International Airport and Newark’s Liberty International Airport and the light rail extension and new station at Portland International Airport.
In evaluating New Starts proposals, FTA places greater priority on projects that have greater local matching shares; competitive New Starts proposals often have 40-50 percent local matches. Air quality standards exist for certain common air pollutants (known as criteria pollutants). Geographic areas that have levels of criteria pollutants above those allowed by the standards are called nonattainment areas. Areas that did not meet the standards for criteria pollutants in the past but have since reached attainment are known as maintenance areas. An enplanement is defined as a passenger boarding a flight. Enplanements include passengers boarding the first flight of their trip, as well as passengers who board after connecting from another flight.
Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. 
Washington, D.C.: July 29, 2005. Intermodal Transportation: Potential Strategies Would Redefine Federal Role in Developing Airport Intermodal Capabilities. GAO-05-727. Washington, D.C.: July 26, 2005. Highway and Transit Investments: Options for Improving Information on Projects’ Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005. Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004. Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003. Marine Transportation: Federal Financing and a Framework for Infrastructure Investments. GAO-02-1033. Washington, D.C.: September 9, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Mobility--that is, the movement of passengers and goods through the transportation system--is critical to the nation's economic vitality and the quality of life of its citizens. However, increasing passenger travel and freight movement has led to growing congestion in the nation's transportation system, and projections suggest that this trend is likely to continue. Increased congestion can have a number of negative economic and social effects, including wasting travelers' time and money, impeding efficient movement of freight, and degrading air quality. U.S. transportation policy has generally addressed these negative economic and social effects from the standpoint of individual transportation modes and local government involvement. However, there has been an increased focus on the development of intermodal transportation. 
Intermodal transportation refers to a system that connects the separate transportation modes--such as mass transit systems, roads, aviation, maritime, and railroads--and allows a passenger to complete a journey using more than one mode. This testimony is based on GAO's prior work on intermodal transportation, especially intermodal ground connections to airports, and addresses (1) the challenges associated with developing and using intermodal capabilities and (2) potential strategies that could help public decision makers improve intermodal capabilities. A number of financing, planning, and other challenges play significant roles in shaping transportation investment decisions and the development of intermodal capabilities. Significant challenges to the development of intermodal capabilities are the lack of specific national goals and funding programs. Federal funding is often tied to a single transportation mode; as a result, it may be difficult to finance projects, such as intermodal projects, that do not have a source of dedicated funding. In addition, federally funded transportation projects, including intermodal projects, face a number of planning challenges. These challenges include limits on the uses of federal funds, the need to ensure that widespread public participation is reflected in decisions, physical and geographic land constraints, and the difficulty of coordinating among multiple jurisdictions in transportation corridors. Finally, intermodal capabilities, while offering benefits to mobility, may need time for demand to develop. Two general strategies developed from GAO's prior work would help public decision makers improve intermodal capabilities. Both strategies are based on a systematic framework that includes identifying national goals, defining the federal role, determining funding approaches, and evaluating performance. 
The first strategy would increase the flexibility of current federal transportation programs to encourage a more systemwide approach to transportation planning and development, but would leave project selection with state and local decision makers. The second strategy is a fundamental shift in federal transportation policy's focus on local decision making by increasing the role of the federal government in order to develop more integrated transportation networks. While the first strategy would most likely lead to a continued focus on locally determined and developed transportation projects, the second strategy could develop more integrated transportation networks, either nationwide or along particularly congested corridors. The second strategy could be costly, and high benefits, which may be difficult to achieve, would be needed to justify this investment. |
Since 2012, there has been a rapid increase in the number of UAC apprehended at the U.S.-Mexican border. According to DHS’s U.S. Customs and Border Protection (CBP), the number of UAC from any country apprehended at the border climbed from nearly 28,000 in fiscal year 2012 to more than 42,000 in fiscal year 2013, and to more than 73,000 in fiscal year 2014. Prior to fiscal year 2012, the majority of UAC apprehended at the border were Mexican nationals. However, as figure 1 shows, more than half of the UAC apprehended at the border in fiscal year 2013, and nearly three-fourths of UAC apprehended in fiscal year 2014, were nationals of El Salvador, Guatemala, and Honduras. Recent data indicate the pace of migration from Central America remains high, though fewer migrants are being apprehended in the United States. According to DHS’s Border Patrol—a component of CBP—through May 2015, there have been nearly 23,000 UAC apprehensions at the southwest border in fiscal year 2015—compared with about 24,500 through May of fiscal year 2013 and nearly 47,000 through May of fiscal year 2014. However, according to research from the nongovernmental organization (NGO) the Washington Office on Latin America, Central American migrants are being detained in Mexico at a higher rate this year compared with last year, with more than 90,000 Central American migrants detained in Mexico during the first 7 months of fiscal year 2015 compared with around 50,000 during the same period of fiscal year 2014. Since 2012, apprehensions of UAC at the U.S.-Mexican border have generally increased between January and May, as shown in figure 2, according to data from DHS’s Border Patrol. These three countries face a variety of socioeconomic challenges. In February 2015, we reported that U.S. officials in El Salvador, Guatemala, and Honduras identified crime and violence, and economic and educational concerns, as among the primary causes of UAC migration to the United States. 
According to the United Nations Office on Drugs and Crime, these countries had three of the five highest homicide rates worldwide in 2012, the most recent year for which these statistics were available for all three countries: Honduras ranked first, with a homicide rate of 90.4 per 100,000 inhabitants; El Salvador fourth, with a rate of 41.2; and Guatemala fifth, with a rate of 39.9. According to this UN office, the surge in homicide levels in Central America in recent years is largely a result of violence related to the control of drug trafficking routes, turf wars between criminal groups, and conflict between organized criminal groups and the host government. A 2014 United Nations High Commissioner for Refugees study on UAC from Central America and Mexico noted that nearly half of the UAC interviewed for the study reported being affected by violence committed by gangs or drug cartels, while about a fifth reported being victims of domestic abuse. In addition, according to 2011 World Bank data, more than 60 percent of Hondurans, more than 50 percent of Guatemalans, and 30 percent of Salvadorans live below the poverty level. According to a Wilson Center publication, nearly 2 million Central Americans between the ages of 15 and 25 neither work nor attend school, and the highest proportion of these youth comes from El Salvador, Guatemala, and Honduras. In addition, all three countries rank below the regional average on Transparency International’s Corruption Perceptions Index. In September 2014, the governments of El Salvador, Guatemala, and Honduras issued a regional plan in response to the recent migration increase.
The plan, referred to as the Plan of the Alliance for Prosperity in the Northern Triangle: A Road Map, outlines four strategic actions, which seek to stimulate the productive sector to create economic opportunities, develop opportunities for people, improve public safety and enhance access to the legal system, and strengthen institutions to increase people’s trust in the state. The plan notes that income inequality presents a major challenge to the three countries, as the wealthiest 20 percent of the population accounts for more than half of overall income. In addition, the plan identifies a limited supply and quality of services in housing, early childhood health care, nutrition, and child development as among the main challenges to development, while also noting that weaknesses in the countries’ educational systems have resulted in workforces with less schooling and more limited skills as compared with those of other countries in the region. In addition to the problems within these countries, children who migrate illegally can encounter other risks during the journey to the United States. The journey can span hundreds or even more than a thousand miles, and some children make it on foot over desert terrain, where daytime summer temperatures can exceed 110 degrees Fahrenheit. Others travel on top of trains, such as La Bestia, or “the Beast,” the name given to the cargo trains that transport goods through Mexico to the United States. According to NGO reports, whether traveling by foot or on trains, child migrants are exposed to various dangers, including robbery, extortion, forced recruitment into gangs, abandonment, rape, and murder. A number of U.S. agencies provide assistance to the three countries. For example, USAID, State, DHS, IAF, and MCC have programs providing assistance in areas such as economic development, rule of law, citizen security, law enforcement, education, community development, and others.
In fiscal year 2014, USAID, State, DHS, and IAF allocated a combined $44.5 million for El Salvador, $88.1 million for Guatemala, and $78 million for Honduras. In addition, MCC signed a threshold program agreement with Honduras in fiscal year 2013 totaling $15.6 million, a compact agreement with El Salvador in fiscal year 2014 totaling $277 million, and a threshold program agreement with Guatemala in fiscal year 2015 totaling $28 million. The U.S. government also provides additional assistance through CARSI, which funds activities to improve law enforcement and justice sector capabilities, prevent crime and violence, and deter and detect border criminal activity, among other efforts in these three countries as well as in Belize, Costa Rica, Nicaragua, and Panama. Between fiscal years 2008 and 2014, U.S. agencies allocated more than $800 million for CARSI activities from various accounts. We previously reported that, through June 2013, more than 50 percent of funds allocated for CARSI activities had been designated for activities in El Salvador, Guatemala, and Honduras. Additional information on agency funding to the three countries is provided in appendix II. The administration has taken several recent actions related to these three countries. In December 2014, the administration started an in-country refugee and parole program, which is intended to allow certain parents who are lawfully present in the United States to request access to the U.S. Refugee Admissions Program for their children still living in El Salvador, Guatemala, and Honduras. In addition, in March 2015, the administration issued the U.S. 
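The per-country figures above can be tallied in a short sketch. Note that the MCC agreements were signed in different fiscal years (2013 through 2015), so the combined totals mix fiscal years and are only indicative:

```python
# FY2014 allocations by USAID, State, DHS, and IAF (millions of USD), per the report.
fy2014_allocations = {"El Salvador": 44.5, "Guatemala": 88.1, "Honduras": 78.0}
# MCC agreement totals (millions of USD), signed in FY2014, FY2015, and FY2013, respectively.
mcc_agreements = {"El Salvador": 277.0, "Guatemala": 28.0, "Honduras": 15.6}

combined = {c: round(fy2014_allocations[c] + mcc_agreements[c], 1)
            for c in fy2014_allocations}
for country, total in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{country}: ${total} million")
```

The tally shows El Salvador with the largest combined figure, driven almost entirely by the $277 million MCC compact.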
Strategy for Engagement in Central America, which includes the primary objectives of prosperity, governance, and security, and goals of an economically integrated Central America that is fully democratic; provides economic opportunities to its people; enjoys more accountable, transparent, and effective public institutions; and ensures a safe environment for its citizens. According to the strategy, important successes would include the establishment of strong regional coordination mechanisms and institutions; reducing violence to a point where no Central American country is among the top 10 countries in terms of homicide rates; a 50 percent reduction of the youth unemployment rate in Honduras, El Salvador, and Guatemala; full implementation of ongoing electrical interconnection projects and other initiatives aimed at making energy more affordable, cleaner, and more sustainable; and steady economic growth throughout the region such that the poverty rate is pushed to below 40 percent over the next decade. In its fiscal year 2016 budget request, the administration requested $1 billion for Central America, an increase of more than 200 percent from fiscal year 2014 levels. State and USAID have stated that the funding would support the U.S. Strategy for Engagement in Central America and the priority objectives identified in the Alliance for Prosperity Plan. According to agency budget documents, funding would seek to address the underlying factors of undocumented migration from Central America, among other priorities. Agency officials noted that the rapid increase in UAC migration is due to several emergent factors, including the proliferation of human smugglers, or coyotes. Agencies have taken some actions in response to the rapid increase in migration, including several intended to directly reduce illegal migration and combat coyotes. 
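The budget request above also admits a quick consistency check: an increase of “more than 200 percent” over fiscal year 2014 implies that the fiscal year 2014 funding level for the region was below one-third of the $1 billion request. A back-of-the-envelope sketch:

```python
request_millions = 1_000  # FY2016 request for Central America, in millions of USD

# An amount representing a more-than-200-percent increase over a base b
# satisfies request > b + 2*b = 3*b, so b < request / 3.
implied_fy2014_ceiling = request_millions / 3
print(f"FY2014 level below ${implied_fy2014_ceiling:.0f} million")
```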
In addition, agencies have a number of long-standing efforts, developed prior to the increase in migration, that seek to address pervasive violence, poverty, and other conditions agencies also identified as contributing to migration. Agency officials noted that a variety of factors likely caused the recent rapid increase in UAC migration, including the increased presence of coyotes, perceptions concerning U.S. immigration law, and recent improvements in the U.S. economy. In addition, agency officials noted that some pervasive problems have recently intensified in some places, including rising levels of violence and insecurity and worsening economic and social conditions. Agency officials we spoke with in all three countries identified several emergent factors as likely triggers of the rapid increase in migration, citing the growing presence of coyotes as one of the top factors. Officials in all three countries said that coyotes had proliferated and grown more influential and sophisticated in recent years. Officials from USAID and State in all three countries noted that coyotes were often well known and trusted in communities. According to USAID officials, when they conducted focus group interviews in Honduras with youth and outreach center coordinators in high-risk communities, including San Pedro Sula, participants noted that coyotes were easy to access. In addition, agency officials we spoke to in all three countries noted that coyotes had instituted new marketing and messaging tactics. For example, numerous officials in all three countries told us that coyotes offered package deals, such as offering three attempts to migrate to the United States for one fee—known as a “three-for-one” deal. Coyotes have also intentionally spread rumors and misinformation about U.S. immigration policy.
For example, agency officials told us that, in some cases, in an effort to drive smuggling business, coyotes led many people to believe children could migrate to the United States and receive permission to stay indefinitely if they arrived by a certain date. According to agency officials, general perceptions concerning U.S. immigration policy have played a growing role in UAC migration. Agency officials noted they relied on outreach efforts, focus groups, and other information sources to try to understand this factor. According to State officials in El Salvador and Guatemala, local media outlets have optimistically discussed comprehensive immigration reform efforts in the United States and sometimes failed to discuss the complexity of immigration reform. According to State officials, many Guatemalan citizens believe that undocumented migrants in the United States will be encouraged to send for their children in Guatemala so that the children can come to the United States and the families can benefit together from any upcoming comprehensive immigration reform, or even become eligible for Deferred Action for Childhood Arrivals. In addition, according to USAID officials, Honduran youth and coordinators of community centers who were interviewed as part of a USAID focus group indicated they believed the United States would allow migrant minors, mothers traveling with minors, and pregnant women to stay for a period of time upon arrival in the United States. Agency officials also noted that recent improvements in the U.S. economy had fueled increased UAC migration, enabling family reunification in the United States. In particular, State and USAID officials in Honduras noted that the improving economy had enabled parents who immigrated to the United States to send money back to their home country to pay coyotes so their children could migrate and reunify the family in the United States.
According to officials in El Salvador, as the U.S. economy improved, more Salvadorans have attempted to migrate to the United States to reunify with family. Agency officials noted that many children have spent years apart from their parents, being raised by a grandparent or another family member, and that in some cases, aging grandparents were no longer able to care for the children. This dynamic is further illustrated by examples from an internal USAID analysis of migration causes in Honduras, which highlighted migration-related risk factors associated with children living with extended family or non-family members in Honduras. Some agency officials also noted that the increased use of social media has enabled migrating families to be in more regular contact and to confirm if and when family members or friends arrive in the United States. Additionally, according to a study performed by State contractors in El Salvador, many people advertise immigration services through social media and offer travel services to ensure safe arrival in the United States. The use of social media can encourage migration, according to some agency officials. For example, officials in Guatemala noted that social media outlets enable migrants who have arrived in the United States to share messages and pictures with families in their home countries, an act that can serve as a powerful and influential endorsement of the decision to migrate. Agency officials in El Salvador similarly noted that when a potential migrant hears from someone in the United States who has managed to arrive and remain there undocumented, the communication can strongly influence their decision on whether to migrate. Violence, poverty, and poor access to education and other services have been pervasive development challenges in all three countries, predating the UAC migration increase.
However, according to agency officials we spoke to in all three countries, some of these problems have grown worse in recent years and could have contributed to the rise of UAC migration. For example, in Honduras, agency officials noted that levels and perceptions of violence had grown worse, in part because of the rise in extortions. Worsening security concerns also negatively affect access to education. For example, agency officials in El Salvador noted that many children will not attend school after the seventh grade because traveling to some schools requires crossing gang borders, and that girls in particular face the risk of being attacked or raped en route. In Guatemala, agency officials stated that poor economic and social conditions in the Western Highlands had declined even further in recent years. In addition, agency officials noted that deteriorating climate conditions, including several consecutive years of drought and a coffee rust blight that has hurt coffee production and cost jobs in Honduras and Guatemala, exacerbated long-standing economic concerns in many communities. Agency officials told us that, on the basis of their outreach efforts, reporting from NGOs, academics, and government officials, and focus groups, they had determined that these longstanding problems had intensified. In addition, we met with children from all three countries who offered similar insights concerning the causes of migration. For example, children at a USAID outreach center in San Pedro Sula, Honduras, noted the lack of educational and job opportunities in their communities as a reason for migrating. Children from a particularly violent neighborhood told us it was even more difficult for them to obtain a job because potential employers would sometimes choose not to hire them because of where they live. 
Children also described the ways in which violence leads to migration, such as by making it difficult for them to attend school if doing so requires them to travel from one neighborhood to another and cross a gang border. Children at an outreach center in El Salvador also noted that sometimes, even with an education, one cannot find work in El Salvador and that there are more opportunities and chances to succeed in the United States. Children at this same center indicated that the desire to migrate is even stronger for children with parents in the United States. Children from a youth center in Guatemala also noted violence and economic factors as motivations for migration, and many of these children already had family or knew someone who had migrated. Among the various agency actions taken in response to UAC migration, several seek to directly combat coyotes, which agency officials identified as a key emergent factor causing migration. Agencies also have established efforts to increase legal migration and improve migrant return centers. In response to the increase in UAC migration, DHS and State have supported several law enforcement and legislative outreach efforts that have marked an increased focus on investigating and dismantling smuggling operations in all three countries. For example, according to DHS officials from Homeland Security Investigations, in response to the rapid increase in UAC migration in 2014, DHS shifted the investigative priorities of its Transnational Criminal Investigative Units (TCIU) to target child-smuggling operations in all three countries. The units include host government police, customs officers, and prosecutors, among others, and are intended to facilitate information sharing and rapid bilateral investigations involving trafficking of people, money, drugs, and weapons, and other priorities. The units have focused their efforts on seeking to identify and dismantle criminal organizations involved in smuggling.
According to DHS and DOJ officials, TCIUs across the three countries coordinate efforts given the transnational nature of the smuggling rings. A DHS official in Guatemala told us the unit there was able to dismantle two of the seven criminal organizations it was investigating that were actively smuggling children. DHS, with State’s Bureau of International Narcotics and Law Enforcement (INL) funding, has indicated it plans to increase the size of these units. State/INL in Honduras is working with a DOJ resident legal advisor to assist the Honduran attorney general’s office in prosecuting trafficking and alien-smuggling cases. This assistance includes, among other things, providing training to Honduran prosecutors on developing cases against smuggling organizations. State/INL support in Guatemala has included assistance to reform police training, with a new emphasis on UAC-related issues in the community policing techniques, criminal investigations, and human rights curricula. State has participated in legislative and political outreach efforts to combat smuggling. For example, in Guatemala, State has advocated modifying certain laws that would better enable Guatemalan law enforcement to investigate and prosecute these cases and, if applicable, carry out appropriate penalties for the crimes. DHS and State carried out several public information campaigns between 2013 and 2015 intended to dissuade citizens of El Salvador, Guatemala, and Honduras from migrating to the United States. During that period, DHS carried out three public information campaigns. The campaigns in 2013 and 2014 focused on warning potential migrants of the dangers of the journey, while the 2015 campaign sought to increase awareness of requirements under the executive action on immigration, including Deferred Action for Childhood Arrivals, how it will be implemented, and who is eligible.
This campaign was launched in January 2015 but was stopped February 16, 2015, because of a federal court ruling that granted a preliminary injunction to prevent expansion of Deferred Action for Childhood Arrivals, among other things. The campaigns ran in El Salvador, Guatemala, Honduras, and Mexico, with ads placed on radio and TV stations, on billboards, at bus stops, and in and on buses (see fig. 3). According to an official from DHS CBP’s Office of Public Affairs, in developing the 2013 dangers-of-the-journey campaign, DHS incorporated feedback from various U.S. agencies and a working group composed of several DHS components; the embassies of El Salvador, Guatemala, and Honduras; and other international and nongovernmental organizations. DHS does not currently have an active campaign. The DHS official also noted DHS would like to develop a new campaign for early 2016 if funding is available. State public affairs officials we spoke to at the U.S. embassies in all three countries told us they used the DHS campaign materials and developed their own materials to launch related public information campaigns in-country while also supporting similar host government campaigns. For example, in Honduras the U.S. embassy’s public affairs section used social media and webinars to provide information on migration, while in Guatemala the public affairs section at the U.S. embassy placed ads in newspapers, on the radio, and on buses. In El Salvador, public affairs officials from the U.S. embassy collaborated with the host government to develop its message intended to deter migration. In addition, State officials from the consular affairs sections of the U.S. embassies in Guatemala and El Salvador have also made efforts to counter misinformation.
In Guatemala, State consular officials from the U.S. embassy incorporated UAC-related messages into their regular community presentations, such as by adding PowerPoint slides addressing the dangers of the journey and the importance of being aware of coyotes, while State consular officers from the U.S. embassy in El Salvador distributed information in consular waiting areas. In an effort to increase legal migration and reduce the number of children attempting to migrate to the United States, State and DHS have collaborated to implement a new in-country refugee/parole processing program. The program was announced in November 2014 and began accepting applications the following month. There is currently no deadline for filing an application through this program. Through this program, qualifying parents in the United States can petition on behalf of their children for refugee status, and if a child is ineligible for refugee admission but still at risk of harm, the child may be considered for parole on a case-by-case basis. The child must be unmarried; under the age of 21; a national of El Salvador, Guatemala, or Honduras; and residing in his or her country of nationality. Once a parent submits an application through a designated resettlement agency, DNA tests are conducted to prove the biological relationship, and the child is interviewed in country by a DHS official to determine whether the child qualifies for refugee status. As of June 2015, State officials in Washington, D.C., reported that the program had received 1,385 applications. Of those applications, 1,139 are from El Salvador, 225 from Honduras, and 21 from Guatemala. State officials also reported no interviews have yet been conducted. A DHS official in Guatemala noted DHS needed to improve its advertising of the program to Guatemalan citizens. A State official in Washington, D.C., noted in particular that more information advertising the program in indigenous languages was needed.
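The application figures above are internally consistent; a small sketch checking the country breakdown against the reported total and computing each country’s share (figures from the report):

```python
# In-country refugee/parole applications received as of June 2015, per State officials.
applications = {"El Salvador": 1_139, "Honduras": 225, "Guatemala": 21}
total_reported = 1_385

# The per-country figures sum exactly to the reported total.
assert sum(applications.values()) == total_reported

shares = {c: round(100 * n / total_reported, 1) for c, n in applications.items()}
print(shares)  # El Salvador accounts for roughly 82 percent of applications
```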
USAID and State have an interagency agreement to provide assistance to strengthen migrant reception and repatriation efforts in all three countries. Efforts under this program have included providing immediate, basic assistance to returnees (see fig. 4); undertaking construction efforts to expand and improve existing facilities; and working with host governments to systematize data gathered from the returned migrants. According to officials from the International Organization for Migration, an intergovernmental organization that is implementing the program, this improved data collection should enable better long-term tracking of migrants—including the causes of their migration—and their ability to return to and reintegrate with their communities of origin. USAID, State, IAF, and MCC programs have long sought to address what officials have identified as underlying causes of migration, including persistent development challenges such as violence, poverty, and lack of educational opportunities. For example, USAID’s Country Development Cooperation Strategies (CDCS) for each country identify citizen security and economic growth as strategic objectives. The agency supports programs in each country seeking to reduce violence, improve economic opportunities through improved agricultural practices and other efforts, and increase access to education and health services, among others. For example, the USAID crime and violence prevention project in El Salvador, which focuses on expanding community-based crime and violence prevention efforts and supporting the government of El Salvador’s National Strategy on Violence Prevention, includes capacity building for municipalities to prevent violence and establishes outreach centers for youth and children. The agency also supports similar efforts in Guatemala and Honduras (see fig. 5).
State has supported programs in each of the three countries we reviewed seeking to reduce violence and improve citizen security by offering training and technical support to prosecutors, the police, and border patrol units, among others (see fig. 6). For example, in Honduras, State officials at the U.S. embassy described justice sector capacity-building efforts, including strengthening host government judicial personnel’s ability to develop cases against criminal organizations and other efforts. This includes support for DOJ advisors in Honduras. IAF officials said that IAF supports local initiatives in more than 880 communities in El Salvador, Guatemala, and Honduras, with nearly half of its investment in the three countries intended to directly benefit youth through job creation and other community-based activities. For example, IAF provided training and technical assistance to help a farmers’ cooperative in El Salvador improve its production and marketing of organic coffee. Finally, MCC’s compact in El Salvador and threshold program in Guatemala—each in development prior to the recent migration increase—include programs to improve the quality of secondary education to assist youth in finding employment. USAID, State, and IAF outlined plans to modify some of these long-standing efforts in response to the rise in UAC migration. For example, in Guatemala, USAID outlined plans to increasingly target youth at risk of migration through various programs, to introduce agricultural programming, including coffee rust-resistant seedlings, and to provide nonagricultural economic opportunities for youth. State and DHS have outlined plans to strengthen border security efforts through their vetted units to stem migration, and to increase the size of antigang units in an effort to reduce violence. In addition, USAID, State, and IAF have each outlined plans to expand various programs to additional communities identified as having high levels of UAC migration.
These efforts are discussed in more detail later in the report. Agencies have generally located programs in alignment with long-term objectives for El Salvador, Guatemala, and Honduras, such as addressing areas of high poverty and violence. In response to the rapid increase in UAC migration to the United States, agency officials said that they reviewed program locations and determined that most programs were already located in areas that had experienced high levels of UAC out-migration. However, officials also indicated that they have made some adjustments and plan to locate more programs in communities with high levels of UAC migration. Most of the agencies in our review have established development objectives for Central America, some of which predate the rapid increase in UAC migration. These objectives are outlined in various strategy and planning documents. In some cases, the development objectives outline priority geographic locations for programs that agencies have identified as addressing underlying causes of UAC migration, such as crime and poverty. USAID’s CDCS documents, for example, outline development objectives for each country that focus on specific locations. For example, in Honduras, the CDCS contains two development objectives. The first development objective seeks to increase citizen security for vulnerable populations in high-density urban areas with high crime rates, and specifies two cities—Tegucigalpa and San Pedro Sula—and municipalities in an area referred to as the Northern Corridor as the geographic focus. The second development objective, to reduce extreme poverty in western Honduras, specifies the six western departments, which all have severe rates of poverty and undernutrition, as the focus of programming. 
Similarly, the CDCS for Guatemala, which contains an overarching goal of developing a more secure Guatemala that fosters greater socioeconomic development in the Western Highlands and sustainably manages its natural resources, states that the majority of USAID resources will be allocated to programming in the Western Highlands, while USAID officials noted most of the remainder was dedicated to security efforts in urban areas. In El Salvador, where USAID aims to increase citizen security, rule of law, and economic opportunity, the CDCS states that the government of El Salvador has identified 54 of 262 municipalities as “high crime,” where USAID will focus its crime prevention and education activities. State country planning documents similarly highlight strategic priorities for the three countries and in some cases outline priority geographic locations. State Integrated Country Strategy documents, which are multiyear mission planning documents, outline strategic priorities but do not specify geographic locations to the same extent as USAID’s CDCS documents. State/INL’s country plans indicate that they are aligned with U.S. strategies. In addition, these plans outline various program areas and goals associated with these program areas. The plans specify priority geographic locations for some programs associated with these program area goals. For example, in support of the INL goal in El Salvador of building the government’s ability to mitigate the influence of gangs and improve citizen security, the INL country plan outlines plans to launch the Model Police Precinct program in Quezaltepeque, Sonsonate, San Martin, Antiguo Cuscatlan, and La Libertad. MCC and IAF documents do not outline specific geographic priorities for programs in the three countries, but MCC and IAF officials offered examples of factors the agencies consider in determining where to locate programs.
For example, according to IAF officials, IAF awards its grants to grassroots and community-based groups in communities with economic and social disadvantages, often with a history of and increased risk for migration. Geographic location is considered one element of the selection strategy for each country. Also, according to an MCC official in El Salvador, as part of a full-time inclusive school approach, the program may invest in science labs in certain schools in order for other schools in the vicinity to also access them. Agency officials told us they drew on various sources of information to understand which areas in El Salvador, Guatemala, and Honduras had high levels of UAC migration. In particular, they told us a key point of reference was a DHS-produced map that showed the number of UAC by location of origin based upon Border Patrol apprehension data from January 1 to May 15, 2014. We previously reported that agency officials used this map and DHS data on UAC locations of origin, along with other sources of information, to understand underlying causes of UAC migration and inform programming decisions. During our fieldwork for this review, agency officials provided similar responses, noting that they used this map and DHS data to understand UAC origins or to cross-reference data on UAC origins derived from other sources of information produced by entities such as the International Organization for Migration, USAID’s Office of Transition Initiatives, host government agencies, and other local organizations. Agency officials told us that while the DHS-produced map may have limitations, they believed it to be generally accurate. As we previously reported, CBP officials identified various challenges to obtaining UAC location information, including the inability of children to accurately relay information on their origins, lack of documentation, and inability of border agents interacting with children to collect or record their information accurately.
State/INL officials in El Salvador, in particular, stated they did not believe that currently available data on UAC origins were reliable as a basis for locating programs, as some INL staff who have previously worked in the border regions believe that UAC provide false information because they are concerned about being traced back to their communities. In addition, some agency officials stated that the DHS map lacked specificity or detail, such as identifying neighborhoods associated with high UAC migration within the major cities. Nonetheless, USAID and State officials in the three countries told us that the top UAC locations of origin identified in the map were generally consistent, with a few exceptions, with their understanding of the top UAC locations of origin. Further, agency officials stated that their established programs were already located in these areas. In Honduras, where over half of the DHS-identified top 20 municipalities in terms of UAC locations of origin are situated, agency officials told us the DHS map confirmed for them that programs already existed in those locations. In Guatemala, USAID and State officials said that they consulted the DHS map and other available information about UAC origin locations and determined that there was a general overlap between those locations and agency programs. USAID officials in Guatemala noted that about 60 percent of the agency’s resources in Guatemala are used for activities in the Western Highlands, which these officials said they have identified as the primary area of UAC migration in that country. In El Salvador, USAID officials stated that, according to their review of the DHS map, their programs were already located in areas of high UAC migration. Finally, according to IAF, the DHS map illustrated a general overlap between the location of its grantees and locations with high levels of UAC migration.
We obtained information on the location of USAID and State/INL-funded programs in El Salvador, Guatemala, and Honduras; the location of IAF grantees in these countries; and the top UAC locations of origin in each country, as identified by DHS. Figure 7 shows the number of UAC apprehended by U.S. Border Patrol between January and May 15, 2014, by location of origin in each department across the three countries. Figure 8 shows the total number of State and USAID programs and IAF grantees in each department across the three countries. See appendix III for figures that disaggregate the number of program locations by country and by agency. Agencies have outlined plans and taken some steps in the three countries since the recent rise in UAC migration by adding or expanding activities in locations identified as having high levels of UAC migration. For example, according to State/INL’s current country plan for Honduras, State plans to expand violence prevention programs, such as the Gang Resistance Education and Training Program, to reach three new police metropolitan areas in Tegucigalpa and six police metropolitan areas in San Pedro Sula, two areas of the country that agencies have identified as having among the highest levels of UAC migration. In El Salvador, USAID outlined plans to expand educational opportunities to youth in additional municipalities with high levels of migration. For example, the Adopt-A-School program, which aims to support the efforts of the private sector, individuals, or institutions who wish to provide financial and other support to schools, has already been extended and expanded to reach additional beneficiaries, including those in municipalities with high numbers of UAC who have been repatriated to their communities of origin.
In Guatemala, USAID has outlined plans to expand citizen security efforts to areas outside Guatemala City with high levels of violence and UAC migration, and to expand crime prevention programs to departments in the Western Highlands affected by high levels of migration. USAID has also outlined planned programs that would target agriculture and small business development activities to departments in the Western Highlands with high migration levels. In addition, State/INL has outlined plans to expand municipal policing efforts to the Western Highlands in response to high levels of out-migration from those communities. Also, according to IAF officials, IAF issued grants to several organizations in recent years to support projects in migrant-sending communities or to address migrant issues. As of June 2015, IAF officials indicated IAF had identified at least 19 new programs and 14 modified programs in El Salvador, Guatemala, and Honduras that will seek to address underlying causes of migration in areas with high levels of UAC migration. However, agency officials also noted the importance of other factors in locating programs. Agency officials suggested that long-term strategic objectives, such as promoting economic growth and social development, remain the focus of their work as they create and locate programs. Moreover, other agency responses to the rise in UAC migration have sought to address UAC migration but have not necessarily been located in areas of high UAC migration. For example, efforts to rehabilitate repatriation centers are located near borders or transit points; legal technical assistance seeks to strengthen government institutional capacity to combat smuggling; countersmuggling operations target key migration routes; and public information campaign activities, such as radio, television, and social media placements, do not necessarily target areas associated with high UAC migration.
Most agencies have established evaluation processes to measure progress of programs identified as addressing causes of UAC migration. Agency processes to evaluate program effectiveness vary in approach, though DHS and State have not always obtained timely feedback on information campaigns intended to reduce migration, making it difficult to know the effectiveness of these efforts. Agencies have outlined challenges to and approaches for sustaining programs that seek to address the causes of UAC migration. Most agencies we reviewed have processes in place to track the progress of programs they have identified as addressing causes of UAC migration. However, DHS has not established performance targets against which to measure units that combat child smuggling, making it difficult to track the progress of these efforts. USAID has developed several documents to assist its missions in developing and managing monitoring and evaluation efforts, which include guidance on establishing performance indicators, baselines, and targets, and on planning and managing evaluations. USAID missions also articulate monitoring and evaluation plans through other key documents, including CDCS documents and Performance Management Plans (PMP). The three countries’ CDCS documents also outline illustrative evaluation questions, which can be used to guide long-term program evaluations. Table 1 provides examples of CDCS development objectives, performance indicators, and evaluation questions for El Salvador, Guatemala, and Honduras. Mission PMPs outline monitoring and evaluation plans—including performance indicators, baseline data, and performance targets—to assess progress toward the achievement of CDCS goals. USAID established a PMP for Guatemala that was approved, according to the agency, in October 2013, covering fiscal years 2012 through 2016, which outlines targets by performance indicator by fiscal year for each development objective. 
For example, for the mission goal of improving levels of economic growth and social development in the Western Highlands, one performance indicator is the prevalence of stunted children under 5 years of age in target regions, with, according to USAID officials, a goal of 63 percent in fiscal year 2013 and 54 percent in fiscal year 2017. USAID anticipates the PMP for El Salvador will be completed in October 2015, and the PMP for Honduras will be completed by late summer 2015. State’s INL bureau uses several documents to guide performance planning and country-specific measurement of programs it has identified as addressing causes of UAC migration. In 2013, State/INL developed several documents to assist INL personnel with designing, monitoring, and evaluating programs. INL country plans for these three countries outline priority areas for INL programs in each country as well as metrics for evaluation under these areas. These documents outline selected activities, indicators, and performance targets for the three countries across each INL program area. For example, the INL plan for El Salvador outlines an objective of establishing model police precincts, with performance targets such as establishing eight new model police precincts within 2 years, and with homicides decreased by 10 percent in these new model police precincts within 4 years. The INL plan for Honduras outlines an objective of embedding a resident legal advisor to improve case management capacity for complex crimes involving human trafficking, with performance targets such as prosecutors increasing their rate of case closure in each of the first 2 years. State and USAID use various reporting mechanisms to monitor the progress of activities identified as addressing causes of UAC migration. Both State and USAID use State’s Bureau of Western Hemisphere Affairs (WHA) annual Performance Plan and Report (PPR) to track the performance of CARSI activities. 
The fiscal year 2013 WHA PPR, which was the most recent PPR available at the time of our review, provides information on USAID and State performance outputs against established targets. The WHA PPR provides some mission-specific performance information, but generally measures progress at an aggregated regional level. USAID and State also receive progress reports, such as weekly and quarterly reports, that track outputs, activities, and accomplishments during the reporting period. For example, weekly reports on State/INL-funded efforts to strengthen the Honduran border patrol provide updated information on activities such as the number of UAC encountered, as well as seizures, arrests, and inspections, with specific information on coyote arrests. INL quarterly CARSI reports provide updated program information on various CARSI efforts. IAF has developed an approach, which it refers to as the Grassroots Development Framework, to monitor its efforts, including selecting project-specific performance indicators and progress reporting. At the onset of specific projects, IAF works with grantee partners to select, from a standard set of 41 indicators, a subset of indicators considered most relevant to measuring the project’s desired objectives. Indicators cover areas such as how projects provide for basic needs, training, jobs and income, or how they improve organizational culture and capacity, among others. According to IAF officials, IAF works with grantees to ensure they know how each indicator is defined, to establish baseline conditions against which to measure progress, and to assist them in collecting performance data. Grantees are then required, according to IAF officials, to report every 6 months throughout the grant period on their progress against the selected indicators.
IAF’s office of evaluation verifies and aggregates results reported by grantees for its annual grant results report, which provides information on basic activity outputs, such as the number of beneficiaries receiving better access to health care as a result of a project, as well as on what it refers to as intangible results, such as the number of grantee partners reporting that individuals had improved their self-esteem as a result of the project. IAF outlined many performance indicators that would be used to measure the 17 projects it identified as having been developed or modified in response to the rapid increase in UAC migration in El Salvador, Guatemala, and Honduras. The most commonly used indicators for these 17 projects include the acquisition and application of knowledge and skills and the mobilization of resources. MCC has not yet developed monitoring and evaluation plans for its compact with El Salvador or threshold program with Guatemala but expects to do so in the near future. According to MCC’s compact with El Salvador, MCC and the Salvadoran entity managing the compact will develop a plan to monitor whether projects are on track to achieve their intended results, and to conduct evaluations to assess project implementation strategies, provide lessons learned, determine cost-effectiveness, and estimate the compact’s impact. The compact also notes that the results of these activities will be made publicly available on MCC’s website. According to MCC officials, a monitoring and evaluation plan for the El Salvador compact will be developed 90 days after the compact enters into force. According to MCC officials, MCC plans for the compact to enter into force no later than September 30, 2015. According to MCC officials, MCC expects to develop a monitoring and evaluation plan for its threshold program with Guatemala sometime between September 2015 and March 2016.
DHS/ICE has established performance indicators for its TCIUs, but has not established performance targets, making it difficult to track progress of these units’ efforts to combat UAC smuggling and other priorities. We have previously reported that performance measurement allows organizations to track progress in achieving their goals and gives managers crucial information to identify gaps in program performance and plan any needed improvements. In addition, according to Standards for Internal Control in the Federal Government, managers need to compare actual performance against planned or expected results and analyze significant differences. DHS’s Transnational Criminal Investigative Unit Executive Report provides overviews of TCIU efforts by country, including country-specific priorities. The report also outlines basic performance indicators used to track TCIU success. These measures are divided into three performance categories—enforcement, capacity building, and intelligence—with various types of outputs by category. The report also outlines success stories and enforcement statistics by country, such as the number of arrests and seizures. However, DHS/ICE has not set targets for these performance measures. A DHS official told us that State/INL, which provides funding for these units, has lead responsibility for measuring their performance. State/INL country plans for all three countries include a performance indicator of the number of unit investigations conducted, and performance targets for the number of investigations conducted within 2 and 4 years. According to DHS/ICE officials, however, DHS was not involved in the development of State/INL’s performance indicators and targets related to these units. When asked, DHS officials did not indicate to us why they had not established performance targets to accompany the performance indicators DHS has already outlined in its Transnational Criminal Investigative Unit Executive Report. 
Such targets would enable DHS to compare outputs—such as arrests made, investigations conducted, or foreign counterpart operations—against the pre-established targets, and to better assess TCIU efforts. USAID, State, and IAF have established processes to evaluate the effectiveness of their programs. However, DHS and State have not consistently evaluated their information campaigns intended to reduce migration, making it difficult to know the effectiveness of these efforts. USAID conducted several recent evaluations of its programs developed before the rapid increase in UAC migration but identified as addressing the causes of migration. These included evaluations of programs addressing crime and violence prevention and workforce development. For example, in July 2014, USAID published an evaluation of a workforce readiness program, which seeks to strengthen the basic workforce competencies of Honduran youth. The study, which was intended to understand the characteristics of youth participating in the program and the extent to which they had improved perceptions of their employability after participating in it, concluded that youth saw significant gains in job-seeking behaviors, soft-skills development, and number of internships obtained. The study also noted that more work needed to be done with youth to understand the skills necessary to compete for their desired jobs. In addition, a study requested by USAID evaluating the impact of community-based crime and violence prevention programs in El Salvador, Guatemala, Honduras, and Panama was published in October 2014. The evaluation, which gauged perceptions over time of crime victimization and citizen security, found that, as a result of USAID’s community-based prevention programs, residents feel safer, perceive less crime and fewer murders, and express greater trust in police. The study also found a decline in reported murders and extortions in participating communities across the four countries.
According to a USAID official, the findings from these evaluations inform future programming in several ways. First, USAID used these studies to inform the design of the current CDCS for Honduras, which was approved in December 2014. Second, the official noted that the workforce readiness program assessment led USAID to more systematically pre-assess students before enrolling them in workforce readiness training programs, which has led to improved certification rates among enrollees. The USAID official also noted that the workforce readiness evaluation is assisting the agency in developing a new workforce development activity intended to better link training and employment for at-risk youth. Third, the evaluation of the community-based crime and violence prevention programs led to a broader commitment from USAID to apply community policing principles to improve law enforcement and reduce violence and homicides in Honduras, according to the official. USAID officials and documents indicate that USAID plans to measure the impact on migration of some future programs. USAID officials in Honduras and Guatemala noted the agency has considered developing indicators that could measure the effect of programs on migration, such as whether a program affected a person’s decision to migrate. Two documents for planned USAID projects in Guatemala outline efforts to measure programs’ impact on migration. One document, for a proposed youth employment project, notes that the program would be judged to have failed when a youth drops out of the program because he or she migrated to the United States, moved, or became involved in criminal activities.
Another document for a planned community-level violence reduction project outlines a series of proposed results and performance indicators, including a result of a reduced number of under-aged migrants from targeted communities, with a related indicator of the percentage of households that report having sent under-aged youth to the United States in the last 12 months. USAID officials in El Salvador, however, noted that it would be difficult to measure the impact agency programs have on decisions to migrate, in part because some individuals migrate for purposes of family reunification. State/INL awarded a contract to evaluate all countries under the CARSI program, including programs in El Salvador, Guatemala, and Honduras, which began in September 2014, according to State/INL. According to State/INL, in addition to other oversight mechanisms, this contract will evaluate projects that are designed to address causes of UAC migration. This evaluation, which according to State/INL is scheduled to be completed in September 2016, is expected to examine whether all planned activities are being implemented and on schedule, whether activities are sustainable, what impact the programs have had, and whether programs have led to any unintended consequences, among other things. IAF conducts two types of project evaluations. First, IAF conducts an end-of-project assessment for all projects. According to IAF officials, upon the completion of the grant, IAF conducts a close-out visit to assess the extent to which the project’s objectives have been achieved. According to IAF officials, for this process the grantee partner compiles a narrative that details the project’s design, implementation, results, and expected sustainability and impact, and identifies what worked and what did not and key lessons learned. IAF also compiles summaries of best practices relevant to each project, according to IAF officials.
Second, each year IAF evaluates a smaller selection of projects that ended 5 years earlier. According to IAF officials, IAF conducts these evaluations, which it refers to as ex-post assessments, to determine the extent to which projects proved to be sustainable subsequent to IAF’s involvement. IAF began conducting ex-post assessments in 2009, according to IAF officials. These evaluations provide information such as project results, sustainability, and lessons learned. According to IAF officials, IAF has thus far conducted two ex-post assessments of projects in El Salvador, including an evaluation of a project intended to strengthen civic engagement and an evaluation of an agricultural assistance project. IAF has also conducted an ex-post assessment of a project to train midwives in Guatemala. IAF has not conducted an ex-post assessment of a project in Honduras. According to IAF officials, in 2015, IAF plans to evaluate projects with a focus on youth engagement, including two projects in El Salvador and one in Guatemala. IAF expects these evaluations to be available in 2016. IAF is considering adjusting evaluations in response to the increase in UAC migration. According to IAF officials, IAF is examining whether it could adapt its grassroots development framework to capture migration data, such as by developing a new indicator related to migration. IAF officials also noted that IAF may add questions to focus group sessions—which IAF conducts to obtain qualitative information about programs—about why people decide to migrate from or stay in their home countries. DHS has not evaluated all of its public information campaigns intended to reduce migration. As we noted earlier, DHS carried out campaigns in 2013 and 2014 focused on the dangers of migration, and in 2015 to increase awareness of requirements under the President’s executive action on immigration.
DHS’s Assistant Secretary for International Affairs and Chief Diplomatic Officer has referred to these campaigns as essential in combating the misinformation promoted by smuggling organizations, and stated in March 2015 that DHS will continue to support them. DHS ran its 2013 Dangers of the Journey campaign between February and May, a peak migration period that year and in recent years. Specifically, according to UAC apprehension data from DHS’s Border Patrol, the months with the most UAC apprehensions between fiscal years 2010 and 2013 were, in order, March, April, and May. In fiscal year 2014, the top four months were, in order, June, May, April, and March. At the conclusion of the 2013 campaign’s first phase, in April 2013, DHS evaluated the campaign, contracting for a survey of 1,800 citizens in El Salvador, Guatemala, and Honduras (600 per country), including an equal mix of youth and parents, to assess the campaign’s impact. Among the survey’s findings were that 72 percent of youth and adults recalled seeing the campaign and 43 percent recalled the campaign’s tagline. The survey results concluded that the campaign was highly credible, as it reinforced information respondents had experienced firsthand. However, according to a DHS/CBP official, CBP reporting and State research indicate that individuals in El Salvador, Guatemala, and Honduras give more credence to what they hear from relatives and friends than to what they hear on the radio and television. Despite this reporting, DHS has not conducted any subsequent campaign evaluations. Instead, DHS launched its 2014 campaign at the end of June, by which point migration levels had already peaked and reached record levels, as shown in figure 9. Although DHS tracked the total number of campaign spots, it did not evaluate the campaign’s effectiveness. A DHS document outlining the 2013 and 2014 campaigns indicates DHS intended to evaluate the second campaign following its conclusion in October 2014.
However, an official from DHS’s office of public affairs told us that DHS did not conduct the evaluation because of funding constraints. While evaluations certainly add cost, they are an important investment toward ensuring a campaign’s success. As a result, DHS did not obtain feedback on the effectiveness of its efforts to dissuade migration during a year of record migration levels. Moreover, given that DHS does not currently have an active campaign, and does not plan to launch a new campaign until 2016 at the earliest, as much as 3 years or more may pass between DHS campaign evaluations. Similarly, while State has collected some information on its public outreach efforts, it has not evaluated the effectiveness of its information campaigns. As we noted earlier, State public affairs offices in all three countries used the DHS campaign materials and developed their own materials to launch related in-country public information campaigns. In Honduras, the public affairs office tracked how many Facebook users the campaigns reached, and how many of these users posted shares, likes, and comments. For example, the office conducted a campaign on Facebook and Twitter to discourage the public from hiring coyotes. According to the embassy’s public affairs office, the campaign reached more than 28,000 Facebook users, of whom 1,765 posted shares, likes, or comments. In addition, State has conducted in-country focus groups and surveys that have informed some embassy public outreach efforts, particularly concerning the likely impact of certain public messages. However, according to public affairs officers we spoke to in all three countries, State has not evaluated the effectiveness of its actual in-country information campaigns. These public affairs officers told us they did not know what impact the campaigns had and believed it would be difficult to measure.
One public affairs officer said that the only information available on the campaigns’ impact is anecdotal. All three of these officers expressed either uncertainty or doubt concerning the effectiveness of past campaigns centered on the dangers of migration, indicating that it is uncertain whether such campaigns resonated with citizens of the three countries, since the dangers were already well known or would not dictate a person’s decision to migrate. As we reported in the past, evaluating information campaigns on a regular basis is a good practice that leading organizations follow, as doing so is considered integral to a campaign’s success. Collecting this sort of performance information on media campaigns can provide value in informing future campaign efforts, particularly given DHS’s desire to launch a new campaign early next year. Moreover, despite their cost, evaluations are a key investment toward program success. Agencies have identified various challenges to sustaining programs intended to address the underlying causes of UAC migration. USAID, State, and IAF project documents outline various factors that can hamper the long-term sustainability of projects, such as lack of accountability within government institutions, lack of political will, low tax collection, poor market conditions, and limited private sector engagement. For example, one State/INL country plan notes the host government’s limited political will to combat corruption, which is included as a key assumption underlying police professionalization and reform efforts. Similarly, several USAID project documents acknowledge that certain efforts may be only partially sustainable over time because of challenges relating to the host government, such as limited funds or weakened public institutions. In addition, agency officials told us corruption within police and other institutions and extortion against local businesses challenge the sustainability of certain projects.
We observed examples of how some of these factors have the potential to hamper assistance programs. For example, an interagency agreement between State and DOJ outlining efforts to train Honduran prosecutors includes an assumption that the government of Honduras would commit to having a certain number of prosecutors available for at least 18 months to participate in the program. However, at the time of our visit to the country, there were no active prosecutors participating in the program in Tegucigalpa. In El Salvador, where we visited a vocational school that, according to USAID officials, had been established in a joint partnership between USAID and a Salvadoran private company, we observed a computer lab filled with computers recently provided by USAID but with no teacher present. According to USAID officials in El Salvador, the school had asked the Salvadoran Ministry of Education to provide a salary for the teacher, but it had not yet done so at the time of our visit. Agencies have outlined approaches for seeking to ensure program sustainability despite the challenges described above. State, USAID, IAF, and MCC project documents emphasize the importance of prioritizing improvements to government institutions; identifying sustainable funding sources, such as the private sector; and advocating for legislative and policy reforms that support program objectives. In addition, agency officials have noted the importance of involving communities, the private sector, and the police in program design to ensure they are invested in and supportive of programs’ objectives. For example, MCC’s compact with El Salvador outlines how MCC assistance over the duration of the compact will decrease to ensure that the government of El Salvador assumes an increasing percentage of related costs.
IAF requires that its grantee partners contribute and mobilize their own resources for their projects and, according to IAF, it frequently works with its grantee partners to put in place strategic and financial plans beyond the period of the IAF grant. Some agency performance indicators seek to gauge progress toward meeting sustainability goals, such as by tracking the amount of private funds or other resources invested in community programs in target municipalities, the passage of laws that can facilitate key reforms, and the number of institutions with improved capacity as a result of the program. In recent years, the rapid increase in migration of unaccompanied alien children from Central America has highlighted the crises these children and their families face. U.S. agencies have efforts in place that seek to bring about lasting improvements in these countries and have taken actions with the goal of reducing migration. These actions include DHS’s and State’s support to Transnational Criminal Investigative Units that seek to disrupt and dismantle smuggling operations and to public information campaigns warning citizens of the dangers of migration and countering misinformation on U.S. immigration policy. The agencies have limited information, however, on the effectiveness of their efforts to reduce migration. While DHS/ICE has established categories of performance measures for its investigative units, and tracks basic statistical outputs associated with these categories, such as number of arrests, it has not established performance targets that could be used to gauge progress against preestablished goals for these important efforts to combat smugglers. In addition, DHS and State have collected limited information on the effectiveness of their public information campaigns, with DHS evaluating one of two campaigns and State evaluating none. 
In not evaluating its 2014 campaign in particular, DHS missed an opportunity to obtain valuable feedback on its efforts to dissuade migration during a year of record migration levels. Because DHS does not have an active campaign, it could go 3 years or longer between campaign evaluations. Such feedback could have offered insight on the effectiveness of past DHS campaign messages, and on their timing, particularly given that migration levels had already peaked and reached record highs by the time DHS launched its 2014 Dangers of the Journey campaign in late June. This information would have been particularly valuable in informing any future DHS campaigns. State has conducted research that has informed its public outreach efforts. However, State has not evaluated its campaigns’ effectiveness. Timely feedback is critical because campaigns intended to deter migration, which is cyclical in nature, are time-sensitive. Moreover, given the increased presence of children in recent migration cycles, these campaigns need to be timed well and deliver appropriate messages. Carrying out ineffective campaigns could lead to higher levels of migration to the United States, which is not only potentially costly in terms of U.S. taxpayer resources but costly and dangerous to the migrants and their families. We recommend the following two actions to strengthen agency performance measurement related to deterring child smuggling. Specifically, we recommend that: The Secretaries of State and Homeland Security instruct appropriate agency public affairs officers to integrate evaluation into their planning for, and implementation of, future public information campaigns intended to dissuade migration, such as campaigns warning of the dangers of migration, providing facts on U.S. immigration policy, or conveying other messages.
This could include ensuring that available migration data, such as DHS’s monthly data on UAC apprehensions, are used to inform the timing of these campaigns, and that the results of campaign evaluations are used to inform future campaigns to enhance their effectiveness. The Secretary of Homeland Security instruct DHS’s U.S. Immigration and Customs Enforcement to establish annual performance targets associated with the performance measures it has established for its Transnational Criminal Investigative Units. We provided State, USAID, DHS, IAF, MCC, and DOJ a draft of this report. State, USAID, DHS, and MCC provided written comments on the draft (see appendixes IV, V, VI, and VII, respectively). State concurred with the one recommendation directed to it, and DHS concurred with both recommendations. Specifically, State concurred with our recommendation that DHS and State integrate evaluation into their planning for, and implementation of, future public information campaigns intended to dissuade migration. State noted that it uses a variety of methods to determine the effectiveness and reach of information campaigns and will integrate evaluations into these methods. DHS also concurred with this recommendation and noted that it would consider and evaluate possible markers and metrics of success relevant to each campaign’s specific goals and target audiences. In addition, DHS concurred with our recommendation to establish annual performance targets associated with the performance measures it has established for its TCIUs. DHS also noted that it would work with host nation partners to establish goals to measure TCIU investigative activities and capacity development. State, DHS, IAF, and DOJ provided technical comments, which we have incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of State and Homeland Security, the Acting Administrator of the U.S.
Agency for International Development, the President of the Inter-American Foundation, the Chief Executive Officer of the Millennium Challenge Corporation, and the Attorney General of the United States. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at GianopoulosK@gao.gov or 202-512-8612. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. In this report, we reviewed (1) U.S. assistance in El Salvador, Guatemala, and Honduras addressing agency-identified causes of unaccompanied alien child (UAC) migration, (2) how agencies have determined where to locate these assistance efforts, and (3) the extent to which agencies have developed processes to assess the effectiveness of programs seeking to address UAC migration. To address our objectives, we obtained written responses from agency officials identifying programs targeted at addressing agency-identified causes of the rapid increase in UAC migration. We then obtained and analyzed documentation for these programs. To understand broader strategies underlying these programs, we reviewed various strategic and planning documents specific to El Salvador, Guatemala, and Honduras, including the Department of State (State) and U.S. Agency for International Development (USAID) Integrated Country Strategies, USAID Country Development Cooperation Strategies (CDCS), State annual operating plans, and State Bureau of International Narcotics and Law Enforcement Affairs (INL) country plans for each country; the Partnership for Growth El Salvador—United States Joint Country Action Plan; and the Millennium Challenge Corporation (MCC) compact with El Salvador. We also reviewed several interagency strategic and planning documents regarding U.S.
engagement with Central America, including the U.S. Strategy for Engagement in Central America, and the U.S. Strategy to Combat Transnational Organized Crime. We also interviewed U.S. agency officials in Washington, including officials from State, the Departments of Homeland Security (DHS) and Justice (DOJ), USAID, MCC, and the Inter-American Foundation (IAF). We also conducted fieldwork in El Salvador, Guatemala, and Honduras, where we interviewed officials from State, DHS, and USAID in each country, including the ambassadors and deputy chiefs of mission, the USAID mission directors, and DHS country attachés, and met with each embassy’s interagency UAC working group. In addition, we met with DOJ officials in El Salvador and Honduras, IAF officials and grantees in El Salvador and Guatemala, and an MCC official in El Salvador. During our fieldwork, we also interviewed representatives from host government agencies in each country, and from nongovernmental organizations in Washington and in El Salvador, Guatemala, and Honduras. We also visited U.S.-funded programs in Central America, including migration reception centers in El Salvador and Honduras. In addition, we met with children in all three countries who discussed their perspectives on causes of UAC migration, experiences, and participation in U.S.-supported programs. Their responses are nongeneralizable. To address our first objective on U.S. assistance in El Salvador, Guatemala, and Honduras addressing agency-identified causes of UAC migration, we reviewed agency responses to a set of questions we developed concerning what agencies identified as causes of the rapid increase in UAC migration, and agency actions taken in response. In addition, we asked officials we interviewed in Washington and in Central America to discuss the causes of the rapid increase in UAC migration, particularly seeking their perspective on what factors may have changed or emerged in recent years to cause the rapid increase. 
We reviewed all agency responses to determine the causes agency officials identified. We also asked these officials to discuss agency responses to UAC migration, including programs developed, modified, or planned in response, or programs that predated the increase in migration but that agencies identified as seeking to address underlying causes of migration. We also reviewed agency documents concerning these programs, including project concept papers and appraisal documents, progress-reporting documents including weekly and quarterly reports, and embassy reporting cables, among others. In addition, we obtained funding data from State, USAID, DHS, and IAF on agency funding to El Salvador, Guatemala, and Honduras from fiscal years 2012 through 2014 and funding for specific programs agencies identified as developed or modified in response to the rapid increase in UAC migration. We asked agencies a series of questions on how the funding data were produced, selected, and checked for accuracy, among other things. We determined these data were sufficiently reliable for our purposes. To address our second objective, on how agencies have determined where to locate programs that seek to address underlying causes of migration, we reviewed agency strategy documents, including those described earlier such as USAID and State country strategies, to determine agency strategic development objectives and, where applicable, priority geographic locations for these objectives. We also reviewed agency strategy and project documents to determine the locations of certain programs developed or modified in response to the rapid increase in UAC migration, including whether such programs were located in areas affected by high levels of UAC migration. 
We asked agency officials to discuss the factors they considered and prioritized in determining where to locate programs, including programs predating the rapid increase in UAC migration and programs developed in response to the rapid increase in migration, and the extent to which agencies considered communities identified as having high levels of UAC migration in locating such programs. In addition, we asked agency officials how they identified locations with high levels of UAC migration and their perspectives on the accuracy of available information on UAC locations of origin. We also obtained information, for each country, on the location of UAC communities of origin, of USAID and State/INL programs, and of IAF grantees. In particular, we obtained DHS data on communities DHS identified as having the highest levels of UAC migration between January 1, 2014, and May 15, 2014, for each country. These data included information on UAC locations of origin at the municipal level, and were used by DHS to create a map showing these DHS-identified top UAC locations of origin. DHS officials noted that there were inherent limitations in the accuracy of DHS apprehension data, which we discuss in this report and the report we issued on this subject in February 2015. To assess these data, we had a series of interviews with DHS officials to discuss the process and methodology by which they obtained these data on UAC apprehensions and used them to identify communities with the highest levels of UAC migration and create maps showing these results. We also discussed these DHS data with U.S. agency officials in Washington and Central America. These officials noted they found the DHS data to be generally accurate representations of their understanding of the top UAC locations of origin. Moreover, some of these officials noted that agencies also used the DHS maps in part to determine the extent to which their programs aligned with these top UAC locations of origin. 
Therefore, we determined that the DHS data on top UAC locations of origin were contextually relevant to the agencies’ own understanding of how the locations of their programs aligned with top UAC locations of origin. In order to analyze the locations from the DHS data, we combined the files to create a single master list of locations. We then matched this list to the GEOnet Names Server file from the National Geospatial-Intelligence Agency to create a standardized list of names. We then aggregated the locations by department and presented them in broad ranges. By aggregating these data by department and presenting them in map form in broad ranges, we believe they provide a reliable indication of the relative distribution of UAC-sending locations. We also obtained from USAID and State/INL the locations of all programs in each country, and, from IAF, the locations of IAF grantees in each country. As with the DHS data, we matched this list of programs and grantees to the GEOnet Names Server file from the National Geospatial-Intelligence Agency to create a standardized list of names, then aggregated the locations by department, and presented them in broad ranges. We did not include some program locations in our data due to a lack of specificity in agency documents on the locations of certain programs. We provided this information to offer context on the location of UAC communities of origin and of U.S. agency programs and grantees. To address our third objective, on the extent to which agencies have developed performance indicators to assess the effectiveness of efforts responsive to UAC migration, we reviewed agency documents on programs agencies identified as responsive to the rapid increase in UAC migration, including agency evaluation policies and guides, country strategy and planning documents for each country, monitoring and evaluation plans, standard operating procedures, program evaluations, and quarterly and other progress reports.
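The aggregation steps described above (standardizing municipality names against a gazetteer file, rolling counts up to the department level, and binning them into broad ranges for presentation) can be sketched in a few lines of code. This is a minimal illustration only: the gazetteer entries, apprehension counts, and range breakpoints below are hypothetical stand-ins, not GAO's actual data or thresholds.

```python
from collections import Counter

# Illustrative gazetteer mapping raw municipality-name variants to a
# (standard name, department) pair -- a stand-in for the gazetteer file
# described above. These entries are hypothetical examples.
GAZETTEER = {
    "san pedro sula": ("San Pedro Sula", "Cortes"),
    "s.p. sula": ("San Pedro Sula", "Cortes"),
    "tegucigalpa": ("Tegucigalpa", "Francisco Morazan"),
}

def aggregate_by_department(records):
    """Standardize place names, then sum counts per department."""
    totals = Counter()
    for raw_name, count in records:
        _std_name, department = GAZETTEER[raw_name.strip().lower()]
        totals[department] += count
    return totals

def to_broad_range(count, breaks=(100, 500)):
    """Bin an exact count into a broad range for presentation on a map.
    The breakpoints here are arbitrary, for illustration only."""
    if count < breaks[0]:
        return f"fewer than {breaks[0]}"
    if count < breaks[1]:
        return f"{breaks[0]}-{breaks[1] - 1}"
    return f"{breaks[1]} or more"

# Hypothetical per-municipality apprehension records.
records = [("San Pedro Sula", 320), ("S.P. Sula", 150), ("Tegucigalpa", 90)]
totals = aggregate_by_department(records)
ranges = {dept: to_broad_range(n) for dept, n in totals.items()}
```

Presenting only the broad range per department, rather than exact counts, reflects the approach described above for acknowledging the inherent limitations in the underlying apprehension data.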
We also interviewed agency officials in Washington, D.C., and in El Salvador, Guatemala, and Honduras concerning steps they have taken to monitor and evaluate programs that seek to address causes of UAC migration. We also reviewed Standards for Internal Control in the Federal Government and prior GAO work on performance measurement. In determining the importance of evaluating media campaigns as a good practice that leading organizations follow, we assessed various sources, including federal policies, Standards for Internal Control in the Federal Government, prior GAO reports on U.S. public diplomacy, and literature on practices for evaluating media campaigns. These sources outlined the importance of integrating evaluations into media campaigns and noted that while evaluations add cost, they are a worthwhile investment in campaign success. We conducted this performance audit from September 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Agencies we reviewed, including the U.S. Agency for International Development (USAID), the Departments of State (State) and Homeland Security (DHS), and the Inter-American Foundation (IAF), identified overall funding allocated for programs in El Salvador, Guatemala, and Honduras for fiscal years 2012 through 2014 (see table 2). In addition to the funding information provided in table 2, the MCC signed a threshold program agreement with Honduras in fiscal year 2013 totaling $15.6 million, a compact agreement with El Salvador in fiscal year 2014 totaling $277 million, and a threshold program agreement with Guatemala in fiscal year 2015 totaling $28 million.
Agencies we reviewed also provided funding information on programs they identified as most relevant to addressing unaccompanied alien child (UAC) migration. Specifically, agencies identified programs that were either developed or modified in response to the rapid increase in UAC migration, and provided information on total obligations and disbursements for these programs between fiscal years 2012 and 2014. As is indicated in table 3, State’s Bureau of International Narcotics and Law Enforcement noted it was unable to disaggregate UAC-targeted aspects of ongoing funding streams, and instead provided funding information for overall program areas that have been or will be used to address root causes and instances of UAC migration from El Salvador, Guatemala, and Honduras. Funding information for USAID, DHS, and IAF is provided in tables 4 through 6. Figures 10 through 21 provide country-level information on the number of apprehensions of unaccompanied alien children (UAC), by location of origin, as identified by the Department of Homeland Security (DHS) between January and May 15, 2014, and on the number of U.S. Agency for International Development (USAID), Department of State (State) and its Bureau of International Narcotics and Law Enforcement Affairs (INL), and Inter-American Foundation (IAF) programs and grantees in Guatemala, El Salvador, and Honduras. Each agency provided these data by municipality. These figures present these data aggregated at the departmental level. Some locations are not represented in these figures because of a lack of specificity in agency documents regarding the location of the program. Further, the programs were not weighted in any way, such as by number of beneficiaries or population served, or resources allocated to the program location. See appendix I for more information on how we obtained, analyzed, and presented this information.
In addition to the contact named above, Judith Williams, Assistant Director; Joe Carney; Rachel Girshick; Claudia Rodriguez; Dina Shorafa; Ashley Alley; Martin De Alteriis; Seyda Wentworth; John Mingus; Oziel Trevino; and Lynn Cothern made key contributions to this report.

According to DHS, the number of UAC apprehended at the U.S.-Mexican border climbed from nearly 28,000 in fiscal year 2012 to more than 73,000 in fiscal year 2014, with nearly three-fourths of those apprehended being nationals of El Salvador, Guatemala, and Honduras. Children from these three countries face a host of challenges, such as extreme violence and persistent poverty. Those who migrate can encounter even more dangers, such as robbery and abuse. GAO was asked to review issues related to UAC migration. In February 2015, GAO reported on U.S. assistance to Central America addressing the rapid increase in UAC migration. This report reviews (1) U.S. assistance in El Salvador, Guatemala, and Honduras addressing agency-identified causes of UAC migration; (2) how agencies have determined where to locate these assistance efforts; and (3) the extent to which agencies have developed processes to assess the effectiveness of programs seeking to address UAC migration. GAO reviewed agency documents and interviewed officials in Washington, D.C., and in Central America. U.S. agencies have sought to address causes of unaccompanied alien child (UAC) migration through recent programs, such as information campaigns to deter migration, developed in response to the migration increase and other long-standing efforts. The recent migration increase was likely triggered, according to U.S. officials, by several emergent factors such as the increased presence and sophistication of human smugglers (known as coyotes) and confusion over U.S. immigration policy. Officials also noted that certain persistent conditions such as violence and poverty have worsened in certain countries.
In addition to long-standing efforts, such as U.S. Agency for International Development (USAID) antipoverty programs, agencies have taken new actions. For example, Department of Homeland Security (DHS)-led investigative units have increasingly sought to disrupt human smuggling operations. U.S. agencies have located programs based on various factors, including long-term priorities such as targeting high-poverty and -crime areas, but have adjusted to locate more programs in high-migration communities. For example, Department of State (State) officials in Guatemala said they moved programs enhancing police anticrime capabilities into such communities, and USAID officials in El Salvador said they expanded to UAC-migration-affected locations. Most agencies have developed processes to assess the effectiveness of programs seeking to address UAC migration, but weaknesses exist in these processes for some antismuggling programs. For example, DHS has established performance measures, such as arrests, for units combating UAC smuggling, but has not established numeric or other types of targets for these measures, which would enable DHS to measure the units' progress. In addition, DHS and State have not always evaluated information campaigns intended to combat coyote misinformation. DHS launched its 2013 campaign in April, but launched its 2014 campaign in late June after migration levels peaked. Neither agency evaluated its 2014 campaign. Collecting performance information on media campaigns can have value in informing future campaign efforts to reduce child migration. GAO recommends that DHS and State take steps to integrate evaluations into their planning for, and implementation of, future information campaigns intended to deter migration. GAO also recommends that DHS establish performance targets for its investigative units. DHS concurred with both recommendations, and State concurred with the one recommendation directed to it.
NAFTA, which was agreed to by Canada, Mexico, and the United States in 1992 and implemented in the United States through legislation in 1993, contained a timetable for the phased removal of trade barriers for goods and services among the three countries. Beginning on January 1, 1997, Mexican passenger carriers that own and operate commercial buses and vans were to have been permitted to apply for the authority to provide scheduled service between Mexico and the United States. However, this increased access has not occurred because U.S.-Mexico negotiations concerning commercial motor vehicle safety measures to be implemented by Mexico have not been completed. In contrast, the U.S.-Canada border has been open to commercial passenger vehicles for many years. Until expanded access is granted, only commercial passenger vehicles from Mexico that are engaged in tour and charter service may travel beyond the U.S. commercial zones along the border (generally areas between 3 and 20 miles from the U.S. border towns’ northern limits, depending on each town’s population). As of May 1997, only seven Mexican companies had received FHWA operating authority to provide tour and charter services beyond the commercial zones. However, these and other Mexican commercial passenger vehicles may operate to any destination within the commercial zones. Commercial passenger vehicles entering the United States from Mexico include motor coaches, minibuses, school-bus-type vehicles, and vans (see fig. 1). Although there are 29 locations where commercial passenger vehicles from Mexico may enter the United States, about 85 percent of the commercial passenger vehicles enter at 4 major crossings: 2 in California (San Diego and Otay Mesa) and 2 in Texas (Hidalgo and Laredo). (See fig. 2.) Commercial passenger vehicles enter the United States through the U.S. 
Customs Service’s passenger vehicle ports of entry, which are physically separate from the crossings that commercial trucks use to enter the United States. Customs usually has one lane of passenger vehicle traffic dedicated to commercial passenger vehicles, which facilitates the processing of passengers through both the Customs and Immigration and Naturalization Service inspection points. To encourage safer commercial motor vehicle operation in the United States and to help achieve uniformity in commercial motor vehicle standards throughout the nation, FHWA has issued regulations on vehicle safety standards (e.g., tires, lights, brakes) and financial and operating standards (e.g., registration, insurance, commercial driver’s license, and hours of service requirements). FHWA’s safety regulations on commercial motor vehicles apply to, among other things, all vehicles designed to transport more than 15 passengers, including the driver, that operate within the United States. For the most part, the states have adopted the federal standards. FHWA maintains a presence in all states to promote commercial vehicle safety and ensures that state laws and regulations are compatible with federal commercial vehicle safety regulations. FHWA also provides policy direction and supports state-developed enforcement strategies through a motor carrier safety grant program. Although each commercial vehicle involved in interstate commerce on U.S. roads must meet all federal vehicle, operator, and financial standards, Canada, Mexico, and the United States have adopted roadside inspection procedures that focus on the most critical safety items. These inspection procedures, developed by the Commercial Vehicle Safety Alliance (CVSA), focus on those standards that, if not met, would lead to a commercial vehicle being placed out of service for serious safety violations.
FHWA and state safety inspectors use these procedures when inspecting commercial passenger vehicles entering the United States from Mexico. Commercial passenger vehicles that are placed out of service are halted until needed repairs are made. Safety inspectors who are qualified to conduct inspections of commercial trucks are also qualified to inspect commercial passenger vehicles. There are two CVSA procedures that have been used to inspect commercial passenger vehicles entering the United States from Mexico: level-1 and level-2 inspections. The level-1 inspection is the most rigorous—a full inspection of both the driver and the vehicle. The driver inspection includes ensuring that the driver has a valid commercial driver’s license, is medically qualified, and has an updated log showing the driver’s hours of service. The vehicle inspection includes a visual inspection and an extensive undercarriage inspection that covers the brakes, frame, and suspension. The level-2 inspection is similar to the level-1 inspection, except that it does not include an extensive undercarriage inspection. Customs data show that, from June 1996 through May 1997 (the latest data available), there were an average of about 598 northbound commercial passenger vehicle crossings each day along the U.S.-Mexico border (see table 1). However, counting practices vary somewhat among the ports of entry and, as a result, the traffic levels reported by Customs are understated by an unknown amount. At the San Diego and Otay Mesa, California, crossings, all commercial passenger vehicles, regardless of the vehicle capacity, are funneled through a single lane for commercial passenger vehicles. However, in Texas, Customs agents require commercial vans to use the lanes provided for private passenger vehicles. They told us that they do not always attempt to determine whether the vans are commercial or private passenger vans, which results in some commercial vans being counted as private passenger vehicles. 
Customs officials in Texas told us that the traffic counts are used primarily to determine the level of staffing that is needed at each crossing point. They also told us that they permit commercial passenger vans to enter the United States through private passenger lanes because, given their smaller size, the vans do not require as much inspection as motor coaches do. Also, according to a Customs official, it is difficult for Customs agents to identify whether some vans are carrying paying passengers or private passengers. While Customs records the number of crossings, it does not keep records on the number or type of individual commercial passenger vehicles that cross the border. Customs, FHWA, and state officials told us that they believe that most of the northbound cross-border commercial passenger traffic is of a repeat nature, such as airport and shopping center shuttle services. Thus, while Customs’ records show an average of 598 crossings daily, the number of individual commercial vehicles is smaller, but, again, to an unknown degree. In California, federal and state officials told us that most traffic at the San Diego crossing consists of motor coaches and school-bus-type vehicles providing shuttle service to destinations such as bus terminals, grocery stores, and parking lots just inside the U.S. border. Federal officials stated that few commercial vans enter the country at the San Diego crossing. Rather, most vans, such as those providing shuttle service to the San Diego airport, enter at the Otay Mesa crossing. Officials told us that some commercial passenger vehicles at both crossings may enter the United States up to 10 times a day. In Texas, federal officials told us that most of the Laredo cross-border traffic consists of U.S.-based carriers providing scheduled service to Dallas and Houston. 
A Customs official in Laredo estimated that while only 4 or 5 commercial passenger vans cross the border on weekdays, approximately 50 or 60 vans cross the border during the weekend. According to one Customs official, most commercial passenger vehicles at the Hidalgo, Texas, crossing are motor coaches; an estimated 90 percent of these vehicles travel to destinations within the commercial zone to the nearby border city of McAllen. Relatively few safety inspections of commercial passenger vehicles have taken place in the past year. FHWA inspectors in Texas and state inspectors in California conducted border safety inspections of 528 commercial passenger vehicles from January through May 1997 out of an estimated 90,000 border crossings during that period. (Because many commercial passenger vehicles may enter the United States several times a day, inspectors would not typically inspect the same vehicle each time it crossed the border.) About 22 percent of the vehicles inspected were placed out of service. Some of these were vehicles owned and operated by U.S. carriers. In comparison, the out-of-service rate for the 10,000 U.S. commercial passenger vehicles inspected on the nation’s roads from October 1996 through June 1997 was about 10 percent. FHWA inspectors in California and state inspectors in Texas had not conducted any inspections as of May 1997. The dearth of safety inspections, coupled with insufficient information on the number and kinds of Mexican commercial passenger vehicles entering the United States, precludes any assessment of whether these commercial passenger vehicles are safe and are being operated safely. FHWA and state officials told us that because many more commercial trucks enter the United States from Mexico than do commercial passenger vehicles, they spend most of their time inspecting commercial trucks. 
About 12,000 commercial truck crossings occur along the border each day compared with about 598 commercial passenger vehicle crossings (a 20-to-1 ratio). Moreover, about 45 percent of the 25,000 trucks inspected upon entering the United States from Mexico were placed out of service for serious safety violations in calendar year 1996. FHWA is not conducting safety inspections of commercial passenger vehicles entering California from Mexico. According to an FHWA official in California, the two federal inspectors assigned to the California border are focusing all of their inspection efforts on the commercial trucks entering the United States from Mexico because (1) these trucks continue to display serious violations of insurance and operating authority requirements and (2) congestion at the border crossings does not allow adequate space for vehicle inspections to be conducted. California state safety inspections of commercial passenger vehicles entering the United States from Mexico have been limited to two 1-day strike force efforts (see fig. 3). In total, the California Highway Patrol conducted level-1 inspections of 144 vehicles and placed 37 (26 percent) vehicles out of service for serious safety violations, such as steering or brake problems. During the first strike force effort on April 20, 1997, safety inspectors inspected vehicles near the passenger vehicle border crossings at San Diego and Otay Mesa. At the San Diego crossing, state officials directed commercial passenger vehicles to stop at a curbside about 1 mile from the border crossing for inspection because space was insufficient to conduct vehicle inspections at the Customs border crossing (see fig. 3). For vehicles crossing at Otay Mesa, state officials diverted commercial passenger vehicle traffic from the passenger crossing to the state truck inspection facility about 1 mile away. 
The second strike force took place at two federal Immigration and Naturalization Service border patrol posts just north of San Diego on April 26, 1997. A California Highway Patrol official stated that future border inspections of commercial passenger vehicles will depend on funding increases because current staffing levels are not sufficient for increased inspection activity. FHWA inspectors primarily conducted level-2 safety inspections of commercial passenger vehicles in Texas from January through May 1997 (see fig. 4). They also have conducted several strike forces. In total, FHWA inspectors inspected 384 commercial passenger vehicles and placed 80 (21 percent) of them out of service for serious safety violations. The eight FHWA safety inspectors assigned to the Texas border are responsible for inspecting both commercial trucks and commercial passenger vehicles that enter the United States from Mexico. They have devoted about one-eighth of their time to commercial passenger vehicle inspections. Over a 2-week period in February 1997 at the Hidalgo and Pharr crossings, the first FHWA strike force conducted level-2 safety inspections of 132 vehicles arriving from Mexico. Twenty-eight (21 percent) of these vehicles were placed out of service for serious safety violations, such as inoperative brakes or air suspension problems. Of these 28 vehicles, 24 were owned and operated by U.S. carriers, 17 of them by a single U.S. company. FHWA conducted two other strike forces in Laredo, one to identify commercial passenger van traffic patterns and another to address U.S. carrier complaints about alleged illegal van operators. The strike forces conducted document checks (e.g., proof of vehicle registration, operator’s license, and insurance) of vans entering the United States. A 3-day effort beginning on Good Friday and ending Easter Sunday, a holiday weekend that FHWA officials believed would see an increase in cross-border van activity, proved uneventful.
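The state-level inspection figures reported above combine to the overall totals cited earlier (528 vehicles inspected, of which about 22 percent were placed out of service), and the arithmetic is easy to cross-check. The short sketch below uses only the counts reported in this section; the structure is illustrative, not any agency's actual record format.

```python
# Inspection counts reported in this section: California Highway Patrol
# strike forces and FHWA inspections in Texas, January-May 1997.
inspections = {
    "California": {"inspected": 144, "out_of_service": 37},
    "Texas":      {"inspected": 384, "out_of_service": 80},
}

def oos_rate(inspected, out_of_service):
    """Out-of-service rate as a whole percentage."""
    return round(100 * out_of_service / inspected)

# Per-state rates: California 37/144 -> 26%, Texas 80/384 -> 21%,
# matching the percentages cited in the text.
rates = {state: oos_rate(d["inspected"], d["out_of_service"])
         for state, d in inspections.items()}

# Combined totals: 144 + 384 = 528 inspected, 37 + 80 = 117 out of
# service, consistent with the "about 22 percent" figure cited earlier.
total_inspected = sum(d["inspected"] for d in inspections.values())
total_oos = sum(d["out_of_service"] for d in inspections.values())
combined_rate = oos_rate(total_inspected, total_oos)
```

Both state rates exceed the roughly 10 percent out-of-service rate reported for U.S. commercial passenger vehicles inspected nationwide over a comparable period, which is the comparison the report draws.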
Traffic was extremely light, and FHWA inspectors found only two violations. During a 3-week strike force in April and May 1997, FHWA inspectors cited 11 van operators with 22 violations for lack of proof of insurance or registration. All of the vans cited were owned and operated by U.S. carriers. FHWA investigators discovered that these vehicles were operating without proper insurance coverage or Department of Transportation operating authority. FHWA assessed these van operators a total of $32,000 in penalties for these violations. As a result of these findings, FHWA has directed its inspectors at the border crossings to increase their focus on both domestic and Mexican vans, as opposed to larger commercial passenger vehicles, when conducting their commercial passenger vehicle inspections. Texas safety inspectors are not inspecting commercial passenger vehicles arriving from Mexico because (1) their priority is to inspect commercial trucks entering the United States from Mexico, (2) FHWA is currently inspecting commercial passenger vehicles at the border, and (3) they need a budget for these activities from the state legislature and inspection locations that provide for passenger safety while inspections are taking place. FHWA also investigates foreign and domestic commercial passenger carriers for violations of federal regulations, such as operating authority requirements, in response to complaints filed by U.S. carriers. In Texas, FHWA officials are addressing five commercial passenger carriers alleged to be Mexican carriers operating beyond U.S. commercial zones without federal operating authority. According to the FHWA official responsible for following up on these allegations, FHWA has determined that all of these companies are U.S. companies. An FHWA official in California told us that no complaints about alleged illegal Mexican carriers have been filed with the agency in that state. 
FHWA has provided Mexican officials with guidance on operating and safety requirements for commercial passenger vehicles. For example, an FHWA official in Arizona told us that on several occasions he spoke to the Mexican Consulate in Nogales in response to requests for information on the requirements and regulations applicable to a tour and charter operator that wanted to transport a group to Disneyland. The FHWA official told us he sent the Consulate a package of information on obtaining proper operating authority, applicable safety regulations, and other requirements. In Texas, an FHWA official prepared a bilingual packet of information containing operating and safety requirements for Mexican commercial vehicles and presented it to Mexican officials from the state of Tamaulipas. We provided the Department of Transportation with a draft of this report for review and comment. We met with officials including the national motor coach program coordinator in FHWA’s Office of Motor Carriers, the special assistant to the associate administrator in the Office of Motor Carriers, and a senior analyst in the Office of the Secretary. DOT generally agreed with the contents of the draft report. DOT also offered several technical and clarifying comments, which we incorporated where appropriate. To achieve our first objective, we obtained the U.S. Customs Service’s commercial passenger vehicle traffic data for the period from June 1996 through May 1997. We also visited seven border crossings, where almost 90 percent of the commercial passenger vehicles from Mexico enter the United States. We discussed the nature of cross-border commercial passenger vehicle traffic with Customs, Immigration and Naturalization Service, Department of Transportation, and state commercial vehicle enforcement officials in Arizona, California, New Mexico, and Texas. We also discussed cross-border traffic with university researchers. 
To achieve our second objective, we discussed inspection practices with Department of Transportation officials and state enforcement officials in Arizona, California, New Mexico, and Texas. We observed federal commercial passenger vehicle inspection activity in Texas and state commercial passenger vehicle inspection activity in California. We obtained commercial passenger vehicle inspection reports from Department of Transportation and California Highway Patrol officials. We also met with the Texas Bus Association, the American Bus Association, and several U.S. bus company officials to discuss cross-border safety issues involving commercial passenger vehicles. With the exception of not verifying Customs’ cross-border crossing data and inspection results reported by FHWA and California, we performed this work in accordance with generally accepted government auditing standards. We performed our work from January 1997 through July 1997. We are sending copies of this report to congressional committees with responsibilities for transportation issues; the Secretaries of Transportation and the Treasury; the Administrator, FHWA; the Director, Office of Management and Budget; and the Commissioner, U.S. Customs Service. We will also make copies available to others. If you or your staff have any questions about this report, please contact me at (202) 512-3650. Major contributors to this report were Marion Chastain, Paul Lacey, James Ratzenberger, Deena Richart, and Angel Sharma.

Phyllis F. Scheinberg
Associate Director, Transportation Issues

Pursuant to a congressional request, GAO reviewed whether commercial passenger vehicles entering the United States from Mexico are meeting U.S. safety standards, focusing on: (1) the number and types of commercial passenger vehicles entering U.S. border states from Mexico; and (2) actions taken by the Federal Highway Administration (FHWA) and U.S. border states to provide safety inspections for commercial passenger vehicles arriving at the U.S.-Mexico border. GAO noted that: (1) according to the U.S. Customs Service, there were about 218,000 commercial passenger vehicle crossings from Mexico to the United States, a daily average of 598 crossings, from June 1996 through May 1997, the latest data available; (2) about 85 percent of these crossings occurred at four crossing points, two in California and two in Texas; (3) while Customs records the number of vehicle crossings from Mexico into the United States, many of these vehicles may cross the border several times a day (e.g., airport shuttles) and each crossing is included in Customs’ vehicle crossing count; (4) furthermore, Customs does not record the identity of individual vehicles, the type of vehicle (e.g., motor coaches or vans), or whether the vehicle is owned by either a U.S. or Mexican carrier; (5) as a result, no reliable information exists either on the actual number of Mexican-owned commercial passenger vehicles that enter the United States or on how many of each type of vehicle enters the country--information needed to assess the extent to which these vehicles are safe and are operated safely; (6) FHWA and state inspectors have carried out few safety inspections of commercial passenger vehicles entering the United States from Mexico primarily because their emphasis has been on inspecting commercial trucks; (7) FHWA inspectors in Texas and state inspectors in California conducted border safety inspections of 528 commercial passenger vehicles from January through May 1997 out of an estimated 90,000 crossings; (8) about 22 percent of these commercial passenger vehicles were placed out of service for serious safety violations, such as steering or brake problems; (9) FHWA inspectors in California and state inspectors in Texas had not conducted any inspections as of May 1997; and (10) the dearth of safety inspections, coupled with insufficient information on the number and kinds of Mexican-owned commercial passenger vehicles entering the United States, precludes any assessment of whether these commercial passenger vehicles are safe and are being operated safely.
The federal government uses direct loans and loan guarantees as tools to achieve numerous program objectives, such as assistance for housing, farming, education, small businesses, and foreign governments. At the end of fiscal year 1997, the Department of the Treasury reported that the federal government’s gross direct loans outstanding totaled $216.6 billion, and loan guarantees outstanding totaled $712.4 billion. Before enactment of the Federal Credit Reform Act of 1990, credit programs—like most other federal programs—were recorded in budgetary accounts on a cash basis. While this basis reflected cash flows, it distorted the timing of when costs would actually be recognized and, thus, distorted the comparability of credit program costs with other programs intended to achieve similar purposes, such as grants. For example, for direct loans, the budget generally showed budget authority and outlays for loans disbursed that exceeded repayments received from all past loans in that year. Therefore, in its first year a direct loan program appeared in the budget to be as costly as a grant program, even though much of the money lent would later be repaid. Cash-basis budgetary recording also created a bias in favor of loan guarantees over direct loans. Loan guarantees appeared to be free in the short term because cash-basis recording did not recognize that some loan guarantees result in costs when the underlying loans default. FCRA changed the budgetary treatment of credit programs beginning with fiscal year 1992 so that their costs could be compared more appropriately with each other and with the costs of other federal spending. FCRA requires that agencies have budget authority to cover the program’s cost to the government in advance, before new direct loan obligations are incurred and new loan guarantee commitments are made. The act therefore requires agencies to estimate the cost of extending or guaranteeing credit, called the subsidy cost.
This cost is the present value of disbursements—over the life of the loan—by the government (loan disbursements and other payments) minus estimated payments to the government (repayments of principal, payments of interest, other recoveries, and other payments). For loan guarantees, the subsidy cost is the present value of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (for loan origination and other fees, penalties, and recoveries). FCRA assigned to OMB the responsibility to coordinate the cost estimates required by the act. OMB is authorized to delegate to lending agencies the authority to estimate costs, based on written guidelines issued by OMB. These guidelines are contained in sections 33.1 through 33.12 of OMB Circular No. A-11, and supporting exhibits. The Federal Accounting Standards Advisory Board (FASAB) developed the accounting standard for credit programs, Statement of Federal Financial Accounting Standards No. 2, Accounting for Direct Loans and Loan Guarantees (SFFAS No. 2), which became effective with fiscal year 1994. This standard, which generally mirrors FCRA, established guidance for estimating the cost of direct and guaranteed loan programs, as well as for recording direct loans and the liability for loan guarantees for financial reporting purposes. SFFAS No. 2 states that the actual and expected costs of federal credit programs should be fully recognized in both budgetary and financial reporting. To accomplish this, agencies first predict or estimate the future performance of direct and guaranteed loans when preparing their annual budgets. The data used for these budgetary estimates are generally reestimated after the fiscal year end to reflect any changes in actual loan performance since the budget was prepared, as well as any expected changes in assumptions and future loan performance. 
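The present-value calculation described above can be sketched in a few lines. This is an illustrative simplification only: the `subsidy_cost` function, the single annual discount rate, and the cash flow figures are assumptions made for the example, not the actual OMB credit subsidy model, which discounts each cash flow using Treasury interest rates.

```python
def subsidy_cost(outflows, inflows, discount_rate):
    """Net present value of a credit program's cost to the government.

    outflows: payments by the government per period (loan disbursements,
              default claims, interest subsidies, other payments).
    inflows:  payments to the government per period (principal repayments,
              interest, fees, penalties, recoveries).
    """
    return sum((out - inn) / (1.0 + discount_rate) ** t
               for t, (out, inn) in enumerate(zip(outflows, inflows)))

# A $100 direct loan disbursed today and repaid in two annual installments
# of $52 each (principal plus interest), discounted at an assumed 5 percent
# rate. The positive result is the subsidy cost for which FCRA requires
# budget authority in advance.
cost = subsidy_cost([100.0, 0.0, 0.0], [0.0, 52.0, 52.0], 0.05)
print(round(cost, 2))  # prints 3.31
```

Under cash-basis budgeting, the same loan would have shown a $100 outlay in its first year; under FCRA, only the roughly $3 present-value cost is recognized up front.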
This reestimated data is then used to report the cost of the loans disbursed under the direct or guaranteed loan program as a “Program Cost” on the agencies’ Statement of Net Costs after loans are disbursed. Agency management is responsible for accumulating sufficient, relevant, and reliable data on which to base the estimates. Further, SFFAS No. 2 states that agencies should use the historical experience of the loan programs when estimating future loan performance. To accomplish this, agencies use cash flow models based on various assumptions, often referred to as cash flow assumptions, such as the number and amount of loans that will default in a given year—known as the default assumption. Those assumptions that have the greatest impact on the credit subsidy vary by program and are often referred to as key cash flow assumptions.

Statement on Auditing Standards No. 57 states that auditors should evaluate the reasonableness of estimates in the context of the financial statements taken as a whole. As part of the annual financial statement audits, agency cash flow models and assumptions are assessed to determine if management has a reliable basis for its credit subsidy estimates.

In 1997, the Credit Reform Task Force of the Accounting and Auditing Policy Committee was formed in order to address key issues surrounding the implementation of FCRA and the related federal accounting standard. This task force developed a Technical Release, Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Credit Reform Act, which has been approved by FASAB and is expected to be issued by OMB during fiscal year 1999. This Technical Release identifies specific practices that, if fully implemented by credit agencies, will enhance their ability to reasonably estimate loan program costs. These practices include the following:

Accumulating sufficient, relevant, and reliable supporting data that provides a reliable basis for agencies’ estimates of future loan performance. For example, to make reasonable projections of future loan defaults, recoveries, prepayments, or other key cash flows, agencies should use reliable records of historical experience and take into consideration current and forecasted economic conditions.

Conducting periodic comparisons of estimated loan performance to actual cash flows in the accounting system. This comparison allows agencies to identify and research significant differences and determine whether assumptions related to expected future loan performance need to be revised.

Calculating timely reestimates, based on the most recently available data, of the loan program’s cost and including the reestimates in the current year’s financial statements and budget submissions. By performing timely reestimates, agencies are including their best estimate of a loan program’s cost in the agency’s financial statements and budget submissions.

Comparing cash flow models to legislatively mandated program requirements to ensure that current cash flow models reasonably represent the cash flows of the loan program based on the laws and regulations that govern them.

Coordinating estimates of loan program cost among the budget, accounting, and program staff. These officials should work together to ensure that various practices, including those described above, are implemented and operating effectively and that all key assumptions have been coordinated and reviewed by the budget, accounting, and program offices.

Performing sensitivity analyses to identify which cash flow assumptions, such as defaults, recoveries, or prepayments, have the greatest impact on the cost of the loan program. Knowledge of these key assumptions provides management with the ability to monitor the economic trends that most affect the loan program’s performance. These analyses also allow agencies to more efficiently focus their efforts on providing support for the key assumptions, which need to be documented to pass the test of an independent audit.

Ensuring that agency cash flow models are well organized, documented, and, to reduce the chance of errors, require minimal data entry. This documentation should include the rationale for using the specific model, the mechanics of the model, including formulas and other mathematical functions, and sources of supporting data.

Establishing formal policies and procedures for calculating estimates of loan program cost, including a formal review process. Documented policies and procedures, as well as a formal review process, are important internal controls that are designed to help ensure continuity when there is employee turnover and to calculate reasonable, well-supported cost estimates.

During the summer of 1998, in response to our report on the fiscal year 1997 governmentwide consolidated financial statement audit, OMB directed agencies that did not receive an unqualified opinion on their financial statements to develop action plans to address identified financial management weaknesses. As a result, three of the five agencies in our review, HUD, VA, and USDA, prepared action plans to address, among other things, problems with preparing reasonable estimates of their loan program costs. Because SBA and Education received unqualified opinions on their fiscal year 1997 financial statements, these agencies were not required to, and did not, prepare formal action plans. In a March 1998 report on credit reform estimation problems, we indicated that the five key credit agencies had problems estimating the subsidy cost of credit programs. During this prior review, we examined data for the same 10 programs (listed in the “Objectives, Scope, and Methodology” section) discussed in this report to identify trends and causes for the changes in subsidy estimates.
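The sensitivity-analysis practice described above can be illustrated with a toy model. Everything below is hypothetical: the `guarantee_cost` function and its parameter values stand in for an agency’s far more detailed cash flow model, and the one-percentage-point perturbation is simply one common way to rank assumptions by their effect on cost.

```python
def guarantee_cost(loan_amount, default_rate, recovery_rate,
                   fee_rate, discount_rate, years):
    """Toy present value of a loan guarantee's cost to the government:
    expected default claims net of recoveries, spread evenly over the
    loan term, minus origination fees collected up front."""
    cost = -fee_rate * loan_amount  # fees are a payment to the government
    annual_claim = loan_amount * default_rate * (1 - recovery_rate) / years
    for t in range(1, years + 1):
        cost += annual_claim / (1 + discount_rate) ** t
    return cost

# One-at-a-time sensitivity: bump each assumption by one percentage point
# and record the change in estimated cost per $100 guaranteed.
base = dict(loan_amount=100.0, default_rate=0.08, recovery_rate=0.40,
            fee_rate=0.02, discount_rate=0.05, years=10)
baseline = guarantee_cost(**base)
for name in ("default_rate", "recovery_rate", "fee_rate"):
    bumped = dict(base, **{name: base[name] + 0.01})
    delta = guarantee_cost(**bumped) - baseline
    print(f"{name}: {delta:+.3f}")
```

The assumption producing the largest change in estimated cost is the one whose supporting historical data most deserves documentation and audit attention.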
The resulting report noted that the lack of timely reestimates, as well as the frequent absence of documentation and reliable information, limited the ability of agency management, OMB, and the Congress to exercise intended oversight. The report contained broad recommendations for improving oversight of credit reform implementation, including ensuring that (1) estimates are prepared accurately and (2) documentation supporting subsidy estimates included in the budget and financial statements is prepared and retained. Appendix I provides additional background information on estimating the cost of credit programs. Our objectives were to assess (1) the ability of agencies to reasonably estimate the cost of their loan programs, including whether they used practices identified in the Credit Reform Task Force’s Technical Release as being effective in making these estimates and (2) the status of agencies’ efforts to ensure that computer systems used to estimate the cost of credit programs are Year 2000 compliant. We selected a sample of 10 programs—5 direct loan programs totaling $52.1 billion and 5 guaranteed loan programs totaling $558.1 billion—from the five agencies with the largest domestic federal credit programs: the Small Business Administration and the Departments of Education, Housing and Urban Development, Veterans Affairs, and Agriculture. We generally selected programs that had the most credit outstanding or highest loan levels at each agency. Specifically, these programs were the following: 7(a) General Business Loans Program and Disaster Loan Program, which totaled 72 percent of SBA’s loan guarantees and 73 percent of its direct loans, respectively; Federal Family Education Loan Program and William D.
Ford Direct Loan Program, which totaled 100 percent of Education’s loan guarantees and 50 percent of its total loans receivable, respectively; Mutual Mortgage Insurance Fund and General and Special Risk Insurance Fund Section 223(f) Refinance, which totaled 81 percent of HUD’s loan guarantees; Guaranty and Indemnity Fund and the Loan Guaranty Direct Loan Program, which totaled 100 percent of VA’s post credit reform loan guarantees and 69 percent of its total loans receivable, respectively; and Farm Service Agency Farm Operating Loans Program and Rural Housing Service Single Family Housing Program, which totaled 20 percent of USDA’s direct loans. Generally, to accomplish these objectives, we evaluated the process the agencies used to estimate the cost of their loan programs during fiscal year 1997, including whether the agencies used practices outlined in the Credit Reform Task Force’s Technical Release that enhanced their ability to reasonably estimate loan program costs. In addition, we determined whether agencies had a reliable basis for the underlying assumptions for their estimates of loan program performance by assessing the support for key cash flow assumptions. We used the financial statement audit work of the respective agency auditors as a starting point for our analyses. Further, we obtained information on the status of agencies’ efforts to ensure that computer systems used to estimate the cost of credit programs are Year 2000 compliant. Our work was conducted in Washington, D.C., and St. Louis, Missouri, from September 1997 to November 1998 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the following officials or their designees: the Administrator of Small Business and the Secretaries of Education, Housing and Urban Development, Veterans Affairs, and Agriculture. 
All of the entities provided written comments, which are discussed in the respective “Agency Comments and Our Evaluation” sections of this report and are reprinted in appendixes III through VII. Further details of our objectives, scope, and methodology are in appendix II. For the 10 credit programs at the five key credit agencies we reviewed, only SBA and Education were able to reasonably estimate the cost of their credit programs for financial reporting purposes and received unqualified opinions on their fiscal year 1997 financial statements. However, the data that Education used to prepare its budget estimates, which were different from the data used to prepare its financial statements, had not been validated. Further, SBA made errors in the reestimate it submitted for budget purposes. HUD, VA, and USDA were not able to prepare reasonable estimates, which contributed to their qualified opinions or disclaimers of opinion on their fiscal year 1997 financial statements. These problems also call into question the reliability of the loan program data these agencies submitted to the Congress for future budget decisions. HUD, VA, and USDA have prepared action plans to correct some of their loan cost estimation problems. In addition, during 1998, HUD focused considerable effort on making reasonable cost estimates of its loan programs. Further, while not required to prepare formal action plans, both SBA and Education have planned or acted to correct deficiencies in their loan estimation process. SBA based its fiscal year 1997 estimates of loan program costs on reliable records of historical loan performance data and was therefore able to make a reasonable estimate of the cost of these programs on its fiscal year 1997 financial statements. 
However, for the two programs we reviewed, SBA initially made errors in the reestimates of its loan programs’ costs, including using incorrect discount rates; these errors, which its independent public accountant uncovered, required large adjustments to the draft financial statements. As a result of making these adjustments, SBA received an unqualified opinion on its fiscal year 1997 financial statements. However, because the fiscal year 1997 budgetary reestimate was included with the fiscal year 1999 President’s budget prior to the audit adjustments, the budgetary reestimate contained erroneous data. Since the inception of credit reform in 1992, SBA has placed significant emphasis on gathering reliable key cash flow data and, with OMB’s assistance, developed sophisticated cash flow models to estimate future loan performance and cost. Beginning in 1992, SBA devoted considerable resources to evaluating its existing financial management systems and determining what modifications would be necessary to allow it to reasonably estimate loan program costs under credit reform. Since these initial efforts, SBA has continued to further refine its estimates of loan program costs and the related underlying assumptions. For example, during 1997, SBA hired consultants to study and develop refined loss and recovery estimates for the Disaster Loan Program. SBA followed a number of practices that enhanced its ability to make reasonable financial statement and budgetary estimates of loan program costs for the two programs we reviewed. For example, SBA developed an extensive database of historical cash flow information, which provided a reliable basis for its estimates of credit program costs. Further, SBA, with assistance from OMB, established sophisticated, well organized cash flow models that, when compared with actual historical data, reasonably estimated future loan performance.
Because of this database and SBA’s sophisticated cash flow model, SBA was able to calculate reasonable estimates of loan program costs without significant manual intervention or requests for data from outside entities. SBA also routinely compared estimated loan performance to actual costs recorded in the accounting system to assess the reasonableness of its estimates of future loan performance and costs. Finally, in preparing their estimates for the two programs we reviewed, individuals from SBA’s program, budget, and accounting offices coordinated their work. However, during the audit of SBA’s fiscal year 1997 financial statements, the independent public accountants identified material internal control weaknesses related to estimating the cost of credit programs. Specifically, the independent public accountant reported that incorrect data, including discount rates, were used in the 1997 reestimate, and errors existed in some of the reestimated cash flow models. In aggregate, these errors resulted in SBA recording over $221 million in adjustments to its financial statements, which enabled the independent public accountant to render an unqualified opinion. However, these adjustments were not identified until after SBA submitted its fiscal year 1999 budget to OMB and, as a result, this budget submission and the President’s budget misstated the cost of these loan programs. Further, the independent public accountant reported that SBA lacked adequate internal controls over the estimation process. For example, SBA did not retain the cash flow models for one of the programs we reviewed for fiscal years 1992 through 1997. Although these models are normally a part of performing the reestimates, and should be retained as a matter of routine record-keeping, SBA was able to calculate a reasonable financial statement reestimate by using a more recent cash flow model. 
Because SBA received an unqualified audit opinion on its fiscal year 1997 financial statements, OMB did not require SBA to prepare a formal action plan to address the weaknesses identified in the estimation process. However, in the audit report on the fiscal year 1997 financial statements, the independent public accountant made recommendations, with which we concur, to address the material internal control weaknesses described above, including developing formal policies and procedures for estimating the cost of credit programs and implementing a formal supervisory review process to identify and correct potential errors. In response to these recommendations, SBA developed and implemented formal policies and procedures, including a formal supervisory review process. In addition, SBA adopted other practices, such as performing sensitivity analyses and calculating its fiscal year 1998 reestimate earlier than the fiscal year 1997 reestimates, to allow sufficient time to include any necessary adjustments resulting from the audit in the data presented in the President’s budget. These actions should help SBA ensure that future errors are detected and corrected promptly and that budgetary and financial estimates of loan program costs are reasonable. SBA agreed with the findings in this report. (SBA’s comments are reprinted in appendix III.) The Department of Education was able to prepare reasonable credit program estimates for its fiscal year 1997 financial statements, based on information obtained through a significant data gathering effort from its guaranty agencies. However, the audited estimates differed materially from the credit subsidy estimates based on Education’s own database, which raises questions about the validity of Education’s database. Further, Education’s credit program estimates for its budget submission for these two programs were based on the questionable data from its own database.
Until the information in the existing database is determined to be reliable, Education will continue to expend considerable time and resources in order to make reasonable loan program cost estimates. While the IG concluded that Education’s fiscal year 1997 financial statement estimates were reasonable, several internal control weaknesses were reported. For example, the IG reported that Education needed to establish the validity of its principal database, the National Student Loan Data System (NSLDS), to provide a basis for preparing reliable loan estimates and to establish sufficient controls to detect material errors in its loan estimates. Data from NSLDS were used to prepare the fiscal year 1997 budgetary estimates because Education’s staff believed that the data were reliable. However, the weaknesses found in Education’s internal controls raise questions about the quality of this database. Based on Education’s continuing validation efforts, it believed that estimates based on data from NSLDS would be similar to estimates based on data received from the guaranty agencies. Education therefore planned to use the guaranty agencies’ data to validate the estimates that were based on NSLDS data. As part of this plan, Education prepared two estimates for financial statement purposes—one based on data from NSLDS and the other based on data from the guaranty agencies. The IG audited the estimates based on the data provided by the guaranty agencies and concluded that these estimates differed materially from the estimates based on Education’s own database. 
Further, the IG reported that “Education’s own procedures for preparing its loan estimates were not sufficiently rigorous to detect this material misstatement” and that “Education’s ability to continue to prepare auditable loan estimates for its financial statements depends on establishing a reliable store of up-to-date historical loan data.” The IG audited the guaranty agency-based estimates and found them to be reasonable. However, obtaining data from the guaranty agencies was a time-consuming way for Education to develop its estimates of loan program costs and should be viewed as a short-term solution only. It is not feasible for Education to carry out this process year after year instead of relying on the principal system that was designed to provide this information. In addition to these data problems, the auditors identified some instances where Education’s estimation practices could be improved. For example, Education did not have documented policies and procedures for calculating the cost of credit programs, including a formal review process, to routinely identify and correct potential errors. These important internal controls could help ensure the reasonableness of Education’s complex estimates in the future. While Education did some limited sensitivity analyses, it did not document the analyses. Such documented analyses would have allowed its auditors to focus their audit procedures on the key cash flow assumptions. For fiscal year 1997, the IG audited more than 20 cash flow assumptions to validate the assumptions used by the model to estimate the cost of the loan programs. In the future, conducting a complete sensitivity analysis and documenting the results could save significant time and effort by helping focus management’s and the auditors’ attention on the reasonableness of the assumptions that have the greatest impact on the program’s cost.
Using guaranty agency data, Education was able to calculate reasonable financial statement estimates of credit program costs, partly due to the sophistication of its credit subsidy model. Education developed its own method for estimating and reestimating credit program costs, called the Budget Loan Model (BLM) System. This model uses a series of assumptions to estimate cash flows over the life of the loans. BLM is used in concert with the OMB credit subsidy model because Education believes it captures the unique requirements of its program. During the fiscal year 1997 audit, BLM was reviewed and the auditors determined that BLM included and modeled all key elements of Education’s various loan program requirements, such as loan term and repayment grace periods, and used historically valid data obtained from the guaranty agencies. In addition to developing its sophisticated cash flow model, Education followed other effective practices for cost estimation, such as documenting its cash flow model, by developing a Technical Manual and User’s Guide. Also, for financial statement purposes, agency staff compared estimates of future loan performance, based on the data obtained from the guaranty agencies, to actual costs recorded in the accounting system and determined that the financial statement estimates reasonably predicted future loan performance. Further, calculating the estimates of loan program costs at Education was a coordinated effort between the accounting, budget, and program staff. For example, representatives from these three offices met on a regular basis and jointly developed the cash flow assumptions used in the financial statement estimates. In addition, Education compared the cash flow models to program requirements and determined that its models accurately captured all material aspects of the credit programs. Finally, Education also calculated timely reestimates for both budgetary and financial statement purposes. 
However, as discussed previously, the NSLDS data used for the budget reestimates were questionable. Since Education received an unqualified audit opinion on its fiscal year 1997 financial statements, the agency was not required by OMB to prepare a formal action plan to address any financial management issues. However, according to Education officials, Education has efforts underway to address the challenges it faces in preparing reasonable estimates of its loan program costs. For example, Education is continuing its efforts to review and correct inaccurate or incomplete data in NSLDS, including providing detailed technical instructions to data originators and providers (schools, lenders, and guaranty agencies). These efforts should help Education address its major loan cost estimation challenges. However, until these efforts are successfully completed and Education can rely on the data in NSLDS, it will be forced to repeatedly undergo an onerous process in order to make reasonable estimates of loan program costs. In its June 1998 audit report on Education’s fiscal year 1997 financial statements, the IG made several recommendations, which we concur with, related to improving the agency’s loan cost estimation process. 
Specifically, the IG recommended that Education (1) maintain documentation of the source of the data used in developing assumptions for its cash flow models and the models themselves, (2) validate the data used in the models, (3) update data annually to reflect the current activity, (4) perform and document sensitivity analyses to identify factors that significantly impact the loan estimates or may vary in the future as well as factors that rely on assumptions not based on current data, (5) establish clearly defined roles and responsibilities for staff and groups responsible for developing estimates of Education’s loan programs, (6) develop formalized policies and procedures for estimating the cost of credit programs, and (7) perform quality assurance reviews of loan estimates and document the results of these reviews. Education agreed with these recommendations and, as discussed above, has acted or plans to act to address them. Education stated that it does not believe our report provides a basis to conclude that data from NSLDS are of questionable validity. Education further stated that NSLDS data were “highly comparable” to data received from the guaranty agencies and that adjustments made to the loan cost estimates as a result of the fiscal year 1997 financial statement audit process were also reflected in the agency’s budget forecasts. Education therefore concluded that the budget estimates were “highly reliable.” We disagree. Our conclusion that the data from NSLDS are of questionable validity is based on our review of the IG’s fiscal year 1997 financial statement audit report and supporting work papers. We also held numerous discussions with IG staff responsible for the audit.
Based on this work, we determined that (1) the data in NSLDS have never been validated by the agency, despite the fact that the IG, beginning with the fiscal year 1995 audit, recommended this be done in order to provide a basis for preparing reasonable loan cost estimates, (2) material differences were noted by IG staff between the data in NSLDS and that provided by the guaranty agencies, and (3) the adjustments made to the loan cost estimates as a result of the fiscal year 1997 audit were made during the summer of 1998—several months after the fiscal year 1997 budget estimates were submitted to OMB as part of the President’s fiscal year 1999 budget—and, therefore, these adjustments could not have been reflected in those budget estimates. Further, we reviewed the December 1998 letter the IG submitted to the House Majority Leader and the Chairman, House Committee on Government Reform and Oversight, in response to their request that the IG update its assessment of the most significant challenges facing the Department of Education. The IG’s letter identified improving the data integrity of Education’s information management systems, including NSLDS, as one of Education’s most significant management challenges. The report stated that the “Student Financial Assistance loan programs contain inaccurate and incomplete data.” Specifically, the IG reported that the September 1998 audit of NSLDS found that about 3.7 million loan records totaling $10.7 billion had not been updated with lender-provided loan status and principal and interest balance data. Until Education corrects its inaccurate loan data and successfully completes a validation of NSLDS, any loan cost estimates prepared based on NSLDS will continue to be questionable. (Education’s comments are reprinted in appendix IV.) At the end of fiscal year 1997, HUD was unable to provide adequate supporting data for its financial statement credit subsidy estimates. 
This lack of supporting data also calls into question the quality of HUD’s budget submission related to its credit subsidy estimates. Since then, HUD, with the assistance of independent contractors, has focused significant effort on this area and has made considerable progress towards developing the supporting data necessary to reasonably estimate loan program costs, including those for the two programs we reviewed. These revised data are currently undergoing an audit, which, once completed, will help determine the reliability of the data and, thus, the reasonableness of HUD’s loan cost estimates. Most of HUD’s loan guarantees are made by the Federal Housing Administration (FHA), which, as a government corporation, follows private sector generally accepted accounting principles (GAAP). FHA received an unqualified audit opinion on its fiscal year 1997 financial statements prepared in accordance with GAAP. However, in order to consolidate FHA’s financial results into HUD, credit program cost information must be converted to federal accounting standards. HUD has had difficulty making this conversion due to the differences in the two accounting approaches for credit programs. Under SFFAS No. 2, when estimating the liability and related expense for future defaults on guaranteed loans, FHA must estimate, for the life of the loans, all cash disbursements related to the loan guarantee and the associated collateral (for example, payment of default claims and costs to dispose of foreclosed property) as well as all cash receipts (for example, loan guarantee premiums and proceeds from the sale of foreclosed properties). GAAP considers most of the same receipts and disbursements but does not include loan guarantee premiums (a significant cash receipt for FHA) when calculating the same liability and related expense. Under GAAP, the loan guarantee premiums are generally reported as revenues.
Further, calculating the present value of receipts and disbursements is not required under GAAP. Because of the different methods of calculating this liability and related expense, the GAAP-based amount would be significantly different from what would be calculated under SFFAS No. 2. For fiscal year 1997, FHA was unable to prepare financial statements that complied with the requirements of SFFAS No. 2 in time to be audited and included in HUD’s consolidated financial statements. As a result, the auditors issued a qualified opinion on HUD’s fiscal year 1997 financial statements. However, an independent public accounting firm is currently auditing FHA’s fiscal year 1997 balance sheet prepared in accordance with SFFAS No. 2 as part of its audit of the opening balances for the fiscal year 1998 financial statement audit. While HUD was not recording the cost of its loan guarantee programs on its financial statements in accordance with the requirements of SFFAS No. 2, it was estimating the future cash flows of its loan guarantee programs for budget purposes. In the spring of 1998, we evaluated the cash flow models used to develop the fiscal year 1997 budget for the two programs we reviewed and identified numerous problems, such as formula errors and inconsistent calculations of cash flow assumptions. We also determined that these cash flow models were not documented and the Mutual Mortgage Insurance (MMI) Fund model required extensive manual data entry, which increased the likelihood of errors. Additionally, HUD was unable to provide supporting data for many of the cash flow assumptions in the models. These problems with the cash flow models and the supporting data raise concerns over the reliability of HUD’s fiscal year 1997 budget submission for its credit subsidy estimates. In the summer of 1998, HUD, with the assistance of independent contractors, focused significant effort on correcting the errors in these models. 
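The measurement difference described above, whether guarantee premiums are netted against the liability and whether cash flows are discounted, can be illustrated with a small sketch. The cash flow figures are hypothetical, and the GAAP treatment is deliberately simplified.

```python
# Sketch of the two measurement approaches for a loan guarantee liability.
# SFFAS No. 2: present value of all future cash flows, including premiums.
# GAAP (simplified here): same claim/recovery flows, undiscounted, with
# premiums reported as revenue rather than netted against the liability.
# All amounts are hypothetical.

claims = [(1, 40_000), (2, 60_000)]    # (year, default claims paid)
recoveries = [(2, 20_000)]             # (year, foreclosed property sales)
premiums = [(1, 25_000), (2, 25_000)]  # (year, guarantee premiums received)
rate = 0.05                            # illustrative discount rate

def pv(flows):
    """Present value of (year, amount) cash flows at the discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in flows)

sffas_liability = pv(claims) - pv(recoveries) - pv(premiums)
gaap_liability = (sum(a for _, a in claims)
                  - sum(a for _, a in recoveries))

print(f"SFFAS No. 2 liability: ${sffas_liability:,.0f}")
print(f"GAAP-style liability:  ${gaap_liability:,.0f}")
```

Because premiums are netted in and all flows are discounted, the SFFAS No. 2 amount here comes out far below the GAAP-style amount, which is the kind of significant difference the conversion must reconcile.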
These contractors assisted in gathering and developing sufficient, relevant, and readily available supporting data as a basis for the estimates of loan program costs; performed extensive detailed analyses of these cash flow models; identified additional errors; and revised the models. The contractors also helped HUD implement other effective cost estimation practices. For example, the contractors compared cash flow models to program requirements and determined that these revised models accurately captured all significant aspects of the program, such as loan origination fees and rebates of premiums when loans are repaid early. In addition, the contractors assisted HUD in documenting these cash flow models, including sources of data and the mechanics of the model. HUD, with the assistance of its contractors, also followed other effective cost estimation practices. For example, HUD recently performed sensitivity analyses for its credit programs, including the two we reviewed, to identify key cash flow assumptions. And, by working together, accounting, budget, and program staff focused their efforts on gathering and documenting the basis for the assumptions that had the greatest impact on HUD’s credit subsidy estimates. HUD determined that an independent actuarial review provided the basis for three of the six key cash flow assumptions for the MMI model—the primary single family guaranteed loan program. For the remaining key cash flow assumptions for this program, HUD determined and documented that the basis for estimating future loan performance was historical experience from the accounting system. While HUD has generally improved its estimation process for the two programs we reviewed, other improvements could be made. For example, comparing its estimates of future loan performance to actual cash flows recorded in the accounting system would enable HUD to determine whether these estimates reasonably predicted future loan performance.
In making this comparison, we found that the average claim amount used in the 1997 budget submission was consistent with historical experience for the MMI Fund. However, when the contractors were updating HUD’s fiscal year 1997 cash flow models for financial reporting purposes, they misinterpreted a report and used it to calculate an estimated average claim amount for the MMI Fund that was significantly less than the actual amount recorded in the accounting system. When we informed HUD of the error in the revised cash flow model, HUD changed the average claim amount to be consistent with actual costs recorded in the accounting system. As a result, the estimated program cost recorded in the draft financial statements increased $1.3 billion. If HUD had compared the estimated future loan performance used in the models to actual costs recorded in the accounting system, it would have detected this error in the average claim amount. Also, for the two programs we reviewed, HUD did not prepare timely credit subsidy reestimates for budgetary and financial statement purposes. HUD obtained permission from OMB to routinely prepare budget reestimates in the summer following the reporting year. However, SFFAS No. 2 requires reestimates each year as of the date of the financial statements if the reestimate would significantly affect the amounts presented. According to the Director of HUD’s Housing Budget Office, actual data from the accounting system were not available in time to prepare reestimates in the fall—the same time that the staff were formulating the annual budget. HUD management has refined its reestimate approach, which should allow for timely reestimates to be included in the current year’s budget and financial statements.
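A routine estimate-to-actual comparison of the kind that would have caught the average claim error can be implemented as a simple tolerance check. The sketch below uses hypothetical figures, not HUD’s actual amounts.

```python
# Sketch of an estimate-to-actual validation check: flag any cash flow
# assumption whose model value deviates from the accounting system's
# recorded actual experience by more than a tolerance.
# All figures are hypothetical.

TOLERANCE = 0.10  # flag deviations greater than 10 percent

model_assumptions = {"average_claim": 62_000, "claim_rate": 0.080}
accounting_actuals = {"average_claim": 91_000, "claim_rate": 0.079}

flagged = {}
for name, estimated in model_assumptions.items():
    actual = accounting_actuals[name]
    deviation = abs(estimated - actual) / actual
    if deviation > TOLERANCE:
        flagged[name] = deviation

for name, deviation in flagged.items():
    print(f"Review needed: {name} deviates {deviation:.0%} from actuals")
```

Run as part of the annual reestimation process, a check like this surfaces a materially understated assumption before it reaches the financial statements.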
Because HUD received a qualified opinion on its fiscal year 1997 financial statements, it was required by OMB to prepare an action plan to address identified financial management issues related to the loan program cost estimation process. This plan included accumulating supporting data for estimating the cost of its loan programs and reviewing its cash flow models to identify additional improvements that could reduce the chance of error. Further, the plan included routinely reestimating the cost of its loan programs in a timely manner and including the reestimates in both the current budget cycle and the current year’s financial statements. Additionally, the plan included establishing formal policies and procedures that include a formal supervisory review process. The plan also provides for performing comparisons of estimated to actual loan performance. This plan, if fully implemented, should help HUD prepare reasonable estimates of loan program costs. In its audit report on the fiscal year 1997 financial statements, the IG included a recommendation, with which we concur, that HUD develop and implement a plan to prepare the FHA data needed to meet SFFAS No. 2 requirements. As previously discussed, HUD has taken steps to address most of the problems related to reasonably estimating the cost of its loan programs. To help ensure that HUD is able to reasonably estimate the cost of its loan programs, we recommend that the Secretary of Housing and Urban Development or his designee take the following actions:
- Complete efforts to work with independent contractors to accumulate sufficient, relevant, and reliable data to estimate the cost of credit programs.
- Implement plans to compare estimated cash flows to actual cash flow experience to validate the quality of the estimates as part of the annual reestimation process.
- Implement its revised reestimate approach that will result in timely credit subsidy reestimates for both financial statements and budget submissions.
- Implement existing plans to develop written policies and procedures, including a formal supervisory review process, for estimating the cost of credit programs.
HUD did not take exception to the findings discussed in this report and agreed with our recommendations and stated that it plans to implement them. (HUD’s comments are reprinted in appendix V.) During fiscal year 1997, VA had serious problems performing basic accounting for its loan programs and, therefore, did not have a reliable basis for the loan program cost estimates included in its financial statements. These problems contributed to the qualified audit opinion on VA’s fiscal year 1997 financial statements. They also raise doubts about the reliability of loan program cost information submitted to OMB for budgetary purposes. Further, we found that VA did not record its guarantee obligations on the loans it sold. These weaknesses not only affect VA’s ability to make reasonable cost estimates, but also call into question its ability to effectively manage and monitor its vendee loan program. During fiscal year 1997, VA transferred the management of its direct loan portfolio to an outside servicer. VA hoped that the transfer would reduce the staff resources needed and resolve existing internal control weaknesses and obsolescence issues related to its computer system. However, the data transferred to the servicer, which up to that point had been maintained by more than 40 VA regional offices, were incomplete and inconsistent and immediately created loan servicing problems. Further, VA closed down its own loan servicing system without putting in place procedures designed to ensure that it maintained accountability over the loan portfolio. These procedures should have included maintaining an inventory of the loans in the portfolio and a loan origination database to be used in conjunction with the servicer’s system. They should also have included procedures for monitoring the amount and timing of cash due from borrowers.
As a result, VA management did not know the number or amount of direct loans outstanding at year-end, which VA estimated to be at least $2.1 billion, or whether the amount of cash received from the servicer during the year was correct. Further, because the servicer did not have an accurate inventory, the servicer was, according to VA, unable to allocate over $3 million in payments received after the transfer and, therefore, did not have correct payment histories for the affected loans. Without this basic information, VA was unable to reliably track the performance of its existing loans or reasonably estimate the future performance of its loans or the cost of its credit programs. VA’s loan accounting problems were further exacerbated by its improper treatment of loans sold to investors with a guarantee of prompt payment of future principal and interest. During fiscal year 1997, VA sold about $1 billion in loans and, since 1992, has sold approximately $9 billion in loans. Because VA guaranteed future principal and interest payments on the sold loans, it is responsible for future losses resulting from such occurrences as delinquencies and defaults of the underlying loans. According to SFFAS No. 2, future losses should have been estimated and a subsidy expense and related liability should have been established for future defaults or delinquent payments when the loans were sold. Prior to fiscal year 1997, VA did not record the subsidy expense or the liability for potential future defaults on the loans it sold. Once we identified this error, VA, in consultation with OMB, estimated an additional expense as part of an aggregate adjustment for future losses and related liability for the loan sales not recorded between 1992 and 1997. 
Because the adjustment was aggregated in the financial statements with other adjustments related to direct loans, we were unable to determine what portion of this $376 million estimate was related directly to the loan sales activity; therefore, we did not attempt to determine the reasonableness of the adjustment. However, because of the lack of critical financial data, VA’s ability to reasonably estimate the cost of its guarantee obligations related to loans sold is severely hampered. In order to further refine this estimate, VA recently hired an outside contractor to reconstruct the historical data on prior loan sales and develop a model to estimate the cost of loans sold with a guarantee. In addition, the contractor plans to assess VA’s current cash flow models for direct and guaranteed loan programs to determine whether the assumptions are appropriate. The problems VA had in accounting for its loans also hindered the agency’s ability to implement effective cost estimation practices. For example, because VA lacked complete data about the inventory of loans in its portfolio, it did not have a reasonable basis for estimates of future loan performance and could not compare estimated loan performance to actual costs recorded in the accounting system. VA did not use certain other estimation practices that would have improved its cost estimation process. For example, VA did not perform sensitivity analyses, but instead relied on program managers’ opinions to identify those assumptions that had the greatest impact on the programs’ cost. As part of our assignment, we performed sensitivity analyses and verified that program managers’ opinions correctly identified the key cash flow assumptions. However, for new programs and changes in current program design or delivery, sensitivity analyses would help ensure that key cash flow assumptions are appropriately identified.
Also, VA did not have either written policies and procedures for estimating loan program costs or a formal review process that included representatives from the program, budget, and accounting offices. While VA did implement a number of effective cost estimation practices—including calculating timely reestimates, comparing cash flow models to program requirements, and having organized, documented cash flow models—the combined impact of VA’s serious basic accounting weaknesses hindered its ability to make reasonable cost estimates. As required by OMB, VA has prepared an action plan to address its financial management problems. The action plan focuses on resolving the fundamental problems in accounting for the loan sales program and the basic data integrity issues related to the incomplete inventory of loans currently maintained by the servicer. Until these basic accounting deficiencies are resolved, VA will continue to have difficulty making reasonable estimates of its loan program costs. Additionally, once VA’s basic accounting problems are resolved, management can turn its attention to implementing practices that will further improve its ability to make reasonable loan program cost estimates. In its audit report on the fiscal year 1997 financial statements, the IG included a recommendation, with which we concur, that VA complete actions underway to ensure that all direct loan records are complete and accurate. Once VA’s basic accounting issues are resolved, in order to correct the deficiencies that we identified in VA’s direct and guaranteed loan program cost estimation processes, we recommend that the Secretary of Veterans Affairs or his designee implement the following cost estimation practices:
- Compare estimated cash flows to actual cash flow experience to validate the quality of the estimates as part of the annual reestimation process.
- Develop and implement written policies and procedures that include a formal supervisory review process and a coordinated approach between program, budget, and accounting staff for estimating the cost of credit programs.
- Use sensitivity analysis as a tool to identify key cash flow assumptions.
- Continue efforts, with the assistance of contractors, to create and use a model and develop the necessary data to calculate the liability for the guarantee on sold loans and record the related liability in the financial statements.
In commenting on our draft report, VA concurred with our recommendations and agreed to implement them as part of its current efforts to correct direct loan records. However, VA did not agree with how we characterized the magnitude of the problems it encountered when the agency outsourced its loan portfolio. Generally, VA asserted that the problems discussed in this report were limited to a small portion of its overall credit programs and should not be considered material. We disagree. The problems with VA’s loans receivable and the liability for loan guarantees were so pervasive that the IG qualified its audit opinion on VA’s fiscal year 1997 financial statements. As stated in its report, the IG was unable to attest to the accuracy of the loans receivable balance “because of incomplete records and the poor quality of the direct loan portfolio records.” The IG further reported that VA’s reported $2.1 billion net credit program receivables balance was inaccurate because VA’s accounting procedures were not being consistently followed and/or internal controls were not operating effectively.
The IG’s report also stated that because of VA’s “inadequate records, there were numerous errors in direct loan and associated escrow account balances and payment of taxes and insurance, significant delays in establishing new loans in the accounting records and processing borrowers’ loan payments, and inconclusive general ledger account balances.” Specifically, with regard to the materiality of the transferred loans, these loans were $1.2 billion, or 57 percent of the reported $2.1 billion of net credit program receivables. Further, we found that VA sold $9 billion of direct loans between 1992 and 1997 without initially recording the cost of its guarantee obligations. We consider each of these amounts to be significant. (VA’s comments are reprinted in appendix VI.) For fiscal year 1997, USDA was unable to make reasonable cost estimates for its loan programs because it did not maintain the historical data needed to predict future loan performance and used computer systems that were not appropriately configured to capture the data necessary to make such estimates. These long-standing problems contributed to the auditor’s inability to give an opinion on USDA’s fiscal year 1997 consolidated financial statements and raised questions about the quality of the budget data related to USDA’s loan programs. For the two programs we reviewed, USDA performed sensitivity analyses and identified which assumptions had the greatest impact on the credit subsidy. However, because it lacked adequate historical data, USDA based its prediction of key assumptions, such as the amount and timing of defaults and prepayments, primarily on the opinion of program managers. These managers estimated future loan performance based on their programmatic knowledge and experience without the assistance of extensive historical loan performance data and sophisticated computer modeling. 
Program management opinion may be an acceptable source of support for estimates when a new, unique program is established or when significant changes have been made to existing programs. However, program management opinion should be used only as an interim method and does not provide a reliable basis for established programs. Additionally, when program manager opinion is used, it should subsequently be compared to actual cash flow data from the accounting system to corroborate the reasonableness of management’s judgment. The lack of historical data for the two programs we reviewed was largely the result of system inadequacies. For example, prior to the implementation of FCRA, USDA’s systems did not track certain key cash flow data. In addition, although USDA’s current systems were capable of capturing some key cash flow data at the detail level, these systems could not summarize the data so that they were readily usable for calculating credit subsidy estimates. For example, USDA’s current systems were incapable of accumulating summary level prepayment information because the systems could not distinguish between borrowers that were completely paying off loans and borrowers that were paying an extra amount each month. USDA’s accounting system also did not contain the loan origination date for loans that were modified when borrowers experienced financial hardship and were unable to meet scheduled payments. As a result, the number and amount of delinquent loans in the accounting system could not be broken out by loan origination year, and USDA was unable to track individual loans through their entire history without extensive manual intervention. USDA also lacked adequate historical data to estimate the amount of interest subsidy borrowers would receive in the future. The Single Family Housing Loan Program makes low interest rate loans to low income families who lack adequate housing and cannot obtain credit from other sources.
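Distinguishing a borrower who pays off a loan entirely from one who simply pays an extra amount each month, the distinction USDA’s summary systems could not make, is straightforward at the loan level. A minimal sketch with hypothetical payment records:

```python
# Sketch: classify borrower payments as scheduled, extra (curtailment), or
# full prepayment by comparing each payment to the scheduled amount and the
# remaining principal balance. All records are hypothetical.

payments = [
    {"loan": "A", "balance_before": 50_000, "scheduled": 400, "paid": 400},
    {"loan": "B", "balance_before": 50_000, "scheduled": 400, "paid": 650},
    {"loan": "C", "balance_before": 1_200, "scheduled": 400, "paid": 1_200},
]

def classify(p):
    if p["paid"] >= p["balance_before"]:
        return "full prepayment"   # payment extinguishes the loan
    if p["paid"] > p["scheduled"]:
        return "extra payment"     # curtailment; loan remains open
    return "scheduled payment"

summary = {p["loan"]: classify(p) for p in payments}
print(summary)
```

Summarizing loan-level classifications like these is what would let a system report usable aggregate prepayment rates for the subsidy estimates.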
For this program, the amount of interest that USDA subsidizes is based on borrowers’ income. As the borrowers’ income increases or decreases, USDA pays more or less interest subsidy on the loans. Ultimately, when the borrowers are capable of paying market interest rates for a housing loan, they “graduate” because they no longer qualify for the program. However, USDA did not maintain adequate records of the number of borrowers who moved to higher income levels and when borrowers were eligible to graduate from the program. Another factor affecting the reasonableness of USDA’s estimates of loan program costs is the timing of reestimates that should be made to incorporate actual loan performance and other new information. In order to reasonably estimate and report the cost of loan programs, USDA must reestimate its credit subsidies on time and include these reestimates in the current year’s financial statements and budget submission. However, USDA management told us that the agency lacked sufficient staff to make prompt reestimates for the programs we reviewed because these estimates needed to be calculated in the fall at the same time the budget was prepared. USDA received OMB’s permission to calculate budgetary reestimates in the summer following the financial statement reporting year. However, this authorization to delay budgetary reestimates did not allow USDA to delay the financial statement reestimates. Further, USDA did not calculate the fiscal year 1997 reestimate for the Single Family Direct Loan Program until the fall of 1998—after the date agreed upon with OMB. As a result, the agency did not update its fiscal year 1997 estimate of loan program costs until nearly 3 years after the original estimate was prepared in 1995.
Until USDA calculates timely reestimates and includes them in its financial statements or clearly demonstrates that the reestimates would not be material to the financial statements, the amount of loans, liability for loan guarantees, and cost of the credit programs may be materially misstated on the financial statements. Delaying the reestimates also affects the quality of loan performance and cost data that are provided to the Congress for budgetary considerations. Also, for the two programs we reviewed, USDA did not use other practices that would enhance its ability to reasonably estimate the cost of loan programs. For example, USDA did not routinely compare estimated loan performance to actual costs recorded in the accounting system to assess how closely the estimate compared with subsequent actual costs. It also did not routinely compare cash flow models to program requirements. These comparisons would have enabled USDA to identify and research significant differences and determine whether assumptions related to expected future loan performance needed to be revised. In addition, during fiscal year 1997, USDA did not have formal policies and procedures for calculating estimates of loan program costs or for a formal review process that included representatives from the program, budget, and accounting offices to help ensure continuity and accuracy during this complicated estimation process. USDA has developed an action plan to address deficiencies in estimating the cost of its loan programs. This plan includes aggressive time frames and directs budget and accounting staff to prepare reasonable and timely estimates of loan program costs and to assemble the most accurate and reliable data available for each credit program. The plan also includes implementation of a number of practices that will improve USDA’s estimation process, including revising cash flow models and comparing estimated loan performance to actual costs recorded in the accounting system. 
Additionally, the plan calls for a task force comprised of representatives from budget, program, accounting, and the IG offices to ensure that preparing the loan program cost estimates is a coordinated effort. Further, the plan calls for, and USDA is developing, formal policies and procedures for calculating estimates of loan program costs, including a formal review process by representatives from the program, budget, and accounting offices. Finally, the plan includes documenting the basis for the assumptions used to estimate program costs and identifying additional sources of data that may be used to reasonably estimate future loan performance. The USDA IG told us that outside contractors may be needed to successfully implement the agency’s action plan. Other agencies have successfully used outside contractors to assist with gathering and developing a reliable basis for their estimates of loan program costs and improving their cash flow models. While implementation of the current plan will improve USDA’s ability to prepare reasonable estimates of loan program costs, the plan does not currently address how USDA will implement the necessary computer system enhancements to address such problems as providing complete and accurate prepayment and delinquent loan information. Until these computer system enhancements are made, USDA will continue to have great difficulty making reasonable estimates of loan program costs based on reliable historical data. Further, until written policies and procedures that include a formal supervisory review process are developed and fully implemented, USDA will lack important controls to help ensure that errors are detected and corrected promptly. In its May 1998 audit report, the USDA IG recommended, and we concur, that the agency develop sufficient, relevant, and reliable data to support its estimates of loan program costs. 
USDA has recognized the need to develop and better document its basis for the credit subsidy estimates and, as described above, has developed an action plan to address this problem. We also recommend that the Secretary of Agriculture or his designee take the following actions: Implement the action plan to address deficiencies in estimating the cost of loan programs in a timely manner, including comparing estimated cash flows to actual cash flow experience to validate the quality of the estimates as part of the annual reestimation process, reestimating loan program costs timely and including them in the current year’s financial statements and budget submissions, and developing and implementing written policies and procedures that include a formal supervisory review process and a coordinated approach between program, budget, and accounting staff for estimating the cost of credit programs. Ensure that the key cash flow assumptions in existing cash flow models are documented, including comparisons to program requirements. Ensure that once all mission-critical systems are Year 2000 compliant, computer systems are updated to capture the data necessary to reasonably estimate loan program costs. Consider hiring outside contractors to assist in gathering sufficient, relevant, and reliable data as a basis for credit program estimates. We received comments from both the Rural Development (RD) and Farm Service Agency (FSA) components of USDA because these two components operate the programs included in our review. In general, RD and FSA did not take exception to either the findings or most of the recommendations presented in this report. However, FSA stated that it would not be in its best interest to use outside contractors to assist in gathering sufficient, relevant, and reliable data as a basis for credit program estimates as we recommended. 
According to FSA, it has had success working with the National Agricultural Statistical Service (NASS) and believes that NASS can accomplish the same tasks we recommended, potentially at significantly lower costs. We agree that NASS has assisted FSA in gathering cash flow data. However, based on the amount of progress that other agencies have experienced in a short period of time with the assistance of independent contractors, we believe that FSA may also benefit from this type of contractor support and should explore this option as well. Other comments received from RD and FSA focused primarily on the levels of historical data needed to make reasonable credit subsidy estimates and the steps these components have taken to address some of the challenges they face when making these estimates. RD and FSA also provided clarification on various points in our report, which we have incorporated as appropriate. (USDA’s comments are reprinted in appendix VII.) Another factor that could significantly affect the five key credit agencies’ ability to make reasonable credit subsidy estimates in the future is the Year 2000 problem. The Year 2000 problem is rooted in the way dates are recorded and computed in many computer systems. For the past several decades, systems have typically used two digits to represent the year—such as “98” for 1998—to save electronic data storage space and reduce operating costs. With this two-digit format, however, the Year 2000 is indistinguishable from 1900, 2001 from 1901, and so on. As a result of this ambiguity, system or application programs that use dates to perform calculations may generate incorrect results when working with years after 1999. As an example of the potential impact, a veteran born in 1925 and therefore turning 75 in 2000 could be incorrectly computed as being negative 25 years old (if “now” is 1900)—not even born yet—and hence ineligible for benefits that the veteran had been receiving, such as a mortgage guarantee. 
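The age error in the veteran example above follows directly from two-digit arithmetic. The small sketch below reproduces it; the helper functions are hypothetical illustrations, not code from any agency system.

```python
# Sketch of the two-digit date problem described above.

def age_two_digit(birth_yy, current_yy):
    """Age as computed by a legacy system that stores only two digits."""
    return current_yy - birth_yy

def age_four_digit(birth_year, current_year):
    """Age computed with unambiguous four-digit years."""
    return current_year - birth_year

# A veteran born in 1925 ("25") evaluated in 2000, which a two-digit
# system stores as "00" and cannot distinguish from 1900:
legacy_age = age_two_digit(25, 0)         # -25: "not even born yet"
correct_age = age_four_digit(1925, 2000)  # 75
```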
Addressing the Year 2000 problem is a major challenge for the five key credit agencies, all of which rely on computers to process and update records. Unless the systems that compile loan program information are Year 2000 compliant, the five key credit agencies may face serious problems at the turn of the century. Systems used to track loans could (1) produce erroneous information on loan status, such as indicating that an unpaid loan had been satisfied or (2) incorrectly calculate interest and amortization schedules. Loan origination, default, repayment schedule, prepayment, and premium receipts are all linked to dates. To assist in the credit subsidy estimation process, this date-related information must be retained for extended periods and used to project future cash flows for the credit agencies’ loan programs. Therefore, computer systems that support the five key credit agencies’ various loan programs are susceptible to the Year 2000 problem. To avoid widespread system failures, the five key credit agencies have been fixing, replacing, or eliminating Year 2000 noncompliant systems. All of the systems that provide key cash flow data for the 10 loan programs we reviewed have been identified as mission critical. According to the agencies, these mission-critical systems that support the loan cost estimation process are either currently Year 2000 compliant or are scheduled to meet the OMB goal to be compliant by March 31, 1999. However, in its Quarterly Report: Progress on Year 2000 Conversion, as of mid-November 1998, OMB expressed concerns related to progress on Year 2000 conversion at Education and USDA. OMB noted that HUD and VA appear to be making satisfactory progress towards being Year 2000 compliant and pointed out that SBA was the first agency to report that all of its mission-critical systems were Year 2000 compliant. 
In previous reports and testimonies we have raised issues over the status of Year 2000 conversion efforts at the five key credit agencies, except for SBA. (See the list of GAO products related to Year 2000 efforts at the end of this report.) To fully address Year 2000 risks that the five key credit agencies face, data exchange environment problems must also be addressed—a monumental issue. As computers play an ever-increasing role in our society, exchanging data electronically has become a common method of transferring information between federal agencies and private sector organizations. For example, Education’s student financial aid data exchange environment is massive and complex. It includes about 7,500 schools, 6,500 lenders, and 36 guaranty agencies, as well as other federal agencies. All five key credit agencies depend on electronic data exchanges with external business partners to execute their lending programs. As computer systems are converted to process Year 2000 dates, the associated data exchange environment must also be made Year 2000 compliant. If the data exchange environment is not Year 2000 compliant, data exchanges may fail or invalid data could cause the receiving computer systems to malfunction or produce inaccurate computations. All five key credit agencies are working on plans to address data exchange issues with external business partners. Because of these risks, the five key credit agencies must have business continuity and contingency plans to reduce the risk of Year 2000 business failures. Specifically, the five key credit agencies must ensure the continuity of their core business processes and lending operations by identifying, assessing, managing, and mitigating their Year 2000 risks. These efforts should not be limited to the risks posed by Year 2000-induced failures of internal information systems but must include the potential Year 2000 failures of others, including external business partners. 
The business continuity planning process focuses on reducing the risk of Year 2000-induced business failures. It safeguards an agency’s ability to produce a minimum acceptable level of outputs and services in the event of failures of internal or external mission-critical systems. It also helps identify alternate resources and processes needed to operate the agency core business processes. While it does not offer a long-term solution to Year 2000-induced failures, it will help an agency to prepare for potential problems and may facilitate the restoration of normal service at the earliest possible time in the most cost-effective manner. All of the five key credit agencies have begun business continuity and contingency planning. We are sending copies of this report to the Ranking Minority Member of the House Committee on the Budget. We are also sending copies to the Director, Office of Management and Budget; the Secretaries of Agriculture, Education, Housing and Urban Development, and Veterans Affairs; the Administrator of Small Business; and interested congressional committees. Copies also will be made available to others upon request. Please contact me at (202) 512-9508 if you or your staffs have any questions concerning this report. Major contributors to this report are listed in appendix VIII. The Federal Credit Reform Act of 1990 (FCRA) was enacted to require agencies to more accurately measure the government’s cost of federal loan programs and to permit better cost comparisons both among credit programs and between credit and noncredit programs. FCRA assigned to OMB the responsibility to coordinate the cost estimates required by the act. OMB is authorized to delegate to lending agencies the authority to estimate costs, based on written guidelines issued by OMB. These guidelines are contained in sections 33.1 through 33.12 of OMB Circular No. A-11, and supporting exhibits. 
The Federal Accounting Standards Advisory Board (FASAB) developed the accounting standard for credit programs, SFFAS No. 2, Accounting for Direct Loans and Loan Guarantees, which became effective with fiscal year 1994. This standard, which generally mirrors FCRA, established guidance for estimating the cost of direct and guaranteed loan programs, as well as recording direct loans and the liability for loan guarantees for financial reporting purposes. The actual and expected costs of federal credit programs should be fully recognized in both budgetary and financial reporting. To determine the expected cost of a credit program, agencies are required to predict or estimate the future performance of the program. This cost, known as the subsidy cost, is the present value of disbursements—over the life of the loan—by the government (loan disbursements and other payments) minus estimated payments to the government (repayments of principal, payments of interest, other recoveries, and other payments). For loan guarantees, the subsidy cost is the present value of cash flows from estimated payments by the government (for defaults and delinquencies, interest rate subsidies, and other payments) minus estimated payments to the government (for loan origination and other fees, penalties, and recoveries). To estimate the cost of loan programs, agencies first estimate the future performance of direct and guaranteed loans when preparing their annual budgets. The data used for these budgetary estimates should be reestimated to reflect any changes in loan performance since the budget was prepared. This reestimated data is then used in financial reporting when calculating the allowance for subsidy (the cost of direct loans), the liability for loan guarantees, and the cost of the program. In the financial statements, the actual and expected cost of loans disbursed as part of a credit program is recorded as a “Program Cost” on the agencies’ Statement of Net Costs for loans disbursed. 
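As a rough illustration of the present-value arithmetic just described, the sketch below computes a direct loan subsidy cost. The discount rate and cash flows are invented for the example; actual estimates are produced with program-specific cash flow projections and OMB's credit subsidy model.

```python
# Illustrative only: a direct loan subsidy cost as the present value of
# government disbursements minus the present value of payments to the
# government. All figures and the discount rate are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of end-of-year cash flows at a single annual rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

rate = 0.05               # assumed discount rate
disbursement = 1000.0     # loan disbursed at origination
repayments = [270.0] * 4  # principal and interest received each year

# A positive result means the program has a cost to the government.
subsidy_cost = disbursement - present_value(repayments, rate)
```

With these invented numbers the subsidy cost works out to roughly $42.59 per $1,000 disbursed, that is, a subsidy rate of about 4.3 percent.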
In addition to recording the cost of a credit program, SFFAS No. 2 requires agencies to record direct loans on the balance sheet as assets at the present value of their estimated net cash inflows. The difference between the outstanding principal balance of the loans and the present value of their net cash inflows is recognized as a subsidy cost allowance—generally the cost of the direct loan program. For guaranteed loans, the present value of the estimated net cash outflows, such as defaults and recoveries, is recognized as a liability and generally equals the cost of the loan guarantee program. In preparing SFFAS No. 2, FASAB indicated that the subsidy cost components—interest, defaults, fees, and other cash flows—would be valuable for making credit policy decisions, monitoring portfolio quality, and improving credit performance. Thus, agencies are required to recognize, and disclose in the financial statement footnotes, the four components of the credit subsidy—interest, net defaults, fees and other collections, and other subsidy costs—separately for the fiscal year during which direct or guaranteed loans are disbursed. FASAB is currently considering revising these standards. In addition, nonauthoritative guidance is contained in the previously discussed Technical Release of the Credit Reform Task Force of the Accounting and Auditing Policy Committee, entitled Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act. This Technical Release provides detailed implementation guidance for agency staff on how to prepare reasonable credit subsidies. Further, the Technical Release provides suggested procedures for auditing credit subsidy estimates. Agency management is responsible for accumulating relevant, sufficient, and reliable data on which to base the estimates. Further, SFFAS No. 2 states that each credit program should use a systematic methodology to project expected cash flows into the future. 
To accomplish this task, agencies should develop cash flow models. A cash flow model is a computer-based spreadsheet that generally uses historical information and various assumptions including defaults, prepayments, recoveries, and the timing of these events to estimate future loan performance. These cash flow models, which should be based on sound economic, financial, and statistical theory, identify key factors that affect loan repayment performance. Agencies use this information to make more informed predictions of future credit performance. The August 1994 User’s Guide To Version r.8 of the OMB Credit Subsidy Model provides general guidance on creating cash flow models to estimate future delinquencies, defaults, recoveries, etc. This user’s guide states that “In every case, the agency or budget examiner must maintain current and complete documentation and justification for the estimation methods and assumptions used in determining the cash flow figures used for the OMB Subsidy Model” to calculate the credit subsidy. According to SFFAS No. 2, to estimate the cost of loan programs and predict the future performance of credit programs, agencies should establish and use reliable records of historical credit performance. Since actual historical experience is a primary factor upon which estimates of credit performance are based, agencies should maintain a database, also known as an information store, at the individual loan level, of historical information on all key cash flow assumptions, such as defaults or recoveries, used in calculating the credit subsidy cost. Additional nonauthoritative guidance on cash flow models may be found in the Model Credit Program Methods and Documentation for Estimating Subsidy Rates and the Model Information Store issue paper prepared by the Credit Reform Task Force of the Accounting and Auditing Policy Committee. 
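A cash flow model of the kind described can be sketched as follows. The structure and all assumption rates are simplified illustrations and do not reflect any agency's actual models, which also capture assumption timing and program-specific terms.

```python
# Highly simplified cash flow model: each year a share of the balance
# defaults (partially recovered), a share prepays in full, and the rest
# pays interest, with the surviving balance repaid in the final year.
# All rates below are hypothetical.

def project_cash_flows(balance, interest_rate, default_rate,
                       prepay_rate, recovery_rate, years):
    flows = []
    for year in range(1, years + 1):
        defaulted = balance * default_rate
        prepaid = balance * prepay_rate
        performing = balance - defaulted - prepaid
        inflow = (performing * interest_rate      # interest received
                  + prepaid                       # prepayments in full
                  + defaulted * recovery_rate)    # collateral recoveries
        if year == years:
            inflow += performing                  # final principal repayment
        flows.append(inflow)
        balance = performing
    return flows

# Example: a $1,000 cohort over 10 years with assumed annual rates.
flows = project_cash_flows(1000.0, interest_rate=0.07, default_rate=0.02,
                           prepay_rate=0.05, recovery_rate=0.40, years=10)
```

Projected flows like these, discounted to present value, are the input to the subsidy calculation described earlier.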
The draft “Information Store” Task Force paper provides guidance on the type of historical information agencies need to reasonably estimate the cost of credit programs. The information store should provide three types of information. First, the information store should maintain key loan characteristics at the individual loan level, such as the loan terms and conditions. Second, it should track economic data that influence loan performance, such as property values for housing loans. Third, an information store should track historical cash flows on a loan-by-loan basis. The data elements in an information store should be selected to allow for more in-depth analyses of the most significant subsidy estimate assumptions. In addition to using historical databases and the cash flow models, other relevant factors must be considered by agencies to estimate future loan performance. These relevant factors include economic conditions that may affect the performance of the loans, financial and other relevant characteristics of borrowers, the value of the collateral to loan balance, changes in recoverable value of collateral, and newly developed events that would affect loan performance. Agencies prepare estimates of loan program costs as a part of their budget requests. Later, after the end of the fiscal year, agencies are required to update or “reestimate” loan costs for differences among estimated loan performance and related cost, the actual program costs recorded in the accounting records, and expected changes in future economic performance. The reestimate should include all aspects of the original cost estimate including prepayments, defaults, delinquencies, recoveries, and interest. Reestimates of the credit subsidy allow agency management to compare the original budget estimates with actual program results to identify variances from the original estimate, assess the quality of the original estimate, and adjust future program estimates as appropriate. 
Any increase or decrease in the estimated cost of the loan program is recognized as a subsidy expense or a reduction in subsidy expense for both budgetary and financial statement purposes. The reestimate requirements for interest rate and technical assumptions (defaults, recoveries, prepayments, fees, and other cash flows) differ. For budget purposes, OMB Circular A-11 states that agencies must reestimate the interest portion of the estimate when 90 percent of the direct or guaranteed loans are disbursed. The technical reestimate, for budgetary purposes, generally must be done annually, at the beginning of every year as long as the loans are outstanding, unless a different plan is approved by OMB, regardless of financial statement significance. For financial statement reporting purposes, both technical and interest rate reestimates are required annually, at the end of the fiscal year, whenever the reestimated amount is significant to the financial statements. If there is no significant change in the interest portion of the estimate prior to the loans being 90 percent disbursed, then the interest reestimate may be done at least once when the loans are 90 percent disbursed. Our objectives were to assess (1) the ability of agencies to reasonably estimate the cost of their loan programs, including whether they used practices identified by the Credit Reform Task Force as being effective in making these estimates and (2) the status of agencies’ efforts to ensure that computer systems used to estimate the cost of credit programs are Year 2000 compliant. We selected a sample of 10 programs—5 direct loan programs totaling $52.1 billion and 5 guaranteed loan programs totaling $558.1 billion—from the five agencies with the largest domestic federal credit programs: the Small Business Administration, and the Departments of Education, Housing and Urban Development, Veterans Affairs, and Agriculture.
We generally selected programs that had the most credit outstanding or highest loan levels at each agency. Specifically, these programs were: 7(a) General Business Loans Program and Disaster Loan Program, which totaled 72 percent of SBA’s loan guarantees and 73 percent of its direct loans, respectively. 7(a) General Business Loans Program guarantees loans made to small businesses that are unable to obtain financing in the private credit market but can demonstrate the ability to repay the loan. Disaster loans are made to homeowners, renters, businesses of all sizes, and nonprofit organizations that have suffered uninsured physical property loss as a result of a disaster in an area declared eligible for assistance by the President or SBA. Federal Family Education Loan Program and William D. Ford Direct Loan Program, which totaled 100 percent of Education’s loan guarantees and 50 percent of its total loans receivable, respectively. These two programs help pay for educational expenses incurred by vocational, undergraduate, and graduate students enrolled at eligible postsecondary institutions. The guaranteed loans are made by private lenders, insured by a state or private nonprofit guaranty agency, and reinsured by the federal government, whereas the direct loans are made directly from the federal government to the students. Mutual Mortgage Insurance Fund and the General and Special Risk Insurance Fund Section 223(f) Refinance, which totaled 81 percent of HUD’s loan guarantees. The Mutual Mortgage Insurance Fund helps people become homeowners by providing insurance to lenders that finance the purchase of one-to-four family housing that is proposed, under construction, or existing, or lenders that refinance indebtedness on existing housing. The Special Risk Insurance Fund Section 223 (f) Refinance insures lenders against loss on the purchase or refinance of existing multifamily housing projects. 
Guaranty and Indemnity Fund and the Loan Guaranty Direct Loan Program, which totaled 100 percent of VA’s post credit reform loan guarantees and 69 percent of its total loans receivable, respectively. The Guaranty and Indemnity Fund assists veterans and certain others in obtaining credit for the purchase, construction, or improvement of homes on more favorable terms than are generally available to nonveterans. The Loan Guaranty Direct Loan Program makes home loans on favorable terms to members of the general public—both veterans and nonveterans— purchasing a VA-owned property. Farm Service Agency Farm Operating Loans Program and the Rural Housing Service Single Family Housing Program, which totaled 20 percent of USDA’s direct loans. Farm Service Agency, Farm Operating Loans are made to family farmers who are unable to obtain credit from private and cooperative sources and are intended to help provide farmers with the opportunity to conduct successful farm operations. The Rural Housing Service, Single Family Housing Loans are made to very low- and low-income families who are without adequate housing and cannot obtain credit from other sources and may be used to build, purchase, repair, or refinance homes in rural areas. To gain an understanding of the credit programs and the agencies’ credit subsidy estimation process, we obtained and reviewed the fiscal years 1996 and where available 1997 financial statement audit work papers. During this review, we focused primarily on the auditor’s review of the loans receivable and liability for loan guarantees line items on the balance sheet as well as the audit of the credit subsidy cost on the statement of operations. In addition, at VA, SBA, and HUD, we directly participated in the fiscal year 1997 financial statement audits as part of the federal government’s first consolidated financial statement audit. 
To assess the reasonableness of agencies’ credit subsidy estimation processes, we first performed sensitivity analyses of the SBA, HUD, and VA cash flow models to identify the key cash flow assumptions, which are those assumptions having the greatest impact on the credit subsidy. We used the sensitivity analyses performed by USDA and Education. To perform the sensitivity analyses, we obtained copies of the agencies’ cash flow models and performed an extensive search to identify each root cash flow assumption in the agencies’ cash flow model. Once identified, each root cash flow assumption was adjusted, both up and down, by a fixed proportion. We followed the guidance in the Credit Reform Task Force’s Technical Release Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act and adjusted each root cash flow assumption by 10 percent. To determine which root assumption had the greatest impact on the credit subsidy, we used the adjusted cash flows as input into OMB’s credit subsidy model to recalculate the subsidy. For the recovery assumptions—generally the estimated amount agencies receive from selling collateral net of cash outflows for managing, maintaining, and selling foreclosed properties—we adjusted the recovery assumption along with the default timing assumption to ensure that recoveries occurred after the defaults. Once we identified the key cash flow assumptions, we used the guidance in Statement on Auditing Standard No. 57, Auditing Accounting Estimates, as well as the Technical Release to determine whether agencies had a reliable basis—whether the agencies had gathered sufficient, relevant, and reliable supporting data—for the estimates of loan program cost and for their estimates of loan program performance. 
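The sensitivity procedure described above can be sketched as follows. The subsidy function and baseline assumptions here are simplified stand-ins invented for illustration; the actual analysis fed the adjusted cash flows into OMB's credit subsidy model to recalculate the subsidy.

```python
# Sketch of the sensitivity analysis: shock each root assumption up and
# down by 10 percent and measure the change in the recomputed subsidy.
# The subsidy function and baseline values are hypothetical stand-ins.

def subsidy_rate(a):
    """Toy subsidy: cost rises with net defaults and falls with fees
    (all values expressed as fractions of loan disbursements)."""
    return a["default"] * (1 - a["recovery"]) - a["fee"]

baseline = {"default": 0.08, "recovery": 0.40, "fee": 0.02}

def sensitivity(assumptions, shock=0.10):
    """Adjust each root assumption by +/- `shock` and record the largest
    resulting change in the subsidy."""
    base = subsidy_rate(assumptions)
    impacts = {}
    for key in assumptions:
        deltas = []
        for factor in (1 + shock, 1 - shock):
            adjusted = dict(assumptions, **{key: assumptions[key] * factor})
            deltas.append(abs(subsidy_rate(adjusted) - base))
        impacts[key] = max(deltas)
    return impacts

impacts = sensitivity(baseline)
key_assumption = max(impacts, key=impacts.get)  # the "key" assumption
```

The assumptions with the greatest impact on the recomputed subsidy are the key cash flow assumptions on which the audit work then focused.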
Because of VA’s serious problems performing basic accounting for its loan programs, we determined that it would not be meaningful to further assess whether VA had sufficient, relevant, and reliable supporting data for its estimates of loan program costs. When possible, we used the work of the agencies’ fiscal year 1997 financial statement auditors to determine whether agencies had a reliable basis for their estimates of loan program costs. We also compared program descriptions with agencies’ cash flow models to determine whether all characteristics of the program were appropriately modeled. Further, we compared estimated loan program performance to actual loan program performance, when appropriate, to determine whether material variances between the estimates and actual performance existed. For two agencies, USDA and VA, this comparison was not meaningful, and therefore not performed, because of serious data quality concerns. To determine whether agencies had implemented the practices identified in the Technical Release, we interviewed agencies’ accounting, program, and budget staff and assessed the process agencies used to estimate the cost of their loan programs. We also compared the process agencies used to the practices identified in the Technical Release. Finally, we obtained and reviewed agencies’ most recent action plans to address financial management weaknesses, including those related to estimating credit program costs. We also reviewed the status of agency efforts to ensure that computer systems that provide cash flow data used to estimate the cost of credit programs were Year 2000 compliant. To do this, we met with cognizant agency officials, identified the systems that provide data supporting key cash flow assumptions, and determined whether the agency had assurance that these systems either were already Year 2000 compliant or were scheduled to become compliant on time. We also reviewed the agencies’ Year 2000 compliance plans and status reports to OMB.
We did not independently test the systems that provide data supporting key cash flow assumptions to determine whether the systems were Year 2000 compliant as reported to OMB. Further, we discussed the agencies’ efforts to develop contingency plans designed to ensure the continued operation of critical business processes despite system failures. Our work was conducted in Washington, D.C., and St. Louis, Missouri, from September 1997 to November 1998 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Education’s January 12, 1999, letter. 1. See “Agency Comments and Our Evaluation” section for Education. 2. We agree with Education’s statement that the agency uses its budget model in concert with rather than in lieu of the OMB credit subsidy model. The report was revised accordingly. 3. When calculating the portion of Education’s direct loans receivable that was included in the scope of our review, we included the defaulted loan guarantees and facilities loans as part of the total direct loans receivable universe. However, Education does not consider these loans to be direct loans, but does consider them to be part of total credit program receivables. While our review did cover 100 percent of what Education considers to be direct loans, we believe it is appropriate to continue to reflect our scope of loans reviewed as a percentage of total credit program receivables. Further, to be consistent with the information available from other key credit agencies, we revised the direct loan scope percentage to exclude the allowance for subsidy. As a result, the scope calculation for this review was revised to 50 percent of total credit program receivables. 4. We agree with Education’s statement that Stafford Loans should not be included in the program titles and revised the report accordingly. The following are GAO’s comments on the Department of Veterans Affairs’ January 15, 1999, letter. 1. 
See the “Agency Comments and Our Evaluation” section for VA. 2. We do not agree that VA has already corrected the accounting issues discussed in this report. In September 1998, we visited VA’s servicer and concluded that the problems described in this report—including the servicer’s inability to monitor the amount and timing of cash due from borrowers, its inaccurate and incomplete loan histories, and its inability to allocate payments received from borrowers to the appropriate borrower’s account—continued to exist. 3. When reviewing VA’s action plan, we focused only on those initiatives designed to address the basic accounting weaknesses VA had with its credit programs because this was the scope of our review. Thus, we did not focus on other actions described by VA because they were beyond the scope of this review. 4. While the specific scope of this review was the Guaranty and Indemnity Fund and the Loan Guaranty Direct Loan Program, VA provided us with one model for its guaranteed loan program and one model for its direct loan program that included both the “vendee” and acquired loans. However, only one subsidy cost is produced per model. Since the direct loan model included both types of loans, we could not determine the amount of subsidy attributable to each loan type. As a result, we reviewed 100 percent of VA’s post credit reform loan guarantees and 69 percent of its total loans receivable. To further clarify this, the report was revised to better describe our scope determination. 5. This report does not focus on VA’s “non-established loans.” However, this is being covered in a review that is now ongoing. 6. The $3 million referred to in our report relates to monthly loan payments or loan payoff amounts that VA received which it could not match to a borrower. 
The existence of this condition calls into question the completeness of VA’s loan records and seriously undermines its ability to monitor the performance of existing loans and reasonably estimate the future performance of its credit programs. The following are GAO’s comments on the Department of Agriculture’s January 11, 1999, letter. 1. We agree that USDA needs to determine the appropriate amount of detailed history needed to make reasonable predictions of future loan performance. The amount of history needed for each loan program would likely vary by program type and complexity and be closely linked to the quality and type of history agencies had available. However, we do not agree that loan history is of limited value where program or economic changes have occurred. Historical experience should be used as the baseline for an agency’s credit subsidy estimate. Once this baseline is established, the incremental changes in cash flows due to expected changes in the current and forecasted economic conditions as well as changes in program design or delivery should be adjusted for. In addition, USDA should compare the estimates of loan performance for the changed credit program to the most recent historical experience to ensure that current estimates are reasonably predicting actual loan program performance. However, as discussed in our report, USDA was not routinely comparing its estimates of loan program costs with actual historical experience. Further, we agree that extensive pre-credit reform detailed loan history may not be required in all cases and, in some cases, reliable summary level information may be acceptable. However, because nearly 73 percent of USDA’s reported loan portfolio is comprised of pre-credit reform loans, we believe that this experience is relevant to USDA’s current estimates of loan performance and, therefore, should be considered at some level. 2. 
While we agree that USDA has made system changes to help control funds, track cohorts, and respond to financial and budgetary reporting needs, further work is needed. USDA officials told us that the current systems configuration does not allow the systems to summarize the data so that it is readily usable for calculating credit subsidy estimates. Until these systems are able to readily provide reliable key cash flow data in a format that can be easily used in the subsidy estimation process, USDA’s ability to calculate reasonable credit subsidy estimates will continue to be impaired.

3. We agree that USDA has made some progress in the past 2 years in addressing the challenges it faces in preparing reasonable estimates of its loan program costs; however, the benefits of these and planned future actions have not yet been fully realized. Further, as explained in comment 2, until USDA’s systems readily provide reliable supporting data for the credit subsidy estimates, the ultimate success of these actions may be jeopardized.

4. Although we acknowledge that USDA has a large number of loan programs, the amount of work needed to prepare reasonable credit subsidy estimates can be reduced by optimizing its computer systems’ abilities and appropriately configuring these systems to readily provide reliable data for the loan cost estimation process.

5. We did not intend to imply, and did not state in our draft report, that the note interest rate varied based on the borrower’s income. To avoid further confusion on this point, the report was revised to clarify that the amount of interest subsidy paid by USDA changes when a borrower’s income changes. Further, the lack of support for USDA’s interest subsidy assumption for this loan program could affect the estimates of the amount of interest subsidy that would be recaptured (the amount of interest subsidy a borrower may be required to repay upon sale of the property).

6.
The Year 2000 section of this report focused on the status of systems managed at the USDA agency level and did not address systems managed by components such as Rural Development. We did not verify the status of Rural Development’s Year 2000 compliance efforts.

7. The report was revised to reflect the agency’s comment.

8. See the “Agency Comments and Our Evaluation” section for USDA.

9. We do not agree that, because of ever-evolving credit reform standards, agencies appear to be in a no-win situation in which oversight agencies continue to raise the bar on their expectations. The requirements and standards have changed little since the Federal Credit Reform Act became effective in 1992 and the related accounting standards became effective in 1994. Since that time, OMB, Treasury, and the Accounting and Auditing Policy Committee’s credit reform task force (which included representatives from USDA) have been working to provide agencies with detailed guidance on how to implement credit reform requirements. Further, as demonstrated by the Small Business Administration and the Department of Education, some agencies are able to prepare reasonable credit subsidy estimates. Finally, when forecasting loan repayments for a credit program whose performance is directly linked to economic conditions, econometric models are an appropriate tool because they consider the impact of economic conditions on estimated future loan repayments. Econometric modeling techniques are not new and have been successfully used by at least one of the five key credit agencies in its estimation processes.

Credit Reform: Greater Effort Needed to Overcome Persistent Cost Estimation Problems (GAO/AIMD-98-14, March 30, 1998).
Credit Reform: Review of OMB’s Credit Subsidy Model (GAO/AIMD-97-145, August 29, 1997).
Credit Subsidy Estimates for the Sections 7(a) and 504 Business Loan Programs (GAO/T-RCED-97-197, July 15, 1997).
Credit Reform: Case-by-Case Assessment Advisable in Evaluating Coverage, Compliance (GAO/AIMD-94-57, July 28, 1994).
Federal Credit Programs: Agencies Had Serious Problems Meeting Credit Reform Accounting Requirements (GAO/AFMD-93-17, January 6, 1993).
Year 2000 Computing Crisis: Significant Risks Remain to Department of Education’s Student Financial Aid Systems (GAO/T-AIMD-98-302, September 17, 1998).
Year 2000 Computing Crisis: Strong Leadership and Effective Partnerships Needed to Reduce Likelihood of Adverse Impact (GAO/T-AIMD-98-277, September 2, 1998).
Year 2000 Computing Crisis: Progress Made in Compliance of VA Systems, But Concerns Remain (GAO/AIMD-98-237, August 21, 1998).
Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, August 1998).
Year 2000 Computing Crisis: Actions Needed on Electronic Data Exchanges (GAO/AIMD-98-124, July 1, 1998).
Year 2000 Computing Crisis: USDA Faces Tremendous Challenges in Ensuring That Vital Public Services Are Not Disrupted (GAO/T-AIMD-98-167, May 14, 1998).
Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998).
Veterans Affairs Computer Systems: Actions Underway Yet Much Work Remains to Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997).
Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997).
Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year 2000 Problems (GAO/T-AIMD-97-114, June 26, 1997).
Veterans Benefits Computer Systems: Risks of VBA’s Year 2000 Efforts (GAO/AIMD-97-79, May 30, 1997).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by faxing (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a legislative requirement, GAO provided information on the Small Business Administration's (SBA), the Department of Education's, the Department of Housing and Urban Development's (HUD), the Department of Veterans Affairs' (VA), and the Department of Agriculture's (USDA) abilities to reasonably estimate the cost of their loan programs, focusing on: (1) whether they used practices identified by the Credit Reform Task Force as being effective in making these estimates; and (2) the status of the agencies' efforts to ensure that computer systems used to estimate the cost of credit programs are year 2000 compliant.
GAO noted that: (1) the problems agencies faced in making credit subsidy estimates as required by Federal Credit Reform Act and federal accounting standards stemmed largely from their lack of: (a) reliable historical data upon which to base estimates of future loan performance; (b) adequate systems that have the capability to track the required information; (c) sound cash flow models; and (d) appropriate policies and procedures for ensuring the accuracy of data used to generate the estimates; (2) SBA was one of the two agencies able to make reasonable estimates of the cost of its loan programs in its fiscal year (FY) 1997 financial statements, primarily because the agency maintained reliable records of historical loan performance data; (3) however, SBA made significant errors in calculating its reestimates of loan program costs; (4) these errors were adjusted for in SBA's draft financial statements, thereby allowing for an unqualified audit opinion on those statements; (5) Education was able to prepare reasonable credit program estimates for its FY 1997 financial statements based on information obtained through a significant data gathering effort from its guaranty agencies; (6) however, the audited estimates differed significantly from the estimates based on data from Education's database, which raises questions about the validity of Education's database; (7) HUD was unable to provide adequate supporting data for its FY 1997 financial statement estimates of its loan program costs, which resulted in a qualified audit opinion from HUD's Inspector General on those financial statements; (8) HUD has developed an action plan to address identified financial management issues related to the loan cost estimation process; (9) VA faced significant problems performing routine accounting for its loan programs including loss of accountability over certain loans transferred to an outside servicer; (10) USDA was unable to make reasonable financial statement estimates of its loan 
programs' costs because it had not maintained the necessary historical data and continued to use computer systems that were not appropriately configured to capture the data necessary to make such estimates; (11) the five key credit agencies also face the challenge of addressing the year 2000 problem related to systems used in the loan cost estimation process; and (12) according to agency officials, for the 10 loan programs reviewed, all of the mission critical systems are either year 2000 compliant or are scheduled to be compliant by March 31, 1999.
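The subsidy cost concept underlying these estimates can be illustrated with a simple net present value calculation. This is a sketch only: the loan amount, repayment stream, and 5 percent discount rate below are hypothetical, and actual Federal Credit Reform Act estimates rely on Treasury interest rates and detailed, program-specific cash flow models.

```python
# Illustrative sketch of a credit subsidy estimate as a net present value
# (NPV) calculation. All figures are hypothetical; under the Federal Credit
# Reform Act, agencies discount estimated cash flows at Treasury rates
# using program-specific models.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A direct loan of 100 disbursed in year 0 (outflow), with estimated
# repayments of 25 per year for 4 years (inflows), discounted at 5 percent.
disbursements = [100, 0, 0, 0, 0]
repayments = [0, 25, 25, 25, 25]
subsidy_cost = npv(disbursements, 0.05) - npv(repayments, 0.05)
print(round(subsidy_cost, 2))  # -> 11.35
```

A positive result represents an estimated cost to the government at the time the loan is disbursed; a negative result would indicate the program is estimated to produce a net inflow on a present value basis.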
Safeguarding government computer systems and sensitive information, including personally identifiable information (PII) that resides on them, is an ongoing challenge due to the complexity and interconnectivity of systems, the ease of obtaining and using hacking tools, the steady advances in the sophistication and effectiveness of attack technology, and the emergence of new and more destructive attacks. To help address this challenge, federal agencies, regardless of their size, must abide by federally mandated standards, guidelines, and requirements related to federal information systems. FISMA established a framework designed to ensure the effectiveness of security controls for information and information systems that support federal operations and assets. FISMA assigns specific responsibilities to (1) OMB, to develop and oversee the implementation of policies, principles, standards, and guidelines on information security (except with regard to national security systems); to report, at least annually, on agency compliance with the act; and to approve or disapprove agency information security programs; (2) agency heads, to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency; (3) agency heads and chief information officers, to develop, document, and implement an agency-wide information security program; (4) inspectors general, to conduct annual independent evaluations of agency efforts to effectively implement information security; and (5) the National Institute of Standards and Technology (NIST), to develop standards and guidance to agencies on information security. 
More specifically, FISMA requires each agency to develop, document, and implement an information security program that includes the following components: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training to inform personnel of information security risks and of their responsibilities in implementing agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. FISMA also gives OMB responsibility for ensuring the operation of a federal information security incident center.
Established in 2003, the United States Computer Emergency Readiness Team (US-CERT) is the federal information security incident center mandated by FISMA. US-CERT consults with agencies on cyber incidents, provides technical information about threats and incidents, compiles the information, and publishes it on its website, https://www.us-cert.gov. In the 11 years since FISMA was enacted, executive branch oversight of agency information security has changed. As part of its FISMA oversight responsibilities, OMB has issued annual guidance to agencies on implementing FISMA requirements, including instructions for agency and inspector general reporting. However, in July 2010, the Director of OMB and the White House Cybersecurity Coordinator issued a joint memorandum stating that DHS was to exercise primary responsibility within the executive branch for the operational aspects of cybersecurity for federal information systems that fall within the scope of FISMA. The memorandum stated that DHS’s activities would include overseeing the government-wide and agency-specific implementation of and reporting on cybersecurity policies and guidance; overseeing and assisting government-wide and agency-specific efforts to provide adequate, risk-based, and cost-effective cybersecurity; overseeing the agencies’ compliance with FISMA and developing analyses for OMB to assist in the development of the FISMA annual report; overseeing the agencies’ cybersecurity operations and incident response and providing appropriate assistance; and annually reviewing the agencies’ cybersecurity programs. Within DHS, the Federal Network Resilience Office, within the National Protection and Programs Directorate, is responsible for (1) developing and disseminating most FISMA reporting metrics, (2) managing the CyberScope web-based application, and (3) collecting and reviewing federal agencies’ cybersecurity data submissions and monthly data feeds to CyberScope.
In addition, the office is responsible for conducting cybersecurity reviews and assessments at federal agencies to evaluate the effectiveness of agencies’ information security programs. The primary laws that require privacy protections for personal information maintained, collected, used, or disseminated by federal agencies are the Privacy Act of 1974 and the E-Government Act of 2002. The Privacy Act places limitations on agencies’ collection, maintenance, disclosure, and use of PII maintained in systems of records, including requirements for each agency to (1) maintain in its records only such information about an individual as is relevant and necessary to accomplish a purpose of the agency required by statute or by executive order of the President; (2) establish rules of conduct for persons involved in the design, development, operation, or maintenance of any system of records, or in maintaining any record, and instruct each such person in those rules and the requirements of the act; and (3) establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of records and to protect against any anticipated threats or hazards to their security or integrity that could result in substantial harm, embarrassment, inconvenience, or unfairness to any individual on whom information is maintained. Additionally, when an agency establishes or makes changes to a system of records, it must notify the public through a system of records notice in the Federal Register that includes the categories of data collected, the categories of individuals about whom information is collected, the intended “routine” uses of data, and procedures that individuals can use to review and correct personally identifiable information. In addition, the E-Government Act of 2002 requires agencies to assess the impact of federal information systems on individuals’ privacy. 
Specifically, the E-Government Act strives to enhance the protection of personal information in government information systems by requiring that agencies conduct privacy impact assessments (PIA) for systems or collections containing personal information. According to OMB guidance, the purpose of a PIA is to (1) ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. Small agencies provide a variety of services and manage a variety of federal programs. According to OMB, their responsibilities include issues concerning commerce, trade, energy, science, transportation, national security, finance, and culture. Approximately half of the small agencies in the federal government perform regulatory or enforcement roles in the executive branch. For example, the National Archives and Records Administration oversees the federal government’s recordkeeping and ensures preservation of and access to records. In addition, the Federal Reserve Board assists with implementing the monetary policy of the United States. The Federal Reserve Board also plays a major role in the supervision and regulation of the U.S. banking system. The remaining small federal agencies are largely grant-making, advisory, and uniquely chartered organizations. For example, the United States Institute of Peace is an independent, nonpartisan institution established and funded by Congress to increase the nation’s capacity to manage international conflict without violence. Together, small agencies employ about 90,000 federal workers and manage billions of taxpayer dollars. Similarly, the six selected agencies in our review provide a broad range of federal services (see table 1). 
Small federal agencies have reported a number of incidents that have placed sensitive information at risk, with potentially serious impacts on federal operations, assets, and people. According to DHS, the number of reported security incidents for small agencies from fiscal year 2009 to fiscal year 2013 ranged from 2,168 to 3,144. Incidents involving PII at small agencies increased from 258 in fiscal year 2009 to 664 in fiscal year 2013. In addition, in fiscal year 2013, small agencies reported 2,653 incidents to US-CERT. Table 2 describes the incident categories as defined by US-CERT. As shown in figure 1, the three most prevalent types of incidents reported by small agencies to US-CERT during fiscal year 2013 were those involving potentially malicious or anomalous activity (investigation), the execution or installation of malicious software (malicious code), and the violation of acceptable computing use policies (improper usage). Although the small agencies we reviewed have taken steps to develop information security and privacy programs, weaknesses existed that threatened the confidentiality, integrity, and availability of their information and systems. Regarding information security, these agencies did not fully or effectively develop, document, and implement security plans, policies, and procedures, as well as other elements of an information security program such as incident handling and contingency planning. A key reason for these weaknesses is that these small agencies have not yet fully implemented their agency-wide information security programs to ensure that controls are appropriately designed and operating effectively, and two of the six agencies did not develop an information security program that included any of the required FISMA elements. In addition, five of the six selected agencies had not fully implemented their privacy programs to ensure protection of PII. 
For example, while most of the six agencies designated a privacy official, not all the agencies completed privacy impact assessments. Further, two of the six agencies we reviewed had not implemented any of the selected privacy requirements. As a result, these selected agencies have limited assurance that their PII and information systems are being adequately protected against unauthorized access, use, disclosure, modification, disruption, or loss. The six small agencies we reviewed have generally developed many of the requirements of an information security program, but these programs have not been fully implemented. Specifically, four of the six agencies have developed an information security program that includes risk assessments, security policies and procedures, system security plans, security awareness training, periodic testing and evaluation, remedial action plans, incident handling, and contingency planning. However, key elements of their plans, policies, or procedures in these areas were outdated, incomplete, or did not exist. In addition, two of the six agencies did not develop an information security program with the required FISMA elements. FISMA requires each agency to develop, document, and implement an information security program that includes periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems. According to NIST’s Guide for Conducting Risk Assessments, risk is determined by identifying potential threats to the organization, identifying vulnerabilities in the organization’s systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization’s mission, including the effect on sensitive and critical systems and data. 
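As a rough illustration of the qualitative approach NIST describes, combining the likelihood that a threat exploits a vulnerability with the resulting impact might be sketched as a simple assessment matrix. The numeric scale, thresholds, and labels below are assumptions for illustration only, not NIST's prescribed values:

```python
# Illustrative sketch of a qualitative risk matrix in the spirit of
# NIST's risk assessment guidance. The scale values and thresholds
# here are assumed for illustration, not official.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact ratings into a qualitative risk level."""
    score = LEVELS[likelihood] * LEVELS[impact]  # ranges from 1 to 9
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# Example: a threat judged highly likely to exploit a vulnerability
# with moderate mission impact.
print(risk_level("high", "moderate"))  # -> high
```

An actual agency risk assessment would, per NIST guidance, also document the threats, vulnerabilities, assumptions, and constraints behind each rating rather than just the resulting level.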
NIST guidance states that risk assessments should include essential elements such as discussion of threats, vulnerabilities, impact, risk model, and likelihood of occurrence, and be updated based on a frequency defined by the organization. Four of the six selected agencies developed and conducted risk assessments. For example, one agency’s risk assessment generally adhered to NIST guidance for conducting risk assessments. Specifically, it included information related to the identification of threats, vulnerabilities, and impacts, and recommended corrective actions for mitigating or eliminating the threats and vulnerabilities that were identified. However, the risk assessment did not identify the assumptions and constraints associated with the assessment. Another agency developed a risk management framework and documented a risk assessment policy but had not completed risk assessments for its systems. In addition, risk assessments at the four agencies were outdated or did not include elements outlined in NIST guidance, as the following examples illustrate. At one selected agency, risk assessments for the four systems reviewed were not updated based on the agency’s policy of updating its risk assessments annually. Specifically, risk assessments for three of the four systems had not been conducted since 2005, 2009, and 2010, respectively. While the remaining system had an assessment conducted in 2013, the prior assessment for that system was done in 2010. Additionally, risk assessments for three of the four systems lacked essential elements such as a list of vulnerabilities unique to the individual systems, and one of the assessments did not assess the likelihood of an incident occurring or determine the risk level. The fourth assessment, which was dated 2005, was updated during our review but did not address threats, vulnerabilities, and likelihood of incident occurrence or risks. 
Agency officials stated that while the risk assessments were outdated, they have conducted informal and formal risk assessments that were not documented. The agency plans to formalize and document its risk assessments to align with its own policies and NIST standards by June 2014. Another agency in our review did not identify in its risk assessments the system threats and vulnerabilities, and did not recommend corrective actions for mitigating the threats and vulnerabilities for the three systems we reviewed. According to agency officials, new risk assessments will be conducted for all three of the systems we reviewed in 2014. The remaining two agencies, which did not conduct risk assessments for their systems, cited various reasons for not completing them. One agency stated it was not aware of the requirement to conduct risk assessments. The other agency stated that it received a waiver from OMB for complying with FISMA requirements. According to OMB officials, they have not granted FISMA waivers to any federal agency and FISMA does not allow for waivers. Without current, complete risk assessments, agencies face an increased risk of not identifying all threats to their operations and of failing to mitigate risks to a level that meets minimum requirements. A key element of an effective information security program, as required by FISMA, is to develop, document, and implement risk-based policies and procedures that govern the security over an agency’s computing environment. According to NIST, an organization should develop, document, and disseminate (1) a policy that addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance, and (2) procedures to facilitate the implementation of the policy and associated controls. Procedures are detailed steps to be followed by users, system operations personnel, or others to accomplish a particular task.
If properly implemented, policies and procedures can effectively reduce risk to information and information systems. Four of the six small agencies we reviewed had documented information security policies and procedures, and two did not. For example, in fiscal year 2012, one of the selected agencies documented policies that addressed each of the FISMA elements as a part of its information security program. Another agency had policies addressing risk assessments, security plans, security awareness and training, periodic testing and evaluation, remedial actions, incident response, and contingency planning. However, many, but not all, of the policies and procedures documented by the six agencies were either outdated or incomplete, or did not exist (see fig. 2). For instance, agency 1 had information security policies that had not been updated since 2001. During our review, the agency hired a contractor to develop a new information technology (IT) security framework based on NIST guidance, with a planned completion date of the end of 2014. According to an agency official, a new entity-wide information security policy was documented and implemented in December 2013. We reviewed a copy of the policy and determined it addressed each of the eight elements of an information security program mandated by FISMA. Agencies 2 and 4 had not developed, documented, or implemented any information security policies or procedures. Officials stated that their agencies did not have a true understanding of information security program requirements. According to officials at one of these agencies, they had not developed policies or procedures because they were not aware of these requirements and lacked the technical staff to address this area. Agency 3 documented a policy for incident handling but lacked procedures. According to an official at this agency, the agency uses a NIST checklist as its documented procedures.
However, according to NIST, the actual steps performed may vary based on the type of incident and the nature of individual incidents. Agency 5 documented implementation procedures for incident response, but did not document risk assessment procedures. Agency 6 established policies for the seven information security program elements. The agency documented procedures for incident handling and established draft documented procedures for remediation but lacked documented procedures for the remaining elements. According to agency officials, the remaining procedures will be documented by June 2014. Until the selected agencies fully develop and update their policies and procedures to govern the security over their computing environments, they will have limited assurance that controls over their information are appropriately applied to their systems and operating effectively. FISMA requires an agency’s information security program to include plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. According to NIST, the purpose of the system security plan is to provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements. The first step in the system security planning process is to categorize the system based on the impact to agency operations, assets, and personnel should the confidentiality, integrity, and availability of the agency information and information systems be compromised. This categorization is then used to determine the appropriate security controls needed for each system. Four of the six selected agencies developed system security plans. For example, one agency completed system security plans that identified the categorization level and appropriate security controls, based on NIST 800-53, for each of the four systems reviewed. 
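The categorization step described above is commonly performed as a "high water mark" across the confidentiality, integrity, and availability impact levels, in the spirit of FIPS 199; a minimal sketch, with a hypothetical example system, might look like the following:

```python
# Minimal sketch of FIPS 199-style security categorization: the system's
# overall category is the highest ("high water mark") of the potential
# impact levels assigned to confidentiality, integrity, and availability.
# The example system below is hypothetical.

IMPACT_ORDER = ["low", "moderate", "high"]

def categorize(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall system categorization (high water mark)."""
    return max(confidentiality, integrity, availability,
               key=IMPACT_ORDER.index)

# A hypothetical grants-processing system: moderate confidentiality impact,
# high integrity impact, low availability impact.
print(categorize("moderate", "high", "low"))  # -> high
```

The resulting category then drives which baseline of NIST 800-53 security controls the system security plan must address.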
Another agency also completed security plans and categorizations for the one system we reviewed. However, system security plans for these four agencies were missing elements or outdated. At one agency, while three of the four system security plans we reviewed included items such as system owners and authorizing officials, these plans did not include completion and approval dates. The fourth plan included a completion date but did not have an approval date, and two of the four plans were outdated. One plan had not been updated since 2009, and the other had not been updated since 2011. The agency did not have a standardized template for creating security plans, which led to the inconsistencies in the various plans. The agency plans to standardize its security plans and update plans for three of the four systems selected for review by June 2014. The fourth system will be replaced and retired by June 2014. Another agency developed system security plans for three of its systems. However, two of the three were outdated. One plan had not been updated since 2009, and the other had not been updated since 2011. According to agency officials, the agency plans to update all three system security plans in 2014. A third agency divided its general support system into 21 systems and major applications. In fiscal year 2013, it completed security plans and categorizations for 1 of its systems. According to an agency official, the security plan for another system was completed in fiscal year 2014 and the security plans for the remaining 19 systems and major applications are scheduled to be completed by March 2015. A fourth agency developed and documented a system security plan but referenced policies and procedures from February 2001. According to an agency official, the security plan will be updated to address the appropriate security controls and reflect the agency’s new IT security policy.
Finally, the remaining two agencies had not considered the need for system security plans for their systems. Officials at both agencies stated they were unaware of this requirement; as a result, they did not take steps to determine if a system security plan was needed for their systems. Until these selected agencies appropriately develop and update system security plans, they may face an increased risk that officials will be unaware of system security requirements and that controls are not in place. FISMA requires agencies to provide security awareness training to personnel, including contractors and other users of information systems that support the operations and assets of the agency. Training is intended to inform agency personnel of the information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. FISMA also requires agencies to provide specialized training to personnel with significant security responsibilities. Providing training to agency personnel is critical to securing information and information systems since people are one of the weakest links in attempts to secure systems and networks. Four of the six selected agencies developed a security awareness training program, and one of these four agencies completed specialized training for employees with significant security responsibilities. One of the four agencies implemented a new web-based security awareness training program in 2013. This agency trained 100 percent of its employees. However, the agency did not have specialized security training for the individuals with significant security responsibilities. According to agency officials, the agency obtained funds and purchased specialized training and plans to complete this training in 2014. Another agency updated its security awareness program in fiscal year 2013, and 100 percent of its users completed annual security awareness training.
The agency developed specialized training, but not all required individuals with significant security responsibilities had taken it. According to officials, the agency's tracking of specialized training is not automated and it has been difficult to get all required employees together to take the training. Specialized training was identified as an issue in the agency's fiscal year 2012 inspector general report, and the agency is working to establish goals for a more comprehensive tracking system for its specialized training. A third agency developed a security awareness program and trained 95 percent of its users. According to agency officials, users who did not complete the training were interns who had completed the initial training, external auditors, executives, or remote users. In addition, we found that four out of nine users requiring specialized training did not take it in fiscal year 2013. According to an agency official, insufficient funding was the reason that the users did not take the required training. The agency plans for the users to take specialized training in fiscal year 2014. The fourth agency trained 100 percent of its users during fiscal year 2013. We found that users requiring specialized security training received it during fiscal year 2013. The remaining two selected agencies had neither conducted annual security awareness training for all of their employees nor provided specialized training for security personnel. Officials at one of the agencies stated that two of its employees received security awareness training through another federal agency, but its remaining employees had not received such training. Officials at the other agency stated that the agency does not conduct any formal security awareness training due to its small size.
Without fully developing and implementing a security awareness program, including training for users with significant security roles, the selected agencies may not have assurance that their personnel have a basic awareness of information security issues and agency security policies and procedures. In addition, agencies that did not provide specialized training may not have reasonable assurance that staff with significant system security roles have adequate knowledge, skills, and abilities consistent with their roles to protect the confidentiality, integrity, and availability of the information housed within the information systems to which they are assigned. FISMA requires that federal agencies periodically test and evaluate the effectiveness of their information security policies, procedures, and practices as part of implementing an agency-wide security program. This testing is to be performed with a frequency depending on risk, but no less than annually. Testing should include management, operational, and technical controls for every system identified in the agency's required inventory of major systems. Four of the six selected agencies conducted periodic testing and evaluation of their systems. However, their tests were incomplete and not conducted at least annually, as required. The following examples illustrate these weaknesses: One agency documented that security assessments were conducted for the three systems reviewed, but the assessments did not clearly identify which management, operational, and technical controls were tested or reviewed. Additionally, the controls for the three systems had not been tested or reviewed at least annually. Specifically, one system was last tested in December 2008 and the other two systems were last tested in September 2009 and October 2010, respectively. According to an agency official, the security assessments will be updated in 2014.
At another agency, security tests and evaluations were conducted as a part of the system assessment and authorization process. According to agency officials, the agency completed security tests and evaluations for 2 of its 21 systems and major applications in 2013. It plans to complete the remaining 19 assessments and authorizations by March 2015. A third agency hired an independent contractor in fiscal year 2012 to test or review management, operational, and technical controls for its general support system. However, the contractor did not test all controls for the system. According to an agency official, controls not tested were not within the contracted scope of the assessment. The agency plans to conduct a security assessment and authorization for its new system in fiscal year 2014. The fourth agency lacked sufficient documentation to show that assessments were performed annually. For example, one of the systems selected for review was last tested in 2010 or 2011. The assessments for the other two systems did not identify when the testing of controls occurred, and the agency could not provide documentation to show when it occurred. Further, two of the six selected agencies did not have periodic testing and evaluation programs and did not test the security controls of their systems. According to officials at those agencies, it was not clear that this was an area that needed to be addressed. Without appropriate testing and evaluation, agencies may not have reasonable assurance that controls over their systems are being effectively implemented and maintained. FISMA requires agencies to plan, implement, evaluate, and document remedial actions to address any deficiencies in their information security policies, procedures, and practices.
In its fiscal year 2012 and 2013 FISMA reporting instructions, OMB emphasized that remedial action plans, known as plans of action and milestones (POA&M), are to be the authoritative agency-wide management tool for addressing information security weaknesses. In addition, NIST guidance states that federal agencies should develop a POA&M for information systems to document the organization's planned remedial actions to correct weaknesses or deficiencies noted during the assessment of the security controls and to reduce or eliminate known vulnerabilities in the system. NIST guidance also states that organizations should update existing POA&Ms based on the findings from security controls assessments, security impact analyses, and continuous monitoring activities. According to OMB, remediation plans assist agencies in identifying, assessing, prioritizing, and monitoring the progress of corrective efforts for security weaknesses found in programs and systems. Four of the six selected agencies documented remedial action plans to address identified weaknesses. For instance, one of the agencies documented remedial action plans and included weaknesses identified from security assessments in the POA&M for one of its systems. At another agency, remedial actions to correct weaknesses noted during its assessment were documented. While these four agencies documented remedial action plans, the plans were missing elements required by OMB. For example, one agency's POA&Ms lacked either estimated completion dates or the actual completion date of corrective actions that remediated identified weaknesses. Another agency's POA&Ms lacked elements such as estimated funding sources, severity ratings, milestone completion dates, or changes to milestone completion dates where applicable. Further, two of the six selected agencies did not develop or document remedial action plans. According to agency officials, neither agency was aware of the requirements to document remedial actions.
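The kinds of missing POA&M elements described above lend themselves to a simple completeness check. The sketch below is illustrative only; the field names are assumptions for the example, not OMB's exact POA&M schema:

```python
# Illustrative completeness check for POA&M entries. The required field
# names below are assumptions loosely based on the elements discussed in
# the text (funding sources, severity ratings, completion dates), not an
# authoritative OMB schema.

REQUIRED_FIELDS = [
    "weakness_description",
    "estimated_funding_source",
    "severity_rating",
    "estimated_completion_date",
    "milestone_completion_dates",
]

def missing_elements(poam_entry: dict) -> list:
    """Return the required fields that are absent or empty in a POA&M entry."""
    return [field for field in REQUIRED_FIELDS if not poam_entry.get(field)]

# A hypothetical entry missing its funding source and milestone dates.
entry = {
    "weakness_description": "Outdated system security plan",
    "severity_rating": "moderate",
    "estimated_completion_date": "2014-06-30",
}
print(missing_elements(entry))  # lists the elements the entry still lacks
```

A check of this kind could flag the gaps the agencies' reviewers found manually, such as entries lacking estimated funding sources or completion dates.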
Without an effective process for planning, implementing, evaluating, and documenting remedial actions, these agencies cannot ensure they are addressing deficiencies in their information security policies, procedures, and practices. FISMA requires that agency security programs include procedures for detecting, reporting, and responding to security incidents, including reporting incidents to US-CERT. According to NIST, agencies should create an incident response policy and use it as the basis for incident response procedures. The procedures should then be tested to validate their accuracy and usefulness. The ability to identify incidents using appropriate audit and monitoring techniques enables an agency to initiate its incident response plan in a timely manner. Once an incident has been identified, an agency’s incident response procedures should provide the capability to correctly log the incident, properly analyze it, and take appropriate action. Four of the six small agencies we reviewed had taken steps to develop policies and procedures as required by FISMA and recommended by NIST guidance for incident handling. Specifically, these agencies’ policies and procedures included incident response policies or plans, incident response team policy, procedures for US-CERT notification, and escalation procedures for information security incidents. One agency, for example, had documented policy and procedures for detecting, reporting, and responding to security incidents that required personnel to report incidents involving personally identifiable information to the Chief Information Officer within 1 hour, and all other types of incidents to the agency’s Security Officer. However, these four agencies had not fully documented or tested their incident response policies and procedures. For example: One agency had not updated its incident response policy and plan since 2001. During the course of our review, in December 2013, the agency updated its incident response policy. 
According to agency officials, incident management is currently an ad hoc process. Incident management will be included in agency-wide procedures due to be completed in 2014. Between fiscal year 2011 and 2013, the agency reported one incident to US-CERT. Another agency has developed and documented an incident response policy but has not documented procedures for responding to security incidents. According to agency officials, the agency is in the process of developing and documenting an incident response plan with procedures. The agency took these actions to improve its incident detection and reporting capabilities and awarded a contract for services to further improve and support these capabilities. According to agency officials, this agency reported one incident from fiscal year 2011 to fiscal year 2013. The third agency had documented policies and procedures for its incident response program but had not followed its own policy for testing the incident response plan. According to an agency official, members of the team were aware of the plan and its procedures. Between fiscal year 2011 and fiscal year 2013, this agency reported six incidents to US-CERT. The fourth agency had documented policies and procedures for its incident response program but had not followed its policy for testing its incident response practices. While the agency did not perform testing in 2012, it did test its incident response capability in 2013. According to agency officials, the agency reported eight incidents in fiscal year 2012 and fiscal year 2013. Furthermore, two of the six selected agencies had not developed or documented policies or procedures for incident response. According to officials of one of the agencies, the only incidents it has experienced are viruses, and its ad hoc process is to remove the virus from the laptop. If it cannot be removed, the agency replaces the laptop.
At the second agency, officials stated that they had one known incident, which they believed was a phishing attack. According to an agency official, incidents would be reported or handled by their contractor. However, the contractor could not demonstrate that it had documented incidents or procedures for responding to incidents. According to officials for both agencies, no incidents were reported to US-CERT from fiscal year 2011 through fiscal year 2013. The agencies currently do not have plans to create documented incident response plans or procedures. Without effective policies and procedures, these agencies may be hampered in their ability to detect incidents, report incidents to authorities such as US-CERT, minimize the resultant loss and destruction, mitigate the exploited weaknesses, and restore services. FISMA requires federal agencies to develop and document plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. According to NIST, contingency planning is part of overall information system continuity of operations planning, which fits into a much broader security and emergency management effort that includes, among other things, organizational and business process continuity and disaster recovery planning. These plans and procedures are essential steps in ensuring that agencies are adequately prepared to cope with the loss of operational capabilities due to a service disruption such as an act of nature, fire, accident, or sabotage. According to NIST, these plans should cover all key functions, including assessing an agency's information technology and identifying resources, minimizing potential damage and interruption, developing and documenting the plan, training personnel in their contingency roles and responsibilities and providing refresher training, and testing the plans and making necessary adjustments. Four of the six selected agencies developed contingency planning documents.
These four agencies took steps to implement FISMA requirements and NIST specifications, but had not fully met all requirements. For example: One agency had developed a draft contingency plan for the one system we reviewed but had not yet finalized or approved it. The agency also did not follow its own procedures and did not test the contingency plan. According to agency officials, emergency response training was provided to staff 2 years ago, and its staff meets every few months to ensure that all individuals are aware of their responsibilities in case of an emergency. The agency plans to finalize and test the plan but did not have a date by which this would be done. Another agency completed and tested its disaster recovery plan in fiscal year 2013. However, it has not provided contingency training to its employees or defined the frequency with which training should be conducted. The agency is scheduled to complete these items in December 2014. A third agency had documented a continuity of operations plan that contained a disaster recovery plan. However, contingency plans were not developed or tested for its three information systems. Additionally, according to one agency official, the disaster recovery plan for the agency is outdated. According to the agency's inspector general FISMA report for fiscal year 2013, the agency did not test the plan in 2013 due to competing demands (e.g., a pending office move and launch of a new software program). According to agency officials, the agency intends to reinstitute the annual test exercises in fiscal year 2014. The inspector general's report noted that the agency implemented the core policies and procedures associated with contingency planning, including the creation of a business continuity plan, disaster recovery plan, continuity of operations plan exercises, signature of an alternate processing site agreement, and data backups.
According to an agency official, the plans will be updated once the agency moves to its new location in fiscal year 2014. Additionally, the fourth agency's inspector general identified contingency planning as a weakness in fiscal year 2012. The inspector general reported that the agency did not have a final contingency plan or disaster recovery plan. In addition, the agency lacked a disaster recovery site and did not appropriately test its contingency plan. In fiscal year 2013, the inspector general reported that the agency (1) initiated a program to establish an enterprise-wide business continuity/disaster recovery program, (2) planned to have a disaster recovery site by the end of fiscal year 2014, and (3) tested its draft contingency plan and disaster recovery plan. In March 2014, the agency finalized its contingency plan and disaster recovery plan. Further, two of the six agencies had not developed contingency plans. According to an official at one of the agencies, the data used for the agency's work are stored on individual employees' laptops, and each employee is required to back up the data. If the laptop or data are lost, the employee is responsible for restoring the data from the backup. Otherwise, the employee would have to recreate the data. Without formal backup procedures, the agency is at risk for lost data. Officials at the other agency stated that they did not have concerns about the potential loss of operations. If they were unable to operate, they would still be able to process payments and collect data since those operations are handled by another federal agency and contractor. The uneven implementation of a comprehensive continuity of operations program by the six agencies could lead to less effective recovery efforts and may prevent a successful and timely system recovery when service disruptions occur. Additionally, without appropriate testing, these agencies cannot ensure they can adequately recover from a disaster.
In a separate report for limited official use only, we are providing specific details on the weaknesses in the six selected agencies’ implementation of information security requirements. The major statutory requirements for the protection of personal privacy by federal agencies are the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. In addition, FISMA, which is included in the E-Government Act of 2002, addresses the protection of personal information in the context of securing federal agency information and information systems. Beyond these laws, OMB and NIST have issued guidance for assisting agencies with implementing federal privacy laws. According to the Privacy Act, each agency that maintains a system of records shall, among other things, maintain in its records only such information about an individual as is relevant and necessary to accomplish a required purpose of the agency. Additionally, when an agency establishes or makes changes to a system of records, it must notify the public through a system of records notice in the Federal Register. The notice should include items such as the categories of data collected, the categories of individuals about whom information is collected, the intended “routine” uses of data, and procedures that individuals can use to review and correct personally identifiable information. According to OMB guidance, system of records notices should also be up to date. The E-Government Act requires that agencies conduct privacy impact assessments (PIA) for systems or collections containing personal information. In addition, agencies must ensure the review of the PIA and, if practicable, make the PIA publicly available through the agency’s website, publication in the Federal Register, or other means. 
OMB guidance elaborates on the PIA process by stating, for example, that agencies are required to conduct PIAs when a system change creates new privacy risks (e.g., changing the way in which personal information is being used). According to OMB, the PIA requirement does not apply to all systems. For example, no assessment is required when the information collected relates to internal government operations, the information has been previously assessed under an evaluation similar to a PIA, or when privacy issues are unchanged. The Privacy Act states that agencies must establish rules of conduct for persons involved in the design, development, operation, or maintenance of any systems of records, and establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of records. According to NIST, privacy controls are the administrative, technical, and physical safeguards employed within organizations to protect and ensure the proper handling of PII. Accountability and commitment to the protection of individual privacy includes the appointment of a senior agency official for privacy, as required by OMB. The senior agency official should have overall responsibility for ensuring the agency’s implementation of information privacy protections, including the agency’s full compliance with federal laws, regulations, and policies relating to information privacy, such as the Privacy Act. The six small agencies we reviewed had made mixed progress in implementing these selected privacy requirements, as the following illustrates: Issue system of records notices: Most of the small agencies reviewed did not consistently issue notices. One agency appropriately issued system of records notices, two agencies posted notices that were no longer current, and three agencies did not issue any notices for systems requiring them. 
Of the two agencies with out-of-date system of records notices, one agency is determining which information systems contain information that will require system of records notices. Consequently, an official from this agency stated that the agency needed to update its 2005 notice. Similarly, an official from the other agency stated that the agency's system of records notices will be updated when the agency moves to a new location in fiscal year 2014. Among the three agencies that did not issue system of records notices, officials at two agencies did not believe that they were responsible for issuing the notices. While one of the agencies did not maintain PII in its system, the agency maintained paper files with PII covered by the Privacy Act and thus was responsible for issuing a system of records notice. An official from the second agency believed that other agencies were responsible for completing system of records notices on its behalf. An official from the third agency stated that the agency would revisit system of records notices as part of the reauthorization process for its systems. Conduct privacy impact assessments: Most of the selected small agencies did not consistently conduct privacy impact assessments for all systems containing personally identifiable information. Two agencies conducted privacy impact assessments for systems containing PII. Three agencies did not complete any assessments. The sixth agency was not required to perform an assessment because it did not maintain any systems containing personally identifiable information. Regarding the three agencies that did not complete PIAs, officials offered a variety of reasons why the assessments were not conducted. An official from one of the three agencies originally stated that the agency did not maintain any information systems containing personal information related to employees or members of the public.
However, we determined that this agency's general support system stored e-mail addresses for members of the general public, and therefore a privacy impact assessment should have been completed. An official from the second agency stated that the agency will determine whether its systems containing PII need a privacy impact assessment. The third agency did not conduct privacy impact assessments because officials inappropriately believed that a waiver from OMB relieved them of the requirement to prepare privacy impact assessments. However, no waivers exist for conducting privacy impact assessments, and OMB does not issue such waivers. Assign senior official for privacy: Most of the six selected small agencies assigned a senior agency official for privacy who is responsible for ensuring compliance with all applicable laws and regulations regarding the collection, use, maintenance, sharing, and disposal of personally identifiable information by programs and information systems. Specifically, five of the six agencies had assigned an agency official with overall agency-wide responsibility for information privacy issues, while one agency had not. One of the agencies designated a Chief Privacy Officer, while officials from three other agencies stated that other employees or officers, specifically the Chief Operating Officer, the General Counsel, or the Chief Information Officer, were designated to perform the duties of a privacy officer. The fifth agency designated its Management and Program Officer as the agency's privacy official in 2014. The sixth agency, according to an agency official, did not have many full-time employees and had not identified an agency official responsible for privacy. Incomplete implementation of privacy requirements by five of the six selected agencies may place PII in their systems at risk.
The loss of personally identifiable information can result in substantial harm, embarrassment, and inconvenience to individuals and may lead to identity theft or other fraudulent use of the information. In a separate report for limited official use only, we are providing specific details on the weaknesses in the five selected agencies’ implementation of privacy requirements. While OMB and DHS have various responsibilities in overseeing federal agencies’ implementation of information security and privacy requirements, their oversight of small agencies has been limited. Specifically, OMB and DHS are not overseeing all small agencies’ implementation of cybersecurity and privacy requirements. Moreover, OMB is not reporting small agencies’ performance metrics for privacy in its annual FISMA report to Congress. OMB and DHS have provided a variety of guidance and services to assist agencies in meeting security and privacy requirements, including a recently launched DHS initiative aimed at improving small agencies’ cybersecurity. However, the agencies in our review have faced challenges in using the guidance and services, and additional efforts could better position smaller agencies to take advantage of guidance and services offered. FISMA, the Privacy Act, and the E-Government Act include provisions that require OMB to oversee the implementation of the various information security and privacy requirements at all federal agencies. FISMA requires that OMB develop and oversee the implementation of policies, standards, and guidelines on information security at executive branch agencies and annually report to Congress on agencies’ compliance with the act. The Privacy Act gives OMB responsibility for developing guidelines and providing “continuing assistance to and oversight of” agencies’ implementation of the act. 
The E-Government Act of 2002 also assigns OMB responsibility for developing PIA guidance and ensuring agency implementation of the privacy impact assessment requirement. Since 2010, DHS has assisted OMB in overseeing executive branch agencies' compliance with FISMA, overseeing cybersecurity operations, and providing related assistance. DHS cybersecurity oversight activities have also included privacy-related matters initiated by OMB in its continuing oversight of the implementation of the Privacy Act and the E-Government Act. In overseeing small agencies' implementation of information security and privacy requirements, OMB and DHS have instructed the agencies to report annually on a variety of metrics, which are used to gauge implementation of the information security programs and privacy requirements established by the various acts. The metrics cover areas such as risk management, security training, remediation programs, and contingency planning. Over time, these metrics have evolved to include administration priorities and baseline metrics intended to improve oversight of FISMA implementation and federal information security. To report on the annual metrics, all federal agencies use an interactive data collection tool called CyberScope. In its 2013 annual report to Congress on agencies' implementation of FISMA, OMB reported that small agencies improved their implementation of FISMA capabilities from fiscal year 2012 to fiscal year 2013. For example, small agencies' reported rate of providing security awareness training to users increased from 85 percent in fiscal year 2012 to 96 percent in fiscal year 2013. Another noted area of improvement was controlled incident detection, where small agencies' reported capability increased from 53 percent in fiscal year 2012 to 69 percent in fiscal year 2013. In addition, the number of small agencies reporting to OMB increased from 50 in fiscal year 2012 to 57 in fiscal year 2013.
However, as of March 2013, 55 of 129 small agencies registered to use CyberScope had never reported to OMB on the implementation of their information security programs. Further, one of the agencies in our review has never registered to use CyberScope or reported to OMB. The other agency, although initially registering to use CyberScope when it was first developed, never submitted its annual report and last reported to OMB in 2008. According to DHS officials, they report to OMB on which agencies met or did not meet the annual reporting requirement. Further, the list of agencies DHS reports on is limited to those that have registered for CyberScope. DHS officials also stated that reminders are sent to agencies about CyberScope reporting dates. However, DHS officials stated they have no mechanism in place to force agencies to comply with the annual reporting requirement. Establishing a mechanism, such as publishing a list of agencies not meeting the annual reporting requirements, could lead to greater transparency and compliance. With regard to privacy oversight, OMB did not include in its 2013 report to Congress small agencies’ performance in implementing privacy requirements, despite collecting this information. Rather, privacy information was only included for larger agencies. According to OMB officials, privacy data are collected for all agencies through various methods, in addition to CyberScope reporting. These include, for example, E-Government Act section 208 reviews, reviews of system of records notices, and computer matching agreements. OMB officials further stated that it is up to agencies to adhere to privacy requirements and official guidance. However, as discussed earlier, three of the selected agencies in our review had not met privacy requirements. Including data on small agencies’ implementation of privacy requirements in OMB’s annual report to Congress could provide additional transparency and oversight. 
OMB has provided guidance to federal agencies, including small agencies, on information security and privacy. Specifically, OMB has issued several memorandums intended to guide agencies in implementing FISMA, E-Government Act, and Privacy Act requirements, as well as other cybersecurity and privacy guidance intended to address shortcomings in federal systems and privacy requirements. Table 3 lists examples of key information security and privacy guidance issued by OMB. In addition to guidance, according to OMB officials, OMB regularly works with all agencies to discuss implementation of privacy requirements, both directly and through Chief Information Officer Council meetings. The Privacy Committee of the council is one mechanism used to communicate with agencies. According to OMB officials, agencies with a senior agency official for privacy are invited to attend these meetings, and small agencies may also participate. Further, OMB officials stated that they have separate meetings with small agencies, as appropriate. For example, according to OMB officials, their staff recently gave a detailed talk on privacy requirements to the Small Agency Council—General Counsel Forum. Since 2010, DHS has had responsibilities in accordance with an OMB memorandum for overseeing and assisting federal agency efforts to provide adequate, risk-based, and cost-effective cybersecurity. Its activities have also included a number of privacy-related matters that assist OMB in carrying out its privacy oversight responsibilities. In undertaking these activities, DHS offers a variety of services to assist all federal agencies with implementing aspects of their information security and privacy programs (see table 4). According to DHS, four of the six small agencies in our review used some services offered by the department in fiscal years 2012 and 2013. For example, DHS hosted advisory events in fiscal year 2012 for chief information officers of small agencies.
These events covered topics such as continuous monitoring, FISMA, and insider threat briefings, among others. According to DHS officials, two general Chief Information Security Officer (CISO) Advisory Council events were held in fiscal year 2013. Small agencies attended these events. The focus of current events has moved to the Continuous Diagnostics and Mitigation Exercise Evaluation Guide meetings. This is due to the focus on continuous monitoring mandated by OMB. According to DHS officials, this was a natural transition as departments and agencies had more interest in learning about Continuous Diagnostics and Mitigation than in some of the other initiatives. Four of the six agencies in our review used a DHS-offered service to seek clarification and ask questions regarding FISMA issues. Two of the six agencies in our review participated in the National Cybersecurity Assessment and Technical Services for 2013. DHS is working with one agency in our review on recruiting and retaining cybersecurity expertise, providing additional information on insider threats and threat awareness programs, and obtaining clarification on CyberScope reporting. DHS is working with another agency in our review on its risk and vulnerability assessment, remediation strategies, and continuous monitoring policy development. One agency in our review participated in the privacy workshop. While OMB and DHS have provided agencies with guidance through their website, workshops, OMB’s MAX portal, and e-mail distribution lists, the six agencies in our review faced challenges with using the guidance. The following are examples of challenges in using OMB and DHS guidance identified by the small agencies we reviewed: OMB guidance directs agencies to use NIST guidance. However, according to agency officials in our review, since some smaller agencies do not have technical staff, they have difficulty interpreting and implementing the voluminous and technical publications issued by NIST. 
Two of the six agencies were either not aware of privacy guidance that is available or thought that the agency was not responsible for applying the guidance. OMB and DHS did not provide evidence that they had reached out to all small agencies. As a result, it is not clear whether the six selected agencies were notified of issued privacy guidance. According to OMB officials, due to the large decentralized nature of the federal government, the opportunities to reach out to all federal agencies, whether large or small, are limited. Consequently, OMB distributes its guidance documents to a broad group and posts them on its website for easy access. Similarly, while OMB and DHS offered chief information security officer advisory councils, chief information officer meetings for small agencies, and privacy workshops to all federal agencies, the six small agencies in our review faced challenges with attending. The following are examples of challenges the small agencies in our review identified: According to agency officials, the meetings that were held focused on cybersecurity issues faced by large agencies. Small agencies do not face the same technical issues and may not have the same capabilities, resources, personnel, and/or expertise as larger agencies to implement necessary cybersecurity requirements. Agency officials also stated that, since smaller agencies have fewer cybersecurity staff, they may not be available to attend meetings held by DHS. An official at one agency stated that when meetings require security clearances to attend, smaller agencies are unable to attend since their staff does not have available funds or a need to obtain the necessary clearances. Agency officials also noted they were not always made aware of meetings held by OMB or DHS, including chief information security officer advisory councils, small agency meetings, and privacy workshops. 
During the course of our review, in December 2013, DHS established the Small & Micro-Agency Cybersecurity Support initiative. The initiative is intended to provide support to small agencies for implementing and improving cybersecurity programs. Through this initiative, DHS intends to provide IT security planning assistance and cybersecurity support to small agencies within the federal civilian executive branch. The support is focused on agencies that are attempting to enhance their cybersecurity posture but currently do not have the capabilities, resources, personnel, and/or expertise to implement necessary requirements. In January 2014, DHS held a Small & Micro-Agency Cybersecurity workshop intended to inform small agencies on the various services offered to help them implement and improve their cybersecurity programs. For this workshop DHS contacted agencies from the small agency Chief Information Security Officer (CISO) Advisory Council events. At the workshop, DHS discussed options and strategies for implementing the Trusted Internet Connections initiative; its initiative providing support to small agencies; the Continuous Diagnostics & Mitigation program; blue teams, red teams, assessments, outcomes, and solutions; US-CERT capabilities and incident reporting procedures at federal agencies; and fiscal year 2014 and 2015 challenges. As of February 2014, five agencies were participating in a pilot program for the Small and Micro-Agency Cybersecurity Support Initiative, including two of the six agencies from our review. As DHS continues with the pilot program, developing services and guidance that address the challenges discussed in this report could further assist small agencies. For example, guidance and assistance targeted to these agencies' environments could help them improve the implementation of their security programs and various privacy requirements.
Securing information systems and protecting the privacy of personal information is a challenge for the small agencies we reviewed. Although these agencies have implemented elements of an information security program and privacy requirements, weaknesses put agencies' information systems and the information they contain at risk of compromise. Addressing these weaknesses is essential for these agencies to protect their information and systems. Without adequate safeguards, the small agencies we reviewed will remain vulnerable to individuals and groups with malicious intentions, who may obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Moreover, while OMB and DHS have continued to oversee agencies' information security programs and implementation of privacy requirements and provide guidance and services, they have not consistently ensured that all small agencies have reported on their compliance with security and privacy requirements, making it more difficult to accurately assess the extent to which agencies are effectively securing their information and systems. Additionally, those agencies that were aware of the guidance and services have been challenged in using them. Without additional assistance, oversight, and collection of security and privacy information for the selected small agencies, OMB and DHS may be unaware of the agencies' implementation of requirements and the assistance that is needed. To improve the consistency and effectiveness of government-wide implementation of information security programs and privacy requirements at small agencies, we recommend that the Director of OMB include in the annual report to Congress on agencies' implementation of FISMA a list of agencies that did not report on implementation of their information security programs, and information on small agencies' implementation of privacy requirements.
In addition, we recommend that the Secretary of Homeland Security, as part of the department's Small & Micro-Agency Cybersecurity Support Initiative, develop services and guidance targeted to small and micro agencies' environments. In a separate report with limited distribution, we are also making detailed recommendations to the selected agencies in our review to correct weaknesses identified in their information security and privacy programs. We provided a draft of this report to the six agencies selected for our review, as well as to DHS, the Office of Personnel Management, and OMB. We received written responses from DHS, the Federal Trade Commission, and the James Madison Memorial Foundation. These comments are reprinted in appendices II through IV. We received e-mail comments from OMB, the National Endowment for the Humanities, and the International Boundary Commission, United States and Canada. The other three agencies had no comments on our report. The audit liaison for OMB responded via e-mail on June 10, 2014, that OMB generally agreed with our recommendations and provided technical comments. We incorporated them as appropriate. In its written comments (reproduced in appendix II), DHS concurred with our recommendation and identified actions it has taken or plans to take to implement our recommendation. For example, as part of its fiscal year 2014 hiring plan, the National Protection and Programs Directorate's Office of Cybersecurity and Communications is establishing and expanding a new federal customer service unit within the United States Computer Emergency Readiness Team to better understand the circumstances and needs of the various federal civilian departments and agencies, including small and micro agencies. According to DHS, the customer service unit will help develop and improve services and guidance that address the particular needs of agencies with 6,000 full-time employees or fewer.
According to DHS, these actions will be completed by April 30, 2015. In its written comments (reproduced in app. III), the Federal Trade Commission acknowledged that improvements can be made in aspects of its information security program and described steps it has taken or plans to take to address weaknesses we identified. In its written comments (reproduced in app. IV), the James Madison Memorial Foundation reiterated that it is one of the smallest agencies in the federal government, with only three full-time employees and one half-time employee, and that it had operated since November 2010 with the understanding that the agency was granted an exemption from FISMA by OMB officials. However, the agency stated that it plans to take the necessary actions to conform to FISMA requirements. The Chief Information Officer for the National Endowment for the Humanities provided comments via e-mail on June 6, 2014. He discussed the usefulness of the report contents and noted that it was very much needed. In addition, he noted that GAO's report highlights the lack of compliance with reporting requirements by small agencies and that these agencies may be struggling to meet all requirements. He further commented that large agencies, unlike small agencies, have dedicated IT staff and that there should not be a "one size fits all" set of requirements for all federal agencies. However, while smaller federal agencies may not have dedicated IT staff, we believe federal agencies, large or small, should perform an assessment of their risks and implement appropriate safeguards to reduce risk to an acceptable level. He also provided technical comments, which we incorporated as appropriate. The Acting Commissioner for the International Boundary Commission, United States and Canada, provided comments via e-mail on June 5, 2014.
The Acting Commissioner stated that he disagreed with our statement that all computer equipment within the agencies reviewed contained classified or sensitive information. However, our report does not state this; rather, it discusses the selected agencies' actions to implement federal information security and privacy requirements. We believe our characterization of the weaknesses identified is accurate as of the time of our review. The Deputy Chief Risk Officer for the Federal Retirement Thrift Investment Board and the audit liaisons for the Office of Personnel Management and National Capital Planning Commission responded via e-mail that these agencies did not have any comments on the draft report. We are sending copies of this report to the Secretary of Homeland Security, the Director of the Office of Management and Budget, and the heads of the six agencies we reviewed. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, or Dr. Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Our objectives were to determine the extent to which (1) selected small agencies are implementing federal information security and privacy laws and policies, and (2) the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) are overseeing and assisting small agencies in implementing their information security and privacy programs. To assess how small agencies were implementing federal information security and privacy laws, we selected six agencies for review.
We selected these six agencies by creating a list of all small, micro, and independent regulatory agencies using definitions from OMB Circular A-11, CyberScope, the Paperwork Reduction Act, USA.gov, and Office of Personnel Management information. We used OMB's definition of small agencies as agencies with fewer than 6,000 employees and micro agencies as agencies having fewer than 100 employees. We excluded the 24 agencies covered by the Chief Financial Officers Act, agencies that are part of the Executive Office of the President, agencies from the intelligence community, and agencies whose financial statements are audited annually by GAO. We selected the agencies by organizing the list of small agencies into five primary areas: (1) boards, commissions, and corporations reporting through CyberScope; (2) boards, commissions, and corporations not reporting through CyberScope; (3) independent regulatory agencies; (4) memorial, arts, foundations, and administrative agencies reporting through CyberScope; and (5) memorial, arts, foundations, and administrative agencies not reporting through CyberScope. Using a randomly generated number, we selected one agency from each area. The five resulting agencies were the (1) Federal Trade Commission; (2) International Boundary Commission, United States and Canada; (3) James Madison Memorial Fellowship Foundation; (4) National Capital Planning Commission; and (5) National Endowment for the Humanities. We selected the sixth agency, the Federal Retirement Thrift Investment Board, because it had experienced a significant data breach involving personally identifiable information. Due to the sensitive nature of the information discussed, throughout the report we do not refer to the six agencies by name.
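The stratified selection described above, grouping small agencies into five areas and drawing one agency at random from each, can be sketched as follows. This is an illustration only: the area names are paraphrased from the report, and the agency entries ("A1", "B2", and so on) are placeholders rather than the actual sampling frame GAO used.

```python
import random

# One agency drawn at random from each of five strata; entries are
# placeholders, not the actual list of small agencies.
strata = {
    "boards/commissions/corporations reporting through CyberScope": ["A1", "A2", "A3"],
    "boards/commissions/corporations not reporting": ["B1", "B2"],
    "independent regulatory agencies": ["C1", "C2", "C3"],
    "memorial/arts/foundations/administrative reporting through CyberScope": ["D1", "D2"],
    "memorial/arts/foundations/administrative not reporting": ["E1", "E2", "E3"],
}

rng = random.Random(0)  # seeded so this sketch is repeatable
selected = {area: rng.choice(agencies) for area, agencies in strata.items()}
for area, agency in selected.items():
    print(f"{area}: {agency}")
```

Drawing one agency per stratum guarantees each of the five categories is represented, which is why the sixth agency (chosen judgmentally because of its data breach) had to be added outside this procedure.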
To identify agency, OMB, and National Institute of Standards and Technology (NIST) responsibilities for agency information security and privacy, we reviewed and analyzed the provisions of the E-Government Act of 2002, Federal Information Security Management Act (FISMA) of 2002, and the Privacy Act of 1974. At each of the six agencies, we interviewed senior information security program and privacy staff, observed controls, and conducted technical reviews to gain an understanding of the agency, the information technology environment, and the information security and privacy programs. To evaluate agencies’ implementation of their information security responsibilities, we reviewed and analyzed agency documentation and compared it to provisions in FISMA and NIST guidance. We reviewed information security policies and procedures, information technology security-related audit reports, CyberScope data (where available), and inspector general reports for work conducted in fiscal years 2011, 2012, and 2013. To evaluate the privacy programs at each agency, we assessed whether the six agencies had established plans for privacy protections and conducted impact assessments for systems containing personally identifiable information, as required by the E-Government Act. We assessed whether the six agencies had issued system of records notices for each system containing personally identifiable information, as called for by the Privacy Act. We reviewed OMB memorandum M-03-22 and NIST Special Publication 800-122 to select privacy elements required of federal agencies. We then reviewed and analyzed documents from the selected agencies, including privacy policies and procedures, to determine whether they adhered to the requirements set forth in OMB and NIST guidance. We also interviewed agency officials to determine what assistance they had requested and received from OMB and areas where it would have been beneficial to receive additional assistance. 
Because of the small number of agencies reviewed, our findings are not representative of any population of small agencies and our results only apply to the six selected agencies and to their selected systems. To determine the extent to which DHS and OMB are overseeing and assisting small agencies in implementing information security program requirements, we reviewed OMB’s guidance to determine the Department of Homeland Security’s responsibilities. We reviewed and analyzed DHS’s and OMB’s policies, procedures, and plans related to security to determine the level of guidance DHS provided to small federal agencies. We reviewed DHS’s and OMB’s fiscal years 2011, 2012, and 2013 guidance for agency reporting on FISMA and compared it to FISMA requirements. Additionally, we reviewed the six agencies’ fiscal years 2011 and 2012 FISMA data submissions to determine the extent to which DHS uses data to assist agencies in effectively implementing information security program requirements. We interviewed DHS officials in the Office of Cybersecurity and Communications, U.S. Computer Emergency Readiness Team (US-CERT), Federal Network Resilience Division, and other DHS entities. We reviewed and analyzed documentation that supported agency assistance requests, technical alerts, after-action reports, and other available documentation to determine the extent to which US-CERT tracks and provides assistance to small agencies. We conducted interviews with OMB officials based on the documentation and information provided. We did not evaluate the implementation of DHS’s FISMA-related responsibilities assigned to it by OMB. To evaluate the extent to which DHS and OMB are overseeing and assisting small agencies in implementing privacy laws and policies, we reviewed OMB-issued guidance on Privacy Impact Assessments and each selected agency’s privacy notices. Additionally, we reviewed DHS’s privacy guidance. 
We met with DHS and OMB officials to determine the actions taken to provide assistance and oversight to federal agencies. To determine the reliability and accuracy of the data, we obtained and analyzed data from each agency that addressed the security and privacy internal controls of the systems used to collect the data. Specifically, we analyzed data regarding access controls, incident reporting, security awareness training, change management, and remediation of weaknesses. In addition, we interviewed agency officials responsible for the collection and reporting of the data. Based on these procedures, we determined the data were sufficiently reliable for the purpose of this report.

We conducted this performance audit from January 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contacts named above, the following made key contributions to this report: Edward Alexander, Jr., and Anjalique Lawrence (assistant directors), Cortland Bradford, Debra Conner, Rosanna Guerrero, Wilfred B. Holloway, Lee McCracken, David F. Plocher, Zsaroq Powe, Brian Vasquez, and Shaunyce Wallace.

Small federal agencies—generally those with 6,000 or fewer employees—are, like larger agencies, at risk from threats to information systems that support their operations and the information they contain, which can include personally identifiable information. Federal law and policy require small agencies to meet information security and privacy requirements and assign responsibilities to OMB for overseeing agencies' activities. OMB has assigned several of these duties to DHS.
GAO was asked to review cybersecurity and privacy at small agencies. The objectives of this review were to determine the extent to which (1) small agencies are implementing federal information security and privacy laws and policies and (2) OMB and DHS are overseeing and assisting small agencies in implementing their information security and privacy programs. GAO selected six small agencies with varying characteristics for review; reviewed agency documents and selected systems; and interviewed agency, OMB, and DHS officials. The six small agencies GAO reviewed have made mixed progress in implementing elements of information security and privacy programs as required by the Federal Information Security Management Act of 2002, the Privacy Act of 1974, the E-Government Act of 2002, and Office of Management and Budget (OMB) guidance (see figure). *Agency 5 was not required to complete a privacy impact assessment. In a separate report for limited official use only, GAO is providing specific details on the weaknesses in the six selected agencies' implementation of information security and privacy requirements. OMB and the Department of Homeland Security (DHS) took steps to oversee and assist small agencies in implementing security and privacy requirements. For example, OMB and DHS instructed small agencies to report annually on a variety of metrics that are used to gauge implementation of information security programs and privacy requirements. In addition, OMB and DHS issued reporting guidance and provided assistance to all federal agencies on implementing security and privacy programs. However, 55 of 129 small agencies identified by OMB and DHS are not reporting on information security and privacy requirements. Further, the agencies in GAO's review have faced challenges in using the guidance and services offered. 
Until OMB and DHS oversee agencies' implementation of information security and privacy program requirements and provide additional assistance, small agencies will continue to face challenges in protecting their information and information systems. GAO recommends that OMB report on all small agencies' implementation of security and privacy requirements. GAO also recommends that DHS develop services and guidance targeted to small agencies' environments. GAO is making recommendations to the six agencies reviewed to address their information security and privacy weaknesses in a separate, restricted report. OMB and DHS generally concurred with the recommendations. |
Travel and transportation expenses for transferred employees, new appointees, or student trainees, including moving expenses and relocation programs, among other aspects of the relocation programs, are authorized by 5 U.S.C. §§ 5721-5739. Agencies are authorized to pay the expenses for the sale of a current employee’s residence if it is in the interest of the government. Agencies are also authorized to hire contractors to administer these services. Agencies contract with relocation management companies to manage home sale assistance. These companies either purchase or facilitate the purchase of a relocating employee’s home. This allows agencies to relocate employees quickly, without the employee facing a financial burden for maintaining a home in both the old and the newly assigned duty station. Home sale assistance can also be used to address mission critical skills occupations, which are one or more of the following: a staffing gap in which an agency has an insufficient number of individuals to complete its work or a competency gap in which an agency has individuals without the appropriate skills, abilities, or behaviors to successfully perform the work. Agencies can provide relocating employees with home sale assistance through AVO, Amended Value Sale (AVS), and Buyer Value Option (BVO). Under AVO, the relocation management company buys an employee’s home for its appraised value if it cannot be sold during a stated period of time. A specified number of appraisers determine the value of the home and the average is the appraised value. This provides the relocating employee earlier access to the equity from the former home that can be used toward a home at the new duty station. AVS allows an employee approved for AVO to find a buyer willing to pay a higher price than the appraised value of the home before an employee has accepted the appraised value offer from the relocation management company. 
Once the employee receives a bona fide offer, the employee can sell the house; if the offer falls through, the relocation management company purchases the house for the offered price. Under BVO, the relocation management company purchases an employee's home after a bona fide offer from a buyer has been made. According to GSA officials, appraisals, which can cost up to $3,000, are typically only conducted for BVO after the employee has been marketing the home for 6 months. In fiscal year 2015, the average fees for federal agencies, including VA, using the GSA contract described below were more than twice as high for AVO as for AVS and BVO. Specifically, the average fees were 25 percent for AVO, 11 percent for AVS, and 10 percent for BVO. Similarly, VA's fees for AVO were also more than twice as high as its fees for AVS and BVO. The fees for each are a percentage of the sales price of the home. In fiscal year 2015, about 60 percent of homes sold via GSA's contract were AVS or BVO and the remainder were AVO. In fiscal year 2015, about 17 percent of homes sold under VA's home sale program were AVO and the others were AVS or BVO. GSA's role in the employee relocation process includes issuing regulations that apply to all federal agencies, managing a contract that relocation management companies and agencies can use, and providing assistance and guidance to agencies. GSA issues the Federal Travel Regulation (regulation), which includes travel, transportation, and relocation policies, rules for relocation allowances, and agency reporting requirements to GSA. GSA has specific authority to issue regulations governing travel and transportation expenses, including relocation allowances. The regulation also outlines employee eligibility requirements, agency responsibilities (including rules for setting internal policies before authorizing relocation allowances), the timing of authorization processes, and who can authorize and approve relocation expenses.
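Because the fees are a percentage of the home's sale price, the cost gap between the programs is simple arithmetic. The sketch below applies the fiscal year 2015 average rates cited above (25 percent for AVO, 11 percent for AVS, 10 percent for BVO) to a hypothetical sale price; the function name and the $300,000 price are illustrative, not figures from the report.

```python
# Fiscal year 2015 average fee rates cited in this report, expressed as
# whole percentages of the home's sale price.
FEE_PERCENT = {"AVO": 25, "AVS": 11, "BVO": 10}

def home_sale_fee(sale_price, program):
    """Fee paid to the relocation management company: a percentage of
    the home's sale price, varying by program type."""
    return sale_price * FEE_PERCENT[program] / 100

price = 300_000  # hypothetical sale price
for program in ("AVO", "AVS", "BVO"):
    print(f"{program} fee on a ${price:,} home: ${home_sale_fee(price, program):,.0f}")
# AVO fee on a $300,000 home: $75,000
# AVS fee on a $300,000 home: $33,000
# BVO fee on a $300,000 home: $30,000
```

At these rates, an AVO sale of a $300,000 home would cost an agency roughly $45,000 more in fees than a BVO sale, which helps explain why the fee differential matters in agencies' choice among the three options.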
In addition, agencies are required to report relocation activities to GSA if they spend more than $5 million a year on travel and transportation, including relocation expenses. Ultimately, however, GSA officials stated that GSA does not have enforcement authority over agency compliance with the regulation and can only issue non-binding regulation guidance. According to GSA officials, GSA works with industry experts and agency representatives to develop a contract for home sale assistance that agencies can use to work with relocation management companies to provide home sale assistance to employees. The contract includes vendor requirements such as a statement of work. Within the confines of the contract, agencies can tailor relocation assistance requirements to fit their needs. GSA also provides guidance and assistance that is available to all agencies in three ways, according to GSA officials: (1) GSA hosts bi-monthly agency teleconferences, (2) GSA hosts an annual forum, and (3) GSA provides one-on-one assistance to agencies. In addition, according to Office of Personnel Management officials, the Office of Personnel Management plays a relatively minor role in home sales and federal agencies are not required to report to the Office of Personnel Management on home sales and their use of related relocations. The Office of Personnel Management has a review and oversight role over agencies offering relocation programs when federal guidelines are not followed, and Office of Personnel Management officials stated that they had not seen documentation of use of AVO in their reviews of agencies' personnel files. VA has a process both for approving the use of AVO and for employees' participation. In late 2016, VA clarified the AVO approval process by stating that approval must be obtained before initiating recruitment efforts. VA requires a written justification for offering AVO in a job announcement.
The justification must be based on the critical need for the position and difficulty in recruiting for the position without offering AVO, substantiated by recent unsuccessful recruitment efforts. This is a new policy since 2015, according to VA officials. The decision to use AVO is to be made by the hiring manager in consultation with the human resource specialist. The human resource specialist is to provide consultation to help determine whether the position is designated as difficult to fill or will meet a critical need. The job opportunity announcement is to clearly state whether AVO is or is not authorized. In addition, multiple employees are responsible for making sure that the approval process is correctly implemented, including the hiring official at the employee's new post, the human resources office, and the assigned approving officials. There is also a process for employees' participation. Employees authorized to use AVO are required to participate in home sale counseling provided by the relocation contractor and cannot list their home until their travel authorization has been approved. According to VA officials, counseling includes asking employees a series of questions to determine if their home is eligible for participation, such as whether the home is the employee's current residence. Employees are also required to list their homes for sale within 90 days of initiation with the relocation services contractor. After the relocation contractor provides the appraised value of the home, employees have 60 calendar days to either decline or accept if an offer is not made by an outside buyer. The employee is also required to meet marketing and inspection requirements to accept the appraised value offer. In addition, the regulations require a service agreement that specifies the obligated service period after relocation for which the employee must serve in the government in order to avoid incurring a debt to the government.
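The two participation deadlines described above (listing the home within 90 days of initiation with the relocation services contractor, and 60 calendar days to accept or decline the appraised value offer) amount to simple date arithmetic. The sketch below is illustrative; the dates and function names are hypothetical, not drawn from VA policy documents.

```python
from datetime import date, timedelta

# Deadlines from the AVO participation process described above.
LISTING_WINDOW = timedelta(days=90)   # list home within 90 days of initiation
DECISION_WINDOW = timedelta(days=60)  # 60 calendar days to act on the offer

def listing_deadline(initiation_date):
    """Latest date to list the home after initiating with the contractor."""
    return initiation_date + LISTING_WINDOW

def decision_deadline(offer_date):
    """Latest date to accept or decline the appraised value offer."""
    return offer_date + DECISION_WINDOW

print(listing_deadline(date(2016, 1, 4)))   # 2016-04-03
print(decision_deadline(date(2016, 4, 1)))  # 2016-05-31
```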
If a service agreement is violated (other than for reasons beyond the employee's control that are accepted by the agency), the employee would be required to reimburse all costs that the agency had paid toward relocation expenses, including the withholding tax allowance and relocation income tax allowance. As shown in figure 1, between fiscal years 2012 and 2016, federal agencies' spending on AVO, including VA's, and the number of homes bought through GSA's contract varied. According to GSA officials, about 80 percent of all federal agencies' home sale transactions, including VA's, are done through GSA's contract. GSA officials said that the variation from fiscal year 2012 to 2016 was a result of changing agency relocation needs from year to year to meet mission requirements, fluctuating real estate markets, and the location and value of the homes. As shown in figure 2, VA's spending on AVO between fiscal years 2012 and 2016 also varied. It dropped from a high of over $3.5 million and 51 home sale transactions in fiscal year 2014 to a low of about $80,000 and 1 home sale transaction in fiscal year 2016. VA officials stated that more was spent on AVO in fiscal year 2014 because the fees for AVO were higher that year and home sale prices increased as real estate markets recovered. The sharp decline in VA's home sale count and expenditures in fiscal year 2016 is due to VA's suspension of AVO in October 2015 after the VA Inspector General investigation. VA's fiscal year 2016 appropriations prohibited, among other things, the use of funds for AVO for Senior Executive Service employees unless certain conditions were met, a waiver from the Secretary was obtained, and Congress was notified within 15 days. The one employee for whom VA used fiscal year 2016 funds was not in the Senior Executive Service; thus, the statutory prohibition was not applicable.
Most of the 20 agencies with an operational AVO that completed the questionnaire we sent them reported that they rely on AVO policies that include two types of internal controls. An internal control is a process effected by an entity's oversight body, management, and other personnel that provides reasonable assurance that the entity's objectives will be achieved. In the context of AVO, policies that include two types of internal controls are critical. First are transaction control activities: actions built directly into operational processes to support the entity in achieving its objectives and addressing related risks. For example, 18 of the 20 agencies reported that the AVO approval process must be complete before payments are made. In addition, 17 of the 20 agencies reported that the approval process for AVO is included in the agency's written policies. Second is assessing and responding to misconduct risks, which includes considering how authority or position can be misused for personal gain. For example, 19 of the 20 agencies reported that their AVO had safeguards to prevent AVO from being used for the personal gain of employees. An agency could also strengthen the approval process for its permanent change of station program by requiring an independent review to ensure moves and expenses are appropriate and justified. While the 20 agencies with an operational AVO that completed our questionnaire reported they had not examined whether AVO improved recruitment or retention of staff during fiscal years 2012 to 2016, 12 of the 20 agencies anecdotally provided examples of how AVO has been beneficial. For example, 4 agencies reported AVO minimized the financial risks or burdens of employees who are relocating, such as not having two mortgages. Four other agencies reported AVO assisted them in recruiting the most qualified employees or assisted them in recruiting and retaining employees for hard-to-fill positions.
Four agencies reported AVO assists in filling positions in rural areas or areas with depressed real estate markets. In addition, 7 of the 20 agencies with an operational AVO stated they use AVO for mission critical skills, such as medical officers, engineers, and courthouse protection positions. Fourteen of the 20 agencies with an operational AVO reported GSA had provided assistance or guidance to them. Two of the 14 agencies also reported that additional assistance from GSA would be helpful: one agency reported it would like training for individuals who administer AVO, and another agency reported it would like assistance on negotiating lower fees. In addition, 2 agencies with an operational AVO described the following practices they implemented based on lessons learned from their administration of AVO. One agency stated that providing pre-clearance for employees to participate in AVO can save the agency time initiating AVO. This agency started using a pre-clearance form that asks employees questions to ensure they meet basic eligibility criteria, for example, whether the house is under foreclosure or has a lien on it. If the house does not qualify, the agency is spared the time spent initiating AVO. The agency has not quantitatively tracked the effect of this pre-clearance but stated that it found it helpful. The agency plans to look for ways to improve the pre-clearance form. Another agency stated that employees need coaxing to find buyers for their homes and depend on AVO to avoid carrying two mortgages. This agency instituted an optional program that provides relocating employees with housing allowances for their move as well as an increased bonus for selling the home to an outside buyer, if the employee keeps the home on the market after the AVO offer is provided. The agency plans to continue developing more effective communication for employees to understand relocation assistance and promote AVS. This pilot program was approved by GSA.
GSA officials told us that VA or another agency could apply to implement a similar, though not identical, pilot program to determine whether it yields comparable benefits or cost savings in the interest of the government. However, after a pilot program is determined to be successful, GSA's Office of Government-wide Policy could, according to GSA officials, choose to draft a legislative proposal requesting that Congress statutorily permit other agencies to implement the same program. In interviews, GSA officials identified the following six good practices, based on lessons learned from GSA's role issuing regulations, managing the contract that agencies can use, and providing assistance and guidance, that they believe agencies should incorporate into their AVO:

1. When mission allows, agencies should implement the more cost-effective BVO home sale assistance before referring a home to a more expensive option, such as AVO.

2. Pre-decision counseling helps minimize the number of employees who start the home sale process and then drop out.

3. Agencies should cap the home listing price at no more than 110 percent of the appraised value. Houses priced too high will have few interested buyers and will stay on the market longer, thus increasing an agency's costs.

4. A relocating employee should start working with the agency's relocation management company early in the home sale process rather than after the employee has been unable to sell the home. Agencies increase their potential for more cost-effective home sale transactions when homes are marketed effectively from the outset.

5. Agencies can reduce service fees by requiring use of the relocation management company's network real estate agents when employees list their houses. The network real estate agent then pays the relocation management company a referral fee, which results in lower costs for the agency.

6. Regular meetings with relocation management companies to review the status of each transferee keep agencies apprised of what the agency can do to encourage transferees to be more engaged in selling their homes. This results in higher sales and lower contractor fees.

We examined the extent to which VA's AVO included the good practices based on lessons learned from GSA and found that VA's AVO included all of them. For example, VA offers pre-decision counseling, and VA employees work with the relocation company before their home is put on the market. In addition, before an employee participates in AVO, VA asks the employee questions to ensure the home to be sold meets basic criteria. VA conducted two recent reviews that had recommendations related to AVO. According to VA officials, the two reviews resulted in VA updating its AVO approval process and adding the updated process to VA's human resources handbook on aids to recruitment. VA also updated its financial policy in December 2016 to include an annual review of historical data related to VA's home sale program, including home sale transaction costs and median home sale values. As shown in table 1, VA implemented or closed all of the reviews' recommendations related to AVO. In addition, as shown in table 2, VA has implemented new AVO policies that include internal controls since fiscal year 2016, when VA suspended AVO. VA's approval process for AVO is a case-by-case approval granted by different officials for Senior Executive Service and non-Senior Executive Service employees. For Senior Executive Service employees, the policy is now that a secretarial waiver is needed and Congress is notified of the need to fill the position.
The Senior Executive Service waiver provision and congressional notification requirement were enacted in VA's fiscal year 2016 appropriation, applicable to funds appropriated by that act for employees of the department in a senior executive position participating in the Home Marketing Incentive Program or AVO. However, there is no current statutory mandate for VA's policy regarding the Senior Executive Service waiver and congressional notification requirement. For non-Senior Executive Service employees, under secretaries, assistant secretaries, and other key officials serve as the approving officials. VA officials stated that as a result of the Inspector General's 2015 report, they identified a need for additional training for human resources officials on relocation and recruitment, including AVO. VA officials told us they developed a training module on relocation and recruitment; according to VA officials, a webinar using the module was conducted in March 2017. Under VA's policy, VA collects some data on the use of AVO, including how much is spent and the number of completed AVO transactions. VA also collects data on whether the employees who used AVO were in the Senior Executive Service and on the employees' occupational codes. For example, VA reported that it had 38 completed AVO transactions in fiscal year 2015, 9 of which were for Senior Executive Service employees. We compared the occupational codes that VA identified for each of the 38 completed AVO transactions to a list of VA's mission critical occupations that VA provided. Our analysis found that 10 of the 38 completed AVO transactions were for mission critical occupations, three of which were for Senior Executive Service employees. The three Senior Executive Service employees were in three occupational codes: medical officer, contracting, and nurse.
We also found that an additional 10 of the 38 completed AVO transactions were for core mission workers, which VA stated are occupations that perform the core work of an organization but are not on VA's mission critical occupations list. These employees were in two occupational codes: program management and management and program analysis. The remaining 18 completed AVO transactions were in seven different occupational codes, which included health system administration, social science, and realty. However, VA is not tracking data on whether AVO improves recruitment and retention of employees. VA officials stated AVO has been most beneficial for the recruitment and retention of hard-to-fill Senior Executive Service positions, including positions in locations that were rural, had a high cost of living, or had physician or nursing shortages. A position could also be hard to fill because of turnover trends and the availability of qualified talent. In addition, VA officials stated that a position's classification as a mission critical skills occupation is one factor VA uses in determining whether or not AVO should be offered and that they used AVO as an incentive for employees to move to other mission critical positions within the agency. However, VA is not tracking the data that would help it determine whether the use of AVO is improving recruitment and retention of employees specifically in hard-to-fill Senior Executive Service positions or mission critical skills occupations. Federal internal control standards suggest that management should obtain reliable data that can be used for effective monitoring. It is also important to collect the data necessary to track a program's effectiveness and to establish a baseline against which changes can be measured over time to assess the program in the future. In addition, reliable data are crucial for VA to manage its resources effectively.
We have previously reported that flat or declining budgets will continue to necessitate workforce adjustments across government. However, VA stated it is not tracking data on whether the use of AVO improves recruitment and retention of employees because it does not have the resources or capabilities to do so. As VA continues to seek ways to address recruitment and retention challenges, collecting such data could be useful in identifying trends and options for targeting certain occupations or skill sets that may improve the agency's use of home sales to support relocation. Without tracking these data, VA will be unable to determine whether the use of AVO is improving recruitment and retention. Employee relocation, including home sale assistance, can help agencies position skilled employees optimally and recruit and retain employees. VA's Inspector General found instances of officials misusing AVO to relocate for their personal benefit rather than in the interest of the government. VA has taken actions to strengthen AVO's internal controls, in part due to the Inspector General's report. VA believes that using AVO is beneficial specifically for hard-to-fill Senior Executive Service positions and uses AVO as an incentive for mission critical skills occupations. However, VA does not track data that can help it determine whether the use of AVO is improving retention and recruitment for these positions. As VA continues to seek ways to address recruitment and retention challenges, such data could be useful in identifying trends and options for targeting certain occupations or skill sets that may improve the agency's use of home sales to support relocation. Without tracking these data, VA will be unable to determine whether the use of AVO has improved recruitment and retention. We recommend that the Secretary of Veterans Affairs track data that can help VA determine whether AVO improves recruitment and retention.
We provided a draft of this report for review and comment to the Secretary of VA and the Acting Administrator of GSA. In its written comments, which are reproduced in appendix III, VA concurred with our recommendation and said it is working to improve reporting capabilities that will be beneficial in analyzing AVO data. GSA did not comment on the findings. VA and GSA also provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of VA, the Acting Administrator of GSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or lucasjudyj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this engagement were to review the administration of Appraised Value Offer (AVO) at the Department of Veterans Affairs (VA) and government-wide. Specifically, this report (1) describes federal agencies’ and VA’s use of AVO; (2) describes federal agencies’ key AVO internal controls, evaluations of whether AVO improved recruitment and retention of employees, and lessons learned; and (3) analyzes the extent to which VA has implemented additional internal controls for AVO since 2015 and has evaluated whether the use of AVO improved the recruitment and retention of employees. To address our objectives, we reviewed federal statutes and regulations related to relocation programs, conducted a literature review, and reviewed our prior work on relocation and mission-critical skills. We reviewed Title 5 of the U.S. Code related to relocation, including agency authority, roles, and responsibilities in administering AVO. 
We also reviewed the Federal Travel Regulation at Title 41 of the Code of Federal Regulations, and VA’s appropriations from fiscal years 2015 to 2017. We also conducted a literature review to find reports and articles about VA and federal use of AVO. We reviewed relevant documents from the General Services Administration (GSA) and Office of Personnel Management and interviewed officials from these agencies about their roles in agency use of relocation programs generally and on AVO specifically. We reviewed GSA’s guidance on agency relocation programs. We also interviewed GSA officials about their role managing the contract with relocation management companies that federal agencies can use and about providing agencies guidance and assistance in administering relocation programs. In addition, we interviewed Office of Personnel Management officials about their review and oversight role for agencies offering relocation programs. To describe how federal agencies and VA use AVO, we reviewed documents from VA and GSA and interviewed VA and GSA officials. We reviewed data on AVO transactions completed through GSA’s contract, which includes VA, in fiscal years 2012 to 2016. According to GSA officials, about 80 percent of federal agencies’ home sale relocation transactions occur through GSA’s contract with relocation management companies. GSA stated that the number of agencies that use its contract for home sales can differ from year to year. In addition, we reviewed VA’s data on completed AVO transactions in fiscal years 2012 to 2016. To assess the reliability of the GSA and VA data on completed AVO transactions, we interviewed GSA and VA officials and reviewed related documentation. We determined that the data were sufficiently reliable for the purposes of our objectives. To describe federal agencies’ key AVO internal controls, evaluations of effectiveness, and lessons learned, we developed a questionnaire. The questionnaire is reprinted in appendix II. 
To develop the internal controls section of the questionnaire (question 4), we used relevant federal internal control standards and the internal control weaknesses in the administration of relocation programs identified in the VA Inspector General's 2015 report on misuse of relocation program funds. In addition, we reviewed other agencies' inspector general reports on weaknesses in the administration of their relocation programs to identify key internal controls that would be relevant to the AVO process. We created a list of key controls relevant to AVO and asked the agencies to identify which internal controls they were using. We modified the list in response to feedback from pretests of our questionnaire. After we drafted the questionnaire, we conducted pretests on the phone with two officials from agencies that had used AVO but did not use GSA's contract with relocation management companies, as well as an official from GSA who was familiar with how agencies manage their AVO utilizing the contract. We conducted these tests with officials familiar with the AVO process to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the questionnaire was comprehensive and unbiased. We made changes to the content of the questionnaire after the three pretests, based on the feedback we received. We distributed the questionnaire we developed via email to the 28 agencies or components of agencies with completed home sale transactions through GSA's contract in fiscal year 2015 or 2016. We did not include VA when distributing the questionnaire. We selected this set of agencies for distribution of the questionnaire to remain consistent with our reporting of federal agencies' spending on AVO through GSA's contract. We emailed the questionnaire to recipients as a Word attachment on January 9, 2017.
We sent reminder emails to and called non-respondents. We also emailed secondary points of contact where available at non-responsive agencies. We closed the questionnaire on March 10, 2017. Twenty-four of 28 agencies completed the questionnaire, 20 of which had an operational AVO, which we interpreted to mean that AVO was being offered at the agency. Thus, we report on the 20 agencies’ responses to the questionnaire. We characterize the responses to the questionnaire as “most” when 12 to 19 agencies responded the same way. All questionnaire data were double key-entered into an electronic file in batches and were 100 percent verified. All data in the electronic file were verified again for completeness and accuracy. To assess the extent to which VA has implemented additional internal controls since 2015 and has evaluated whether the use of AVO has improved the recruitment and retention of employees, we analyzed documents from VA and interviewed VA officials. We assessed VA’s controls and evaluations using federal internal control standards. We reviewed VA human resources and financial policy documents about the administration of AVO with a focus on what changes had been made since fiscal year 2015. We interviewed VA officials who administer AVO about these changes and additional changes that are planned. We also reviewed the 2015 VA Inspector General report on relocation programs and a 2016 review of VA’s Permanent Change of Station program. We interviewed an official at the VA Inspector General’s office and other VA officials about the status of the recommendations. We conducted this performance audit from August 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

1. Appraised Value Offer (AVO) programs purchase a relocating employee's home based on the appraised value if the employee's home is not sold within a specified time period determined by an agency.
2. Mission critical skills occupations are occupations with one or more of the following: a staffing gap, in which an agency has an insufficient number of individuals to complete its work; and/or a competency gap, in which an agency has individuals without the appropriate skills, abilities, or behaviors to successfully perform the work.
3. Lessons learned are knowledge gained by both positive and negative experiences that, if applied, could result in a change.

1. Does your agency have an AVO program that is currently operational? Yes / No (IF "NO", PLEASE SKIP TO section 5, question 17)
2. Does your agency use the AVO program as a recruitment or retention incentive for mission critical skills occupations, among others? Yes / No (IF "NO", PLEASE SKIP to question 4)
3. Please provide two examples of mission critical skills occupations for which your agency has used the AVO program as a recruitment or retention incentive.
4. Does your agency have the following policies for its AVO program? (If need be, please review your agency's policies.)
5. What process or policy changes, if any, has your agency made to its AVO program in fiscal year 2015 or after? Please describe.
6. Who at your agency approves the decision to offer the AVO program as a recruitment or retention incentive? Please provide a position, not the name of an individual, and the person's office. For example, Chief of Relocation Incentives, Human Resources.
7. Has your agency examined whether the AVO program improved recruiting or retaining staff at any time during the fiscal years of 2012-2016?
Yes (IF "YES", PLEASE email us any documentation that your agency has on whether the AVO program improved recruiting or retaining staff, if possible) / No (IF "NO", PLEASE SKIP to question 9)
8. Did your agency find that the AVO program improved recruiting or retaining staff?
9. For what uses has the AVO program been most beneficial (for example, in certain locations or occupations)?
10. Has your agency identified any lessons learned that could be applied to your agency's AVO program? (Lessons learned are knowledge gained by both positive and negative experiences that, if applied, could result in a change.) Yes (IF "YES", PLEASE email us any documentation that your agency has on any lessons learned, if possible) / No (IF "NO", PLEASE SKIP to question 14)
11. Please describe any lessons learned your agency has identified.
12. What actions, if any, is your agency planning to take in response to the lessons learned?
13. What actions, if any, has your agency taken in response to the lessons learned?
14. Has GSA provided your agency with assistance or guidance for your AVO program? (Assistance is customized for your agency's needs, for example a phone call or an email in response to a question. Guidance is standardized and available to all agencies, for example through websites or conferences.)
15. What additional GSA assistance, if any, would be helpful for your agency to administer its AVO program?
16. What additional GSA guidance, if any, would be helpful for your agency to administer its AVO program?

In addition to the contact named above, Signora May (Assistant Director), Maya Chakko, Jehan Chase, Ellen Grady, Gina Hoover, Jessica Mausner, Cindy Saunders, Robert Robinson, and Erik Shive made key contributions to this report.

Employee relocation is a critical tool to help agencies position skilled employees optimally and for workforce recruitment, retention, and development.
Agencies can facilitate the sale of a relocating employee's home when the relocation of a specific employee to a different location is in the interest of the government. After a 2015 VA Inspector General report found that two VA employees abused AVO to relocate for their personal benefit, VA suspended AVO in October 2015 and reinstated it in fiscal year 2017. GAO was asked to review the administration of AVO at VA and government-wide. This report (1) describes federal agencies' and VA's use of AVO; (2) describes federal agencies' key AVO internal controls, evaluations, and lessons learned; and (3) analyzes the extent to which VA has implemented additional internal controls since 2015 for AVO and has evaluated whether AVO improved recruitment and retention. GAO analyzed agency documents and interviewed VA and GSA officials. GAO also distributed a questionnaire to 28 agencies or their components that had completed home sale transactions through GSA's contract in fiscal years 2015 or 2016. Twenty of these agencies responded that they had an operational AVO and provided information on the types of controls they use and any lessons learned. About 80 percent of federal agencies' home sale transactions to support employee relocations are through the contract that the General Services Administration (GSA) manages with relocation management companies. To support relocations, agencies can use an Appraised Value Offer (AVO). Under an AVO, the relocation management company buys a relocating employee's home for its appraised value if it cannot be sold during a stated period of time. From fiscal years 2012 to 2016, use of AVO varied for federal agencies, including the Department of Veterans Affairs (VA). For example, in fiscal year 2012, the federal agencies that used GSA's contract spent over $66 million on 936 homes and in fiscal year 2016 they spent over $42 million on 601 homes. 
In response to GAO's questionnaire (which was not sent to VA), most of the 20 agencies that were using AVO identified the following two types of critical internal controls as part of their AVO policies. First are transaction control activities, which are actions built directly into operational processes to support the entity in achieving its objectives and addressing related risks. For example, 18 agencies reported that the AVO approval process must be complete before payments are made. Second is assessing and responding to misconduct risks by considering how authority or position can be misused for personal gain. For example, 19 agencies reported that their AVO had safeguards to prevent it from being used for the personal gain of employees. An agency could also require an independent review of its permanent change of station program to ensure moves and expenses are appropriate and justified. While none of the 20 agencies reported they had evaluated whether AVO improved recruitment and retention of employees, 12 of the 20 agencies provided examples of how AVO had been beneficial. For example, four agencies noted the use of AVO had helped them recruit the most qualified employees or assisted with hard-to-fill positions. GSA officials also identified six good practices, based on lessons learned from their role (which includes managing the relocation contract), that they believe agencies should incorporate into their AVO. When GAO compared these good practices to VA's AVO process, it found that VA had adopted all of them. For example, VA offers pre-decision counseling, and VA employees work with the relocation company before their home is put on the market. Since fiscal year 2016, VA has strengthened the administration of AVO by implementing new policies that include internal controls, but it does not track data on whether AVO improves recruitment and retention.
For example, VA revised its policies to require approval prior to initiating recruitment efforts and to prohibit a relocating employee's participation from being approved by the employee's subordinates. VA officials stated AVO is beneficial for hard-to-fill Senior Executive Service positions and for mission critical skills occupations; however, VA does not track data to determine whether AVO improves the recruitment and retention of employees. VA officials stated the agency does not have the resources or capabilities to track such data. These data could be useful in identifying trends and options for targeting certain occupations or skill sets that may improve the agency's use of home sales to support relocation. Without tracking these data, VA will be unable to determine whether AVO has improved recruitment and retention. GAO recommends that VA track data to determine whether AVO improves recruitment and retention. VA concurred with the recommendation.
Funding requests for IRS are organized by appropriation account, which aligns broadly with its strategic goals to (1) deliver high-quality and timely service to reduce taxpayer burden and encourage voluntary compliance; and (2) effectively enforce the law to ensure compliance with tax responsibilities and combat fraud. IRS funds its IT investments from its Operations Support and Business Systems Modernization appropriation accounts. IRS's four appropriation accounts and the fiscal year 2016 appropriations are as follows:

Enforcement ($4.86 billion): Funds activities such as determining and collecting owed taxes, providing legal and litigation support, and conducting criminal investigations.

Operations Support ($3.75 billion): Funds activities including rent and facilities expenses, IRS-wide administration activities, and IT maintenance and security.

Taxpayer Services ($2.33 billion): Funds taxpayer service activities and programs, including prefiling assistance and education, filing and account services, and taxpayer advocacy services.

Business Systems Modernization ($290 million): Funds the planning and capital asset acquisition of IT to modernize IRS business systems.

In support of the President's budget request, agencies submit CJs to Congress to explain the request by outlining agency goals and objectives for the coming fiscal year and providing detailed descriptions of activities at the program, project, and activity level. Agencies are to prepare the justifications in accordance with the Office of Management and Budget's (OMB) Circular A-11, which provides guidance on the materials required for the agency's request and reflects Congress's needs for effective oversight. Since 2014, IRS has undertaken a multiyear effort to develop a vision for the future state of tax administration to fulfill its mission more efficiently and effectively.
To focus this effort, IRS narrowed 19 existing objectives to a core set of objectives that were used to develop six future state themes:
1. Facilitate voluntary compliance by empowering taxpayers with secure innovative tools and support.
2. Understand noncompliant taxpayer behavior and develop approaches to deter and change it.
3. Leverage and collaborate with external stakeholders.
4. Cultivate a well-equipped, diverse, skilled, and flexible workforce.
5. Select highest value work, using data analytics and a robust feedback loop.
6. Drive more agility, efficiency, and effectiveness in IRS operations.
These future state themes are in addition to Treasury’s department-wide focus on strengthening cybersecurity and eliminating identity theft. IRS reported that it adopted a new, more strategic approach to identify and select budget program priorities based on the future state themes. A user fee is charged to beneficiaries of certain goods or services provided by the federal government. In general, a user fee is related to a voluntary transaction or request for government goods or services above and beyond what is normally provided to the public. Although IRS services and operations are primarily funded through annual appropriations, IRS has the authority to supplement its appropriations with other resources, such as user fees. Until 1995, IRS user fee collections were deposited into the Treasury’s general fund. In 1995, Congress granted IRS authority to retain and obligate up to $119 million in user fee revenue to supplement its annual appropriation. In 2005, Congress removed the limit of $119 million and IRS was permitted to retain and obligate user fees that were implemented after September 30, 1994, or the portion of the fee that has been increased since September 30, 1994, for those fees that existed prior to that date. 
For example, fees for installment agreements—monthly payment plans for taxes owed—were established after September 30, 1994, and therefore IRS retains the full amount of the fee collected. However, the fee for enrolling as an actuary is divided between IRS and the general fund of the Treasury because this user fee existed prior to September 30, 1994. In fiscal year 2016, IRS expects to collect about $422 million in user fee revenue from sources such as installment agreements (about $155 million) and income verification express services (about $51 million). IRS deposits user fees that it is authorized to retain into its Miscellaneous Retained Fees Fund—an estimated $411 million in fiscal year 2016—before transferring funds to an appropriation account to be obligated. For fiscal year 2016, planned user fee obligations ($509 million) account for about 4 percent of IRS’s total obligations ($12,374 million). IRS’s user fee funds are available until expended (no-year funds) and funds that are not obligated in the fiscal year in which they are collected are carried over to the next fiscal year. IT comprises a significant portion of IRS’s budget and plays a critical role in enabling IRS to carry out its mission and responsibilities. IRS’s fiscal year 2016 appropriations include about $2.5 billion for IT investments; this represents 20 percent of the total IRS budget. IRS relies on IT systems to process tax returns, account for tax revenues collected, send bills for taxes owed, issue refunds, assist in the selection of tax returns for audit, and provide telecommunications services for all business activities, including providing taxpayers with toll-free access to tax information, among other things. IRS’s fiscal year 2016 appropriations increased by $290 million. IRS is required by law to allocate these funds across three areas: customer service representative level of service, cybersecurity, and identity theft prevention. 
IRS plans to use this funding to invest in (1) increased telephone level of service, including reduced wait times and improved performance on IRS’s Taxpayer Protection Program/Identity Theft Toll Free Line; (2) cybersecurity, including network security improvements, protection from unauthorized access, and enhanced insider threat detection; and (3) identity theft refund fraud prevention. As shown in table 1, cybersecurity was allocated almost one-third of the funding, solely from the Operations Support appropriation account. This funding includes $7 million (50 additional full-time equivalents) to maintain the cybersecurity workforce. Cybersecurity efforts are intended to protect taxpayer information and IRS’s systems, services, and data from internal and external cyber-related threats. Cybersecurity funding increased by 58 percent from fiscal years 2015 to 2016, primarily from increased appropriations as shown in figure 1. The President’s fiscal year 2017 budget also requests cybersecurity funds provided through a Treasury Cybersecurity Enhancement Account, which is intended to bolster Treasury’s overall cybersecurity posture. The request includes $62 million for IRS, including $54.7 million to directly support IRS cybersecurity efforts by securing data, improving continuous monitoring, and other initiatives. IRS’s Senior Executive Team recognized that the 19 strategic objectives listed in the 2014-2017 IRS Strategic Plan were too broad a set of priorities for IRS’s future state vision. In January 2015, the Senior Executive Team agreed on six enterprise themes to support the future state vision that aligned with a subset of the strategic objectives and were informed by the needs of the business units. IRS modified its approach to prioritizing programs and initiatives for requested funding increases in fiscal year 2017. 
In January 2015, the Office of Planning, Programming and Audit Oversight asked the operating divisions to submit program increase proposals they believed necessary for IRS to achieve its priorities. This office reviewed the proposals to ensure they aligned with IRS’s strategic plan and submitted them to the Senior Executive Team for consideration. The Senior Executive Team prioritized the proposed program increases through a voting process to ensure that they aligned with IRS’s strategy and resource needs. According to IRS officials, funding increases were requested for fewer programs as a result of this new approach. Specifically, in the fiscal year 2017 CJ, increases were requested for 14 programs, whereas increases were requested for 25 programs in the fiscal year 2016 CJ. In its fiscal year 2017 CJ, IRS explained how requests for increased funding were linked to appropriations accounts, but it did not provide this information for the amount requested to maintain current funding levels. IRS linked each requested program increase to a future state theme and included details on how much of the requested increase would be funded by each of the four appropriation accounts. Figure 2 shows each of the 14 program increase requests organized by theme or focus area, with funding requested broken out by appropriation account. Including data on the appropriation account provides additional transparency and improves the quality of the information available to Congress for budget deliberations. However, IRS did not provide data on how much it is currently spending in support of each theme. As a result, it is unclear what amount of funding would be required to maintain current levels by theme. According to officials, IRS is working to develop such data, but officials cited technical challenges with data availability and comparability as well as challenges identifying spending for specific themes, some of which are worded broadly. 
OMB Circular A-11 requires that an agency prepare justifications in concise, specific terms and cover all programs and activities of the agency. Additionally, the guidance specifies that an agency should consult with relevant congressional appropriations committees to confirm their support for modifications to the CJ’s format. In adopting a new approach by prioritizing a subset of objectives, IRS modified how its budget data were organized, but did not clarify how spending by themes relates to appropriation accounts. Congressional appropriations staff from both the majority and minority with whom we spoke told us they wanted more information on base spending by theme and account. Such information is important to ensure transparency on the current funding levels to assist Congress in making informed budget decisions. IRS has permanent, indefinite authority to obligate user fee collections. This authority allows the agency flexibility in the use of these funds. While IRS does not need congressional approval of its user fee spend plan, it must obtain approval from Treasury and OMB. Additionally, for fiscal year 2016, IRS was directed to wait 30 days following the submission of the user fee spend plan to Congress before obligating these funds. IRS’s Chief Financial Officer has oversight responsibilities for the initial assessment, updates, collection, and review of user fees. While the Chief Financial Officer does not provide the services for which user fees are charged, the office is responsible for ensuring that user fees are appropriately collected, deposited, and reported. As seen in table 2, IRS plans to allocate $509 million of user fee revenues in fiscal year 2016 across three appropriation accounts. This represents 4 percent of IRS’s total obligations in fiscal year 2016 ($12.37 billion). IRS allocates user fee revenues as part of its budget execution process and began planning for fiscal year 2016 allocations in April 2015. 
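The user fee shares reported here follow from simple arithmetic; a quick check of the figures cited in this report for fiscal years 2011 and 2016 (user fee obligations of $285 million and $509 million against total obligations of $12,777 million and $12,374 million, respectively):

```python
# Recompute the user fee share of IRS's total obligations.
# Dollar figures (in millions) are taken directly from this report.
user_fees = {2011: 285, 2016: 509}               # user fee obligations
total_obligations = {2011: 12_777, 2016: 12_374}  # total IRS obligations

for year in (2011, 2016):
    share = user_fees[year] / total_obligations[year] * 100
    print(f"FY{year}: {share:.1f}% of total obligations")
# FY2011: 2.2% of total obligations
# FY2016: 4.1% of total obligations ("about 4 percent")
```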
IRS allocates user fee revenue to fund agency priorities for which other funding was unavailable, but does not generally consider the source of funds when making these decisions. Budget officials briefed members of the Senior Executive Team and the IRS Commissioner multiple times between September 2015 and January 2016, including on preliminary estimates of user fee spending. According to IRS officials, IRS sends a draft of the user fee spend plan to Treasury within 30 days of its budget being enacted and finalizes the plan within 60 days. However, IRS began fiscal year 2016 operating under a continuing resolution until December, so the user fee spend plan was submitted to Congress in February 2016. In May 2016, IRS announced that it is revising a number of existing user fees to more closely match the cost of providing the service and implementing new user fees for some additional services as a result of its 2015 Biennial Fee Review. Agencies are required to review, on a biennial basis, the fees, royalties, rents, and other charges for services and things of value and make recommendations on revising those charges to reflect costs incurred. IRS expects total annual user fee revenue to increase by $128 million when the fees are fully implemented. Officials said they plan to continue their current policy regarding how user fee revenue is allocated. According to IRS officials, the planning process for allocating user fee revenue has been consistent between fiscal years 2011 and 2016. However, according to IRS officials, changes in appropriation levels and the cost of implementing mandates have resulted in a shift in how user fee revenue has been allocated. As shown in table 3 and the sidebar, both the amount and allocation of user fee funds shifted between fiscal years 2011 and 2016. 
As we reported in June 2015, IRS management decided to allocate more user fee funds to Operations Support in fiscal year 2015, in part because of changes in the amount appropriated to its accounts and the cost of implementing mandates, such as the Patient Protection and Affordable Care Act (PPACA), which is largely funded by user fee revenue and Operations Support funds. Of the $1.6 billion spent on PPACA implementation between fiscal years 2010 and 2015, $465 million was user fee revenue (29 percent) and $467 million was annually appropriated Operations Support funds (29 percent). The fiscal year 2015 appropriation for Operations Support was $161 million (4.2 percent) less than fiscal year 2014, while Taxpayer Services was not reduced during that time frame. In fiscal year 2015, IRS obligated $210 million in user fee revenue and $154 million from the Operations Support account for PPACA implementation. In addition to changes in the allocation of user fee funds across appropriation accounts, IRS has also changed the amount it retains and the amount it carries over to the next fiscal year. The amount of user fee revenue that IRS collected and retained increased from $324 million in fiscal year 2011 to $391 million in fiscal year 2015. As previously mentioned, IRS is implementing changes to user fees which it expects to generate an additional $128 million annually, all of which IRS is authorized to retain and spend. In fiscal year 2011, user fee obligations ($285 million) accounted for 2.2 percent of IRS’s total obligations ($12,777 million). For fiscal year 2016, planned user fee obligations ($509 million) account for about 4 percent of IRS’s total obligations ($12,374 million).
Carryover Balances
IRS can carry over any unexpended fee collections—those funds left over after IRS transfers fee collections to supplement its appropriations—for use in subsequent years. 
We have suggested that carryovers are one way agencies can establish reserves to sustain operations in the event of a sharp downturn in user fee collections or other events. See GAO-08-386SP for additional information on user fee design. These changes in the amount of funds that IRS retains and obligates have also affected the amount it carries over from one fiscal year to the next. As seen in table 4, IRS’s carryover balance has declined in recent years from about $327 million at the end of fiscal year 2011 to about $93 million planned for the end of fiscal year 2016. This is potentially significant because, as we have previously reported, carryover balances can help agencies to sustain operations in the event of a sharp downturn in user fee collections or other events (see sidebar). However, this is less of a consideration for programs that could also be funded through annual appropriations, as is the case with IRS. In briefings to the Senior Executive Team and to the IRS Commissioner, officials identified the low carryover balance from fiscal year 2016 to 2017 as a key risk because it decreases the funds available for future fiscal years. As part of budget deliberations, officials considered the tradeoffs between spending funds on priorities in the current budget year and maintaining a reserve for future years. IT is a significant portion—about 21 percent—of the total IRS budget request for fiscal year 2017. The President requested $2.8 billion for IRS’s IT investments in fiscal year 2017, an increase of about 15 percent. This includes a $391 million (65 percent) increase for non-major IT investments and a $33 million (2 percent) decrease for major IT investments as shown in figure 3. IT investments are funded through the Operations Support and the Business Systems Modernization appropriation accounts and user fees. 
These investments generally support (1) day-to-day operations (which include operations and maintenance, as well as development, modernization, and enhancements to existing systems); and (2) modernization efforts in support of IRS’s goals. For IRS’s 23 major IT investments, the amount requested for fiscal year 2017 is $1.8 billion, which is funded primarily through the Operations Support appropriation account as shown in figure 4. In previous years, IRS reported data on its IT investments in the CJ in a Summary of Capital Investments and Portfolio of Major Investments. The Summary of Capital Investments listed major and non-major IT investment totals and major IT investments by funding source. The Portfolio of Major Investments included a comprehensive list and description of major IT investments. For fiscal year 2017, IRS moved the Summary of Capital Investments from the CJ to a link on Treasury’s website that was accessible 30 days following the release of the President’s budget. This website also includes a Capital Investment Plan, similar to the Portfolio of Major Investments. Treasury provides capital investment information on its website for each Treasury bureau. Treasury is required to submit a Capital Investment Plan to Congress no later than 30 days following the submission of the President’s budget. According to IRS and Treasury officials, Treasury asked IRS to move the capital investment information from the CJ to a separate website to give Treasury additional time to review the data to improve reliability. According to IRS officials, this approach also eliminated the possibility of administrative errors in transcribing data from one database to another. While the move delayed the availability of the IRS information, the timing was consistent with capital investment reporting by other Treasury bureaus. 
In the fiscal year 2017 IRS Summary of Capital Investments, Treasury reported the non-major IT investment total inaccurately for the 3 fiscal years presented (fiscal years 2015 actual, 2016 enacted, and 2017 requested). Treasury underreported the amounts by about $4 million (less than 1 percent) in each fiscal year. Consequently, Treasury also reported the IT total for major and non-major IT investments inaccurately. According to IRS and Treasury officials, this discrepancy was the result of an error introduced during the 30-day Treasury review process. IRS enters IT investment information into Treasury’s SharePoint Investment Knowledge Exchange (SPIKE) system. IRS and Treasury review and monitor the information before the Capital Investment Plan and the Summary of Capital Investments reports are generated. According to IRS officials, for fiscal year 2017, Treasury took a more active role in reviewing the information submitted by IRS on IT investments. During the review process, manual adjustments in SPIKE caused an error that resulted in two rows of non-major IT investments being excluded from the non-major IT total. When we asked IRS about the discrepancy, IRS raised the issue with Treasury, which subsequently corrected the error in SPIKE and revised the Summary of Capital Investments on Treasury’s website. Stronger internal controls, such as effective monitoring of Treasury-generated IRS information technology investment reports, could help prevent such mistakes. According to federal internal control standards, ongoing monitoring should occur in the course of normal operations. Monitoring should be performed continually and be ingrained in the agency’s operations. It includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. 
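The monitoring control described here amounts to a simple reconciliation: independently total the investment line items and compare the result to the published total, so that a row dropped by a manual adjustment is flagged before a report is released. A minimal sketch of such a check follows; the investment names and dollar amounts are hypothetical illustrations, not IRS data:

```python
# Reconciliation check: recompute a report total from its line items and
# flag any discrepancy, such as rows excluded by a manual adjustment.
# All investment names and amounts below are hypothetical.

def reconcile(line_items, reported_total):
    """Return the difference between the recomputed and reported totals."""
    recomputed = sum(line_items.values())
    return recomputed - reported_total

non_major_it = {            # hypothetical non-major IT investments ($ millions)
    "Investment A": 120.0,
    "Investment B": 85.5,
    "Investment C": 4.0,    # imagine this row was dropped from the total
}
reported_total = 205.5      # total as published, missing Investment C

diff = reconcile(non_major_it, reported_total)
if diff != 0:
    print(f"Discrepancy of ${diff:.1f} million: line items do not sum to the reported total")
```

Run routinely on every generated report, a check like this catches the class of error described above (two excluded rows) without relying on a line-by-line manual review.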
According to Treasury officials, Treasury is aware of the need to reduce manual corrections by making improvements to SPIKE, but has yet to take steps to fully ensure that such errors will not occur in future budget cycles. In June 2015, we reported on a separate ongoing monitoring internal control issue, which resulted in IRS providing inaccurate data on actual obligations to date for major IT investments in its fiscal year 2016 CJ. As a result of our recommendation that IRS implement internal controls to ensure the accuracy of information on major IT investments reported in the annual CJ, IRS took additional steps when preparing the fiscal year 2017 CJ. This included performing an operational review to examine the existing procedures. In addition, for the fiscal year 2017 IT investment reports, IRS implemented processes to ensure accurate and reliable data such as comparing its IT data maintained on control charts to the data it enters in SPIKE. IRS performed this reconciliation individually for each IT investment. However, IRS reported that it did not review the Summary of Capital Investments generated by Treasury in its entirety for accuracy after it was generated from SPIKE. Without effectively monitoring IT investment information, Treasury risks continued errors in the information it reports on its IT investments. Such errors could negatively affect Congress’s ability to obtain accurate information on IT investments needed to inform future budget decisions and oversight. IRS intended to improve its budget process by aligning its spending priorities with themes supporting its future state vision, but the effort remains a work in progress. For fiscal year 2017, IRS did not make clear how spending by themes relates to appropriation accounts and how this advances IRS’s priorities; this linkage is important to the clarity and transparency of IRS’s budget presentation. 
Appropriations staff told us this information would help them make informed budget and oversight decisions. While IRS faces data challenges that may limit its ability to fully link funding requests to appropriation accounts, providing these linkages to the extent feasible will improve transparency and provide Congress with information to assist in making informed decisions. Additionally, accurate and timely budget data are key to effective congressional oversight. Since IT is such a significant portion—about 21 percent—of the total budget request for IRS, it is particularly important to have robust controls in place to ensure the data’s accuracy. To enhance the budget process and to improve transparency, we recommend that the Commissioner of Internal Revenue, to the extent feasible, ensure that the CJ includes data by appropriation account on the amount of funding requested to maintain current services for each future state theme. As Treasury works with IRS to improve the quality and accuracy of budget data, we recommend that the Secretary of the Treasury ensure sufficient controls are in place to make certain that the information technology investment reports generated from SPIKE are accurate. This includes, for example, taking steps to reduce the need for manual corrections to the data. We provided a draft of this report to the Commissioner of Internal Revenue and the Secretary of the Treasury for comment. In written comments reproduced in appendix IV, IRS agreed with the recommendation related to the presentation of data in the Congressional Justification. IRS plans to provide a robust description of planned activities and outcomes for funding requested to maintain current services. Given IRS’s emphasis on the future state, budget data on the amount requested to maintain current services for each theme is particularly valuable. In a separate email response, Treasury agreed with the recommendation related to information technology internal controls. 
Treasury noted that it plans to implement improvements to SPIKE in the next few months that would address our recommendation by avoiding the need for manual corrections moving forward. IRS and Treasury also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the Chairman and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
In the fiscal year 2017 Capital Investment Plan, the Internal Revenue Service (IRS) did not report life-cycle costs of its major information technology (IT) investments since most investments were considered ongoing with an undetermined useful life. Instead, IRS provided total anticipated outlays for the investments through fiscal year 2021.
Recommendations
Develop a quantitative measure of scope
At a minimum, develop a quantitative measure of scope for its major information technology (IT) investments, in order to have complete information on the performance of these investments.
Benefit
A quantitative measure of scope is a good practice as it provides an objective measure of whether an investment delivered the functionality that was paid for.
Status
The Internal Revenue Service’s (IRS) position on this recommendation has changed over time. 
IRS agreed with the recommendation we made in June 2012, but stated it had other methods in place to document delivered functionality of a project throughout the life cycle. However, these methods did not provide a quantitative measure of performance. In responding to a related report we issued in April 2014, IRS maintained its position and therefore did not take action to address the recommendation. In the December 2015 quarterly report on information technology to Congress, however, IRS proposed a solution for one investment: specifically, it listed specific “scope elements” for the Return Review Program investment and identified the elements it had implemented to date. In addition, during our recent review of IRS’s major IT investments, we found that IRS had developed a quantitative measure of scope for two investments, although we noted that the measure could be improved by accounting for the work performed by IRS staff in accordance with best practices (GAO-16-545). The measure used in the quarterly report to Congress and the one we noted during our June 2016 review are positive steps. Similar, continued efforts by IRS would help fully address our recommendation to develop a quantitative measure of scope for all major IT investments.
Developing a cost estimate that meets additional best practices will foster accountability, improve insight, and provide objective information.
Capture actual costs and use them as a basis for future updates.
Explain why variances occurred between the current estimate and previous estimates.
IRS agreed with all of the actions recommended, except using earned value management and validating the PPACA cost estimate by preparing a second, independent cost estimate, in part because of cost and burden. In February 2015, IRS released version 3 of the PPACA cost estimate, which reflects best practices to a greater extent. However, four elements of this recommendation remain open. 
IRS improved the variance from the prior estimate, but IRS has not improved its practices related to the use of earned value management, risk and uncertainty analysis, or validating the estimate. IRS released version 4 of the cost estimate in April 2016, and we will continue to monitor IRS’s progress.
with best practices, and develop and document plans to address risks.
Validate the original cost estimate by preparing a second, independent cost estimate.
Develop a long-term strategy
Develop a long-term strategy to address operations amidst an uncertain budget environment. As part of the strategy, IRS should take steps to improve its efficiency, including:
Reexamine programs, related processes, and organizational structures to determine whether they are effectively and efficiently achieving IRS’s mission.
Streamline or consolidate management or operational processes and functions to make them more cost effective.
Developing a long-term strategy will enhance budget planning and improve decision making and accountability.
IRS agreed with our recommendation and is taking steps to implement it. IRS has adopted a new, more strategic approach to identify and select budget program priorities based on future state themes for the fiscal year 2017 budget request. As part of its planning process, IRS prioritized a subset of five objectives for action from the 19 objectives identified in the IRS 2014-2017 Strategic Plan. Operating division officials submitted requests for resources that they thought were necessary to achieve the five priority objectives, and they identified initiatives that were the highest priority to IRS. To guide IRS toward the future state concept and assess progress along the way, IRS has identified enterprise goals, but as of May 2016, these goals were still under development. 
Enhance calculation and use of actual return on investment (ROI) data
Calculate actual ROI for implemented initiatives, compare the actual ROI to projected ROI, and provide the comparison to budget decision makers for initiatives where IRS allocated resources. Use actual ROI calculations as part of resource allocation decisions.
Enhanced calculation of ROI provides greater insight on the productivity of a program and can inform decision making.
Status
allocation decisions within the correspondence exam program. IRS plans to use these estimates to inform future examination plans as we recommended in June 2014, but considerable work remains in this long-term effort. In July 2015, IRS officials reported there is no timeline for full implementation. In June 2016, IRS officials confirmed that projected revenue will be considered in investment decision making as part of fiscal year 2018 enterprise planning guidance, but did not report any progress in using actual ROI data.
In addition to the individual named above, the following staff made key contributions to this report: Thomas Gilbert, Assistant Director; Melissa King, Analyst-in-Charge; Charles Fox; Robert Gebhart; Carol Henn; Laurie King; Edward Nannenhorn; Sabine Paul; Bradley Roach; Robert Robinson; Cynthia M. Saunders; Andrew J. Stephens; and Elwood White. | Funding the federal government depends largely upon IRS's ability to collect taxes, including providing taxpayer services that make voluntary compliance easier and enforcing tax laws to ensure compliance with tax responsibilities. For fiscal year 2017, the President requested $12.3 billion in appropriations for IRS; the request is almost $1 billion (9 percent) more than IRS's fiscal year 2016 appropriation. Because of the size of IRS's budget and the importance of its service and compliance programs for all taxpayers, GAO was asked to review the fiscal year 2017 budget request for IRS. In March 2016, GAO reported interim information on IRS's budget. 
This report assesses (1) the extent to which IRS's fiscal year 2017 CJ presents data on requested funding levels by appropriation accounts and in alignment with agency priorities, (2) IRS's management and allocation of user fees, and (3) the costs and reporting of IRS's IT investments. GAO reviewed the fiscal year 2017 CJ, documentation on IRS's vision for the future state, IRS budget plans, IT investment reports, and IRS budget data for fiscal years 2011 to 2017, interviewed IRS officials, and met with congressional appropriations staff to discuss the information they want included in the CJ.
Congressional justification data. The Internal Revenue Service (IRS) has taken steps to manage its budget more strategically but did not make linkages between priorities and appropriations accounts. IRS prioritized a subset of its 19 strategic objectives for action and established six themes that represent its “future state” vision for tax administration. In the fiscal year 2017 congressional justification (CJ), IRS linked requests for increased funding to themes and included details on how much would be funded by each appropriation account. However, IRS did not provide data on how much it spends in support of each theme or the amount of funding needed to maintain current levels by theme. IRS is working to develop such data, but officials cited challenges with data availability and tracking spending by themes. Such information would provide transparency on current funding levels, which would assist Congress in making informed budget decisions.
User fee spending. IRS has permanent, indefinite authority to obligate and spend user fee collections, which it obligates as part of its budget execution process. IRS's user fee spend plan must be approved by both the Department of the Treasury (Treasury) and the Office of Management and Budget. IRS was directed to wait 30 days following the submission of the user fee spend plan to Congress before obligating funds. 
As seen in the table, planned user fee spending increased more than $220 million (79 percent) between fiscal years 2011 and 2016. Of the $509 million in planned user fee obligations in fiscal year 2016, the largest amounts are for the Patient Protection and Affordable Care Act ($204 million) and the Foreign Account Tax Compliance Act ($62 million).

Information technology data. The President's budget requested $2.8 billion for IRS's information technology (IT) investments, which accounted for 21 percent of IRS's budget request for fiscal year 2017. Instead of presenting its IT investment data in its CJ, IRS moved the data to a Treasury website. This is consistent with other Treasury bureaus and was intended to provide time for an enhanced data review process. However, despite the review process, Treasury did not detect an error that resulted in IRS underreporting its total IT investments by about $4 million. According to federal internal control standards, ongoing monitoring should occur in the course of normal operations. Data errors could negatively affect Congress's ability to make budget decisions and provide oversight. GAO recommends that IRS ensure the CJ includes data on the amount of funding requested to maintain current services for each future state theme, and that Treasury ensure the accuracy of Treasury-generated IRS IT investment reports. IRS and Treasury agreed with the recommendations.
Drinking water can come from either groundwater sources, via wells, or from surface water sources such as rivers, lakes, and streams. All sources of drinking water contain some naturally occurring contaminants. As water flows in streams, sits in lakes, and filters through layers of soil and rock in the ground, it dissolves or absorbs the substances that it touches. Some of these contaminants are harmless, but others can pose a threat to drinking water, such as improperly disposed-of chemicals, pesticides, and certain naturally occurring substances. Likewise, drinking water that is not properly treated or disinfected, or that travels through an improperly maintained water system, may pose a health risk. However, the presence of contaminants does not necessarily indicate that water poses a health risk—all drinking water may reasonably be expected to contain at least small amounts of some contaminants. As of July 2006, EPA had set standards for approximately 90 contaminants in drinking water that may pose a risk to human health. According to EPA, water that contains small amounts of these contaminants, as long as they are below EPA’s standards, is safe to drink. However, EPA notes that people with severely compromised immune systems and children may be more vulnerable to contaminants in drinking water than the general population. Camp Lejeune began operations in the 1940s. The base covers approximately 233 square miles in Onslow County, North Carolina, and includes training schools for infantry, engineers, service support, and medical support, as well as a Naval Hospital and Naval Dental Center. Base housing at Camp Lejeune consists of enlisted family housing, officer family housing, and bachelor housing, which consists of barracks for unmarried service personnel. The base has nine family housing areas, and families live in base housing for an average of 2 years. Additionally, schools, day care centers, and administrative offices are located on the base.
Approximately 54,000 people currently live and work at Camp Lejeune, including about 43,000 active duty personnel and 11,000 military dependents and civilian employees. In the 1980s, Camp Lejeune obtained its drinking water from as many as eight water systems, which were fed by more than 100 individual wells that pumped water from a freshwater aquifer located approximately 180 feet below the ground. Each of Camp Lejeune’s water systems included wells, a water treatment plant, reservoirs, elevated storage tanks, and distribution lines to provide the treated water to the systems’ respective service areas. Drinking water at Camp Lejeune is produced by combining and treating groundwater from multiple individual wells, which are rotated on and off so that not all wells are providing water to the system at any given time. Water is treated in order to remove minerals and particles and to protect against microbial contamination. (See fig. 1 for a description of how a Camp Lejeune water system operates.) From the 1970s through 1987, the Hadnot Point, Tarawa Terrace, Holcomb Boulevard, and Rifle Range water systems provided drinking water to most of Camp Lejeune’s housing areas. (See fig. 2 for the locations of these water service areas.) The water treatment plants for the Hadnot Point and Tarawa Terrace water systems were constructed during the 1940s and 1950s. The Rifle Range water system was constructed in 1965. The water treatment plant for the Holcomb Boulevard water system began operating at Camp Lejeune in 1972; prior to this time, the Hadnot Point water system provided water to the Holcomb Boulevard service area. In the 1980s, each of these four systems had between 4 and 35 wells that could provide water to their respective service areas. In 1987 the Tarawa Terrace water treatment plant was shut down and the Holcomb Boulevard water distribution system was expanded to include the Tarawa Terrace water service area.
Generally, housing units served by the Tarawa Terrace and Holcomb Boulevard water systems consisted of family housing, which included single- and multifamily homes and housing in trailer parks. Housing units served by the Hadnot Point water system included mainly bachelor housing with limited family housing. The housing area served by the Rifle Range water system included both family housing and bachelor housing. Based on available housing data for the late 1970s and the 1980s, the estimated annual averages of the number of people living in family housing units served by these water systems at that time were: 5,814 people in units served by the Tarawa Terrace water system, 6,347 people in units served by the Holcomb Boulevard water system, 71 people in units served by the Hadnot Point water system, and 14 people in units served by the Rifle Range water system. In addition to serving housing units, all four water systems provided water to base administrative offices. The Tarawa Terrace, Holcomb Boulevard, and Hadnot Point water systems also served schools and other recreational areas. Additionally, the Hadnot Point water system also served an industrial area and the base hospital, and the Rifle Range water system also served an area used for weapons training. The Department of the Navy consists of the Navy and the Marine Corps; consequently, certain Navy entities provide support functions for Marine Corps bases, such as Camp Lejeune. Two entities provide support for environmental issues: The Naval Facilities Engineering Command began providing environmental support for bases in the 1970s. The Naval Facilities Engineering Command, Atlantic Division (LANTDIV) provides environmental support for Navy and Marine Corps bases in the Atlantic and mid-Atlantic regions of the United States. 
For example, LANTDIV officials work with Camp Lejeune officials to establish environmental cleanup priorities and cost estimates and to allocate funding to ensure compliance with state and federal environmental regulations. The Navy Environmental Health Center (NEHC) has provided environmental and public health consultation services for Navy and Marine Corps environmental cleanup sites since 1991. NEHC is also designated as the technical liaison between Navy and Marine Corps installations and ATSDR, and as a part of this responsibility, reviews and comments on all ATSDR reports written for Navy and Marine Corps sites prior to publication. Prior to 1991, no agency was designated to provide public health consultation services for Navy and Marine Corps sites. In 1980, the Department of the Navy established the Navy Assessment and Control of Installation Pollutants (NACIP) program to identify, assess, and control environmental contamination from past hazardous material storage, transfer, processing, and disposal operations. Under the NACIP program, initial assessment studies were conducted to determine the potential for environmental contamination at Navy and Marine Corps bases. If, as a result of the study, contamination was suspected, a follow-up confirmation study and corrective measures were initiated. In 1986 the Navy replaced its NACIP program with the Installation Restoration Program. The purpose of the Installation Restoration Program is to reduce, in a cost-effective manner, the risk to human health and the environment from past waste disposal operations and hazardous material spills at Navy and Marine Corps bases. Cleanup is done in partnership with EPA, state regulatory agencies, and members of the community. EPA was established in 1970 to consolidate in one agency a variety of federal research, monitoring, standard-setting, and enforcement activities to ensure environmental protection.
EPA’s primary roles and functions include developing and enforcing environmental regulations; conducting environmental research; providing financial assistance to states, educational institutions, and other nonprofit entities that conduct environmental research; and furthering public environmental education. Congress passed the Safe Drinking Water Act in 1974 to protect the public’s health by regulating the nation’s public drinking water supply. The Safe Drinking Water Act, as amended, is the key federal law protecting public water supplies from harmful contaminants. For example, the act requires that all public water systems conduct routine tests of treated water to ensure that the water is safe to drink. Required water testing frequencies vary and range from weekly testing for some contaminants to testing every 3 years for other contaminants. The act also established a federal-state arrangement in which states may be delegated primary implementation and enforcement authority for the drinking water program. For contaminants that are known or anticipated to occur in public water systems and that EPA determines may have an adverse impact on health, the act requires EPA to set a nonenforceable maximum contaminant level goal, at which no known or anticipated adverse health effects occur and that allows an adequate margin of safety. Once the maximum contaminant level goal is established, EPA sets an enforceable standard for water as it leaves the treatment plant, the maximum contaminant level. A maximum contaminant level is the maximum permissible level of a contaminant in water delivered to any user of a public water system. The maximum contaminant level must be set as close to the goal as is feasible using the best technology or other means available, taking costs into consideration. 
The North Carolina Department of Environment and Natural Resources and its predecessors have had primary responsibility for implementation of the Safe Drinking Water Act in North Carolina since 1980. In 1979, EPA promulgated final regulations applicable to certain community water systems establishing the maximum contaminant levels for the control of TTHMs, which are a type of VOC formed when disinfectants—used to control disease-causing contaminants in drinking water—react with naturally occurring organic matter in water. The regulations required water systems that served more than 10,000 people and added a disinfectant as part of the drinking water treatment process to begin mandatory water testing for TTHMs by November 1982 and to comply with the maximum contaminant level by November 1983. TCE and PCE were not among the contaminants included in these regulations. In 1979 and 1980 EPA issued nonenforceable guidance establishing “suggested no adverse response levels” for TCE and PCE in drinking water and in 1980 issued “suggested action guidance” for PCE in drinking water. Suggested no adverse response levels provided EPA’s estimate of the short- and long-term exposure to TCE and PCE in drinking water for which no adverse response would be observed and described the known information about possible health risks for these chemicals. Suggested action guidance recommended remedial actions within certain time periods when concentrations of contaminants exceeded specific levels. Suggested action guidance was issued for PCE related to drinking water contamination from coated asbestos-cement pipes, which were used in water distribution lines. The initial regulation of TCE and PCE under the Safe Drinking Water Act began in 1989 and 1992, respectively, when maximum contaminant levels became effective for these contaminants.
(See table 1 for the suggested no adverse response levels, suggested action guidance, and maximum contaminant level regulations for TCE and PCE.) The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 established what is known as the Superfund program to clean up highly contaminated waste sites and address the threats that these sites pose to human health and the environment, and assigned responsibility to EPA for administering the program. CERCLA was amended by the Superfund Amendments and Reauthorization Act (SARA) of 1986. Among other things, SARA requires that federal agencies, including DOD, that own or operate facilities on EPA’s CERCLA list of seriously contaminated sites, known as the National Priorities List, enter into an interagency agreement with EPA. The agreement is to specify what cleanup activities, if any, are required, and to set priorities for carrying out those activities. SARA also established the Defense Environmental Restoration Program, through which DOD conducts environmental cleanup activities at military installations. Under the environmental restoration program, DOD’s activities addressing hazardous substances, pollutants, or contaminants are required to be carried out consistent with the provisions of CERCLA governing environmental cleanups at federal facilities. Based on environmental contamination at various areas on the base, Camp Lejeune was designated as a National Priorities List site in 1989. EPA, the Department of the Navy, and the state of North Carolina entered into a Federal Facilities Agreement concerning cleanup of Camp Lejeune with an effective date of March 1, 1991. ATSDR was created by CERCLA and established within the Public Health Service of HHS in April 1983 to carry out Superfund’s health-related activities. These activities include conducting health studies, laboratory projects, and chemical testing to determine relationships between exposure to toxic substances and illness. 
In 1986, SARA expanded ATSDR’s responsibilities to include, among other things, public health assessments, toxicological databases, information dissemination, and medical education. SARA requires that ATSDR conduct a public health assessment at each site proposed for or on the National Priorities List, and that ATSDR conduct additional follow-up health studies if needed. Potentially responsible parties, including federal agencies, are liable for the costs of any health assessment or health effects study carried out by ATSDR. SARA requires that ATSDR and DOD enter into a memorandum of understanding to set forth the authorities, responsibilities, and procedures between DOD and ATSDR for conducting public health activities at DOD Superfund sites. Based on the memorandum of understanding signed between ATSDR and DOD, ATSDR is required to submit an annual plan of work to DOD, in which it must describe the public health activities it plans to conduct at DOD sites in the following fiscal year, as well as the amount of funding required to conduct these activities. After the annual plan of work has been submitted, DOD has 45 days to respond and negotiate the scope of work to be conducted by ATSDR. The memorandum of understanding states that DOD must seek sufficient funding through the DOD budgetary process to carry out the work agreed upon. According to ATSDR’s Toxicological Profile, inhaling small amounts of TCE may cause headaches, lung irritation, poor coordination, and difficulty concentrating, and inhaling or drinking liquids containing high levels of TCE may cause nervous system effects, liver and lung damage, abnormal heartbeat, coma, or possibly death. ATSDR also notes that some animal studies suggest that high levels of TCE may cause liver, kidney, or lung cancer, and some studies of people exposed over long periods to high levels of TCE in drinking water or workplace air have shown an increased risk of cancer.
ATSDR’s Toxicological Profile notes that the National Toxicology Program has determined that TCE is reasonably anticipated to be a human carcinogen and the International Agency for Research on Cancer has determined that TCE is probably carcinogenic to humans. Unlike TCE, the health effects of inhaling or drinking liquids containing low levels of PCE are unknown, according to ATSDR. However, ATSDR reports that exposure to very high concentrations of PCE may cause dizziness, headaches, sleepiness, confusion, nausea, difficulty in speaking and walking, unconsciousness, or death. HHS has determined that PCE may reasonably be anticipated to be a carcinogen. Efforts to identify and address past drinking water contamination at Camp Lejeune began in the 1980s, when the Navy initiated water testing at Camp Lejeune. In 1980, one water test identified the presence of VOCs and a separate test indicated contamination by unidentified chemicals. In 1982 and 1983, water monitoring for TTHMs by a laboratory contracted by Camp Lejeune led to the identification of TCE and PCE as the contaminants in two water systems at Camp Lejeune. Sampling results indicated that the levels of TCE and PCE varied. Former Camp Lejeune environmental officials said they did not take additional steps to address the contamination after TCE and PCE were identified. The former officials recalled that they did not take additional steps because at that time they had little knowledge of TCE and PCE, there were no regulations establishing enforceable limits for these chemicals in drinking water, and variations in water testing results raised questions about the tests’ validity. In 1984 and 1985, NACIP, a Navy environmental program, identified VOCs, including TCE and PCE, in 12 of the wells serving the Hadnot Point and Tarawa Terrace water systems. Camp Lejeune officials removed 10 wells from service in 1984 and 1985. Additionally, information about the contamination was provided to residents. 
Upon investigating the contamination, DOD and North Carolina officials concluded that both on- and off-base sources were likely to have caused the contamination in the Hadnot Point and Tarawa Terrace water systems. Since 1989, federal, state, and Camp Lejeune officials have partnered to take actions to clean up the sources of contamination and to monitor and protect the base’s drinking water. The presence of VOCs in Camp Lejeune water systems was first detected in October 1980. On October 1, 1980, samples of water were collected from all eight water systems at Camp Lejeune by an official from LANTDIV, a Navy entity that provided environmental support to Camp Lejeune. The water samples were combined into a single sample, and a “priority pollutant scan” was conducted in order to detect possible contaminants in the water systems. The results of this analysis, conducted by a Navy-contracted private laboratory and sent to LANTDIV, identified 11 VOCs, including TCE, at their detection limits, that is, the lowest level at which the chemicals could be reliably identified by the instruments being used. LANTDIV officials we interviewed said they do not remember why this testing was conducted. A memorandum written by a Camp Lejeune environmental official noted that LANTDIV initiated the testing because North Carolina had assumed responsibility in March 1980 for oversight of the Safe Drinking Water Act and therefore would have the right to sample and test the drinking water at Camp Lejeune for any contaminants regulated under the act. The memorandum stated that LANTDIV officials were concerned that the state’s testing might discover problems that the Navy had not previously identified. The Camp Lejeune memorandum characterized the 1980 analysis as indicating “no problems” from the pollutants when the samples from eight water systems were tested as one combined sample, but also noted that this might not have been true if the samples had been analyzed individually.
Current and former LANTDIV officials told us that they did not recall any actions taken as a result of this analysis. Separately, in 1980 the Navy began monitoring programs for TTHMs at various Navy and Marine Corps bases, including Camp Lejeune, in preparation for meeting a future EPA drinking water regulation. LANTDIV arranged for an Army laboratory to begin testing the treated water from two Camp Lejeune water systems, Hadnot Point and New River, in October 1980. At that time, these two water systems were the only ones that served more than 10,000 people and therefore would be required to meet the future TTHM regulation. From October 1980 to September 1981, eight samples were collected from the Hadnot Point water system and analyzed for TTHMs. Results from four of the eight samples indicated the presence of unidentified chemicals that were interfering with the TTHM analyses. Reports for each of the four analyses contained an Army laboratory official’s handwritten notes about the unidentified chemicals: two of the notes classified the water as “highly contaminated” and notes for the other two analyses recommended analyzing the water for organic compounds. The exact date when LANTDIV officials began receiving results from TTHM testing is not known, and LANTDIV officials told us that they had no recollection of how or when the results were communicated from the Army laboratory. Available Marine Corps documents indicate that Camp Lejeune environmental officials learned in July 1981 that LANTDIV had been receiving the results of TTHM testing and was holding the results until all planned testing was complete. Subsequently, Camp Lejeune environmental officials requested copies of the TTHM results that LANTDIV had received to date, and LANTDIV provided these results in August 1981. 
The next documented correspondence from LANTDIV to Camp Lejeune regarding TTHM monitoring occurred in a February 1982 memorandum in which LANTDIV recommended that TTHM monitoring be expanded to all of Camp Lejeune’s water systems and noted that Camp Lejeune should contract with a North Carolina state-certified laboratory for the testing. In early 1981, additional water testing unrelated to the TTHM monitoring began at the Rifle Range area within Camp Lejeune for various contaminants, including TCE and PCE. A former Camp Lejeune official recalled that the testing was initiated because of concerns about chemicals that had been buried at Rifle Range. In March, April, and May 1981, water samples were collected from areas surrounding the chemical dump, including a nearby creek; treated water from the Rifle Range water system; and untreated water from the individual wells serving the water system. These water samples were sent to a Navy-contracted private laboratory for analysis, and the results were sent to a LANTDIV official in April and May 1981. The results for the samples collected from the areas surrounding the chemical dump identified VOCs, including TCE and PCE. The results for the samples collected from the water system’s treated water and for the samples from the untreated water from the individual wells also identified VOCs. In July 1981, LANTDIV communicated the results to Camp Lejeune officials and noted that one of the VOCs detected was a trihalomethane and arrangements had been made to add the Rifle Range water system to the base TTHM testing. LANTDIV also recommended that no further action be taken until additional data became available from TTHM monitoring or the planned NACIP program to identify, assess, and control environmental contamination. 
Current and former LANTDIV officials recalled that their agency played a limited role in providing information or guidance regarding environmental issues at Camp Lejeune, and that this assistance generally would have been at the request of Camp Lejeune officials. However, former Camp Lejeune environmental officials recalled that at that time they had little experience in water quality issues and relied on LANTDIV to serve as their environmental experts. Documents from 1981 indicate that LANTDIV officials continuously communicated information about the Rifle Range area to Camp Lejeune environmental officials, including providing sampling results, discussing the implications of these results, providing copies of related regulations and standards, and making recommendations for additional action. (See app. II for a more detailed description of selected events related to drinking water contamination at Camp Lejeune from 1980 through 1981.) Following LANTDIV’s recommendation to expand TTHM monitoring to all base water systems, Camp Lejeune officials contracted with a private state-certified laboratory to test samples of treated water from all eight of their water systems. According to an August 1982 memorandum, in May 1982 a Camp Lejeune official was informed during a telephone conversation with a private laboratory official that organic cleaning solvents, including TCE, were present in the water samples for TTHM monitoring from the Hadnot Point and Tarawa Terrace water systems. In July 1982, additional water samples from the two systems were collected in an effort to investigate the presence of these chemicals. In August 1982 the contracted laboratory sent a letter to base officials informing them that TCE and PCE were identified from the May and July samples as the contaminants. According to the letter, the testing determined that the Hadnot Point water system was contaminated with both TCE and PCE and the Tarawa Terrace water system was contaminated with PCE. 
The letter also noted that TCE and PCE “appeared to be at high levels” and were “more important from a health standpoint” than the TTHM monitoring. Sampling results indicated that the levels of TCE and PCE varied. The letter noted that one sample taken in May 1982 from the Hadnot Point water system contained TCE at 1,400 parts per billion and two samples taken in July 1982 contained TCE at 19 and 21 parts per billion. Four samples taken in May 1982 and July 1982 from the Tarawa Terrace water system contained levels of PCE that ranged from 76 to 104 parts per billion. (See table 2 for the May and July 1982 sampling results.) Former Camp Lejeune environmental officials recalled that after the private laboratory identified the TCE and PCE in the two water systems, they did not take additional steps to address the contamination for three reasons. First, they had limited knowledge of these chemicals; second, there were no regulations establishing enforceable limits for these chemicals in drinking water; and third, they made assumptions about why the levels of TCE and PCE varied and about the possible sources of the TCE and PCE. The former Camp Lejeune environmental officials told us that they were aware of EPA guidance, referred to as “suggested no adverse response levels,” for TCE and PCE when these contaminants were identified at Camp Lejeune. However, they noted that the levels of these contaminants detected at Camp Lejeune generally were below those outlined in the guidance. One Camp Lejeune environmental official also recalled that at the time they were unsure what the health effects would be for the lower amounts detected at the base. Additionally, in an August 1982 document and during our interviews with current Camp Lejeune environmental officials, it was noted that EPA had not issued regulations under the Safe Drinking Water Act for TCE and PCE when the private laboratory identified these chemicals in the drinking water. 
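For perspective on the levels reported in the letter, the measured concentrations can be compared against the enforceable limits that EPA later set under the Safe Drinking Water Act. The sketch below is illustrative only; the 5-parts-per-billion maximum contaminant levels it assumes for TCE and PCE are EPA's eventual standards (effective in 1989 and 1992, respectively), not limits that existed when the 1982 samples were taken.

```python
# Illustrative sketch (not from the report): compare the 1982 Camp Lejeune
# sampling results, in parts per billion (ppb), against the maximum
# contaminant levels (MCL) EPA later set. The 5-ppb MCLs for TCE and PCE
# are assumptions drawn from EPA's eventual standards.
MCL_PPB = {"TCE": 5, "PCE": 5}

samples = [
    ("Hadnot Point", "TCE", 1400),   # May 1982
    ("Hadnot Point", "TCE", 19),     # July 1982
    ("Hadnot Point", "TCE", 21),     # July 1982
    ("Tarawa Terrace", "PCE", 76),   # low end of May-July 1982 range
    ("Tarawa Terrace", "PCE", 104),  # high end of May-July 1982 range
]

# For each sample over the limit, record how many times the later MCL
# it contained.
exceedances = [
    (system, chemical, level, level / MCL_PPB[chemical])
    for system, chemical, level in samples
    if level > MCL_PPB[chemical]
]
```

Under these assumed limits, every sample in the letter would have exceeded the standard that later took effect; the 1,400-ppb TCE result is 280 times the eventual MCL, while the July samples exceeded it by roughly fourfold.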
The former Camp Lejeune environmental officials also said that they made assumptions about why the levels of TCE and PCE varied in sampling results and about the possible sources of the TCE and PCE. Specifically, because the levels of TCE and PCE varied, they attributed the higher levels to short-term environmental exposures, such as spilled paint inside a water treatment plant, or to laboratory or sampling errors. Additionally, in an August 1982 memorandum, a Camp Lejeune environmental official suggested that, based on the sampling results provided by the private laboratory, the levels of PCE detected could be the result of using coated pipes in the untreated water lines at Tarawa Terrace. The former Camp Lejeune environmental officials told us that in retrospect, it was likely that well rotation in these water systems contributed to the varying sampling results because the contaminated wells may not have been providing water to the Hadnot Point and Tarawa Terrace systems at any given time. However, both they and current Camp Lejeune environmental officials said that at that time the base environmental staff did not know that the wells serving both systems were rotated. After August 1982, the private laboratory continued to communicate with Camp Lejeune officials about the contamination of treated water from the Hadnot Point and Tarawa Terrace water systems. All eight of Camp Lejeune’s water systems were sampled again for TTHMs in November 1982. In a December 1982 memorandum, a Camp Lejeune environmental official noted that during a phone conversation with a chemist from the private laboratory the chemist expressed concern that TCE and PCE were interfering with Tarawa Terrace and Hadnot Point TTHM samples. The chemist said the levels of TCE and PCE were “relatively high” in the November 1982 samples, though the specific levels of TCE and PCE were not provided to Camp Lejeune officials. 
The private laboratory report providing the November 1982 results said that the samples from Tarawa Terrace “show contamination” from PCE and the samples from Hadnot Point “show contamination” from both TCE and PCE. All eight of Camp Lejeune’s water systems were sampled again for TTHMs in August 1983, and the private laboratory report providing these results said that the samples from Tarawa Terrace “show contamination” from PCE and the samples from Hadnot Point “show contamination” from both TCE and PCE. Former Camp Lejeune environmental officials recalled that they did not take any actions related to these findings. (See app. III for a more detailed timeline of selected events from 1982 through 1983.) In 1982, Navy officials initiated the NACIP program at Camp Lejeune as part of the Navy’s overall strategy to identify, assess, and control environmental contamination at Navy and Marine Corps bases. The first step of the NACIP program was an initial assessment study, which was designed to collect and evaluate evidence that indicated the existence of pollutants that may have contaminated a site or that posed a potential health hazard for people located on or off a military installation. The initial assessment study for Camp Lejeune, which was completed in April 1983, determined that further investigation was warranted at 22 priority sites with potential contamination, including a site near wells that served the Hadnot Point water system. In July 1984, the base initiated a NACIP confirmation study to investigate the 22 priority sites. As a part of the confirmation study, a Navy contractor took water samples from water supply wells located near priority sites where groundwater contamination was suspected. Current and former Camp Lejeune officials told us that previous water samples usually had been collected from treated water at sites such as reservoirs or buildings within the water systems rather than being collected directly from individual wells at Camp Lejeune.
In November 1984, Camp Lejeune officials received sampling results for one Hadnot Point well located near a priority site, which showed that TCE and PCE, among other VOCs, were detected in the well. This well was removed from service, and in December 1984, water samples from six Hadnot Point wells that were located in the same general area and treated water samples from the Hadnot Point water plant were also tested. Results of the analysis of the well samples indicated that both TCE and PCE were detected in one well, TCE was detected in two additional wells, and other VOCs were detected in all six wells. Results for the treated water samples also showed TCE and PCE. Four of these six wells were removed from service, in addition to the original well. For the two wells that were not taken out of service, initial results indicated levels of VOCs, including TCE, but other test results showed no detectable levels of VOCs. Documents we reviewed show that continued monitoring of those two wells indicated no detectable levels of TCE. During December 1984, seven additional samples were taken from the treated water at the Hadnot Point water plant and revealed no detectable levels of TCE or PCE. According to two former Camp Lejeune environmental officials, once the wells had been taken out of service and the samples from the water plant no longer showed detectable levels of TCE or PCE, they believed the water from the Hadnot Point water system was no longer contaminated. Although the December 1984 testing of water from the Hadnot Point water system showed no detectable levels of TCE or PCE, in mid-January 1985 Camp Lejeune environmental staff began collecting water samples from all wells on the base. Sampling results were received in February 1985 and detected VOCs, including TCE and PCE, in 3 wells serving the Hadnot Point water system and 2 wells serving the Tarawa Terrace water system. As a result, those 5 wells were removed from service. 
According to current Camp Lejeune officials, all 10 wells had been removed from service by February 8, 1985. According to memoranda dated March 1985 and May 1985, 1 of the 2 wells removed from service at Tarawa Terrace was used on 1 day in March 1985 and on 3 days in April 1985 for short periods of time to meet water needs at the base. See table 3 for the dates that wells were removed from service and for the levels of TCE and PCE that were detected in the wells prior to their removal from service in 1984 and 1985. See app. IV for the levels of other VOCs that were detected in the wells prior to their removal from service in 1984 and 1985. In addition, while base officials were waiting for the results of samples collected in January 1985 from wells serving Hadnot Point, water from this system was provided to a third water system for about 2 weeks. In late January 1985, a fuel line break caused gasoline to leak into the Holcomb Boulevard water treatment plant. During the approximately 2-week period the treatment plant was shut down, water from the Hadnot Point system was pumped into the Holcomb Boulevard water lines. Former Camp Lejeune environmental officials said that they used water from the Hadnot Point water system because it was the only water system interconnected with the Holcomb Boulevard water system, and because they believed the water from the Hadnot Point water system was no longer contaminated. Prior to restarting the Holcomb Boulevard water system, samples of treated water were tested and no gasoline was detected in any of these samples. However, the samples were found to contain various levels of TCE; these results were attributed to the use of water from the Hadnot Point water system. About 5 days after these samples were taken, the Holcomb Boulevard water system was restarted because the fuel line had been repaired. 
“Two of the wells that supply Tarawa Terrace have had to be taken off line because minute (trace) amounts of several organic chemicals have been detected in the water. There are no definitive State or Federal regulations regarding a safe level of these compounds, but as a precaution, I have ordered the closure of these wells for all but emergency situations when fire protection or domestic supply would be threatened.” The notice asked residents to reduce water use until early June, when the construction of a new water line was to be completed. In May 1985, another article in the base newspaper reported the number of wells that had been removed from service and the reasons for their removal, and noted the potential for a water shortage at Tarawa Terrace as a result. In addition, the Marine Corps provided us with copies of three North Carolina newspaper articles published from May 1985 to September 1985 discussing contamination at Camp Lejeune. All three articles included information about the drinking water contamination and noted that 10 wells serving two water treatment systems at Camp Lejeune had been removed from service. (See app. V for a more detailed timeline of selected documented events from 1984 through 1985.) The sources of past contamination for the Hadnot Point water system have not been conclusively determined. However, DOD officials have estimated that eight contaminated on-base sites in the proximity of the Hadnot Point water system may be the sources of contamination for that water system. (See table 4.) These eight sites were contaminated by leaking underground storage tanks containing fuel, by degreasing solvents, by hazardous chemical spills, and by other waste disposal practices. Efforts by ATSDR are ongoing to conclusively determine the sources of past contamination in the Hadnot Point water system, as well as when the contamination began. 
For the Tarawa Terrace water system, North Carolina officials determined that an off-base source was the likely cause of the drinking water contamination. After the Marine Corps requested assistance in identifying the source of the contamination, North Carolina state officials conducted an investigation from April 1985 through September 1985 to determine whether two off-base dry cleaning facilities located near the two contaminated wells were the sources of the PCE contamination at Tarawa Terrace. The state officials concluded that the contamination likely came from dry cleaning solvent that had been released into a leaking septic tank at one of the cleaners—ABC One Hour Cleaners—which built its septic system and began operation in 1954. Both the dry cleaning facility and its septic tank were located off base but adjacent to a supply well for the Tarawa Terrace water system. Based on the environmental contamination at this site, ABC One Hour Cleaners was designated as a National Priorities List site in 1989. As part of its current health study, ATSDR has estimated that, beginning as early as 1957, individuals were exposed to PCE in treated drinking water at levels equal to or greater than EPA’s maximum contaminant level of 5 parts per billion, which became effective in 1992. Since 1989, officials from Camp Lejeune, North Carolina, and federal agencies, including EPA, have taken actions to clean up the suspected sources of the contamination in the Hadnot Point and Tarawa Terrace water systems. Because the contamination is thought to have come from both on- and off-base sources, and because those sources are part of two separate National Priorities List sites—Camp Lejeune and ABC One Hour Cleaners—cleanup activities for the suspected sources of contamination are being managed separately. 
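The exposure criterion ATSDR applied, levels equal to or greater than the 5 parts per billion maximum contaminant level, amounts to a simple threshold test. The brief sketch below is illustrative only; the well names and concentrations are hypothetical and are not data from this report or from ATSDR.

```python
# Illustrative sketch of a threshold test like the one ATSDR's exposure
# criterion describes. All well names and concentrations are hypothetical.

PCE_MCL_PPB = 5.0  # EPA maximum contaminant level for PCE (effective 1992)

def meets_or_exceeds_mcl(concentration_ppb: float) -> bool:
    """True when a sampled level is equal to or greater than the MCL."""
    return concentration_ppb >= PCE_MCL_PPB

# Hypothetical sampling results, in parts per billion
samples = {"well_A": 2.1, "well_B": 5.0, "well_C": 76.0}
flagged = [well for well, ppb in samples.items() if meets_or_exceeds_mcl(ppb)]
# "well_B" is flagged along with "well_C" because the criterion is
# "equal to or greater than" the MCL, not strictly greater than.
```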
Following Camp Lejeune’s listing as a National Priorities List site in October 1989 and the signing of a Federal Facilities Agreement in February 1991, on-base cleanup activities have been managed by a partnership of DOD, EPA, and North Carolina environmental officials. Cleanup of the eight sites suspected to be possible sources of contamination for the Hadnot Point water system has included the removal of contaminated soils and gasoline storage tanks and the treatment of contaminated groundwater and soils. The cleanup activities at four of the eight sites were completed by 2006. The estimated completion date for cleanup activities of contaminated groundwater and soils at three of the other four sites is 2025. There is no estimated completion date for the fourth site. Funding for the cleanup of the on-base sites has come from Department of the Navy Environmental Restoration Program funds, and Navy officials estimated that about $70 million would be needed to complete the cleanup of all eight sites. Efforts to clean up the suspected source of contamination that affected the Tarawa Terrace water system began after ABC One Hour Cleaners was listed as a National Priorities List site in 1989. Cleanup activities at the site, which have been designed to address both the contaminated groundwater and soil, have been managed by EPA, with support from North Carolina officials. While treatment of some of the areas with contaminated soil has been completed, the EPA official who serves as project manager for the ABC One Hour Cleaners site could not provide an estimated completion date for cleanup of either the soil or the groundwater. Funding for the cleanup of this site comes primarily from the Superfund, though a portion of the funds has been provided by ABC One Hour Cleaners and North Carolina. The total estimated cost for the cleanup of this site is about $4.3 million. 
According to a North Carolina official, North Carolina will assume authority for cleanup at the site in August 2013. Currently, Camp Lejeune uses various methods to monitor and protect the base’s drinking water. In drinking water reports published in 2004 and available on the Camp Lejeune Web site, base officials stated that their efforts to monitor the drinking water supply had met or exceeded all required testing standards. For example, Camp Lejeune reported that “in accordance with Safe Drinking Water Act sampling requirements” it had regularly tested its treated drinking water for more than 80 different EPA-regulated contaminants and additional unregulated contaminants. The reports noted that testing of treated water for VOCs had been conducted on a monthly basis—exceeding the requirement to test every 3 years—“in order to show that there should be no concern about current VOC contamination.” The Camp Lejeune reports stated that the base had sampled the wells at least annually for VOCs. Additionally, the Water Quality Program at Camp Lejeune produces annual reports about each drinking water system on the base in order to inform water consumers about the quality of their water. The 2004 reports also stated that Camp Lejeune officials have undertaken numerous efforts to protect the drinking water supply, including restricting land uses near well fields, locating well fields in undeveloped areas, constructing wells in a manner that minimizes the potential for contamination, and using new technologies to prevent groundwater contamination. Examples of some of these new technologies included a computer-based monitoring system for underground storage tanks that immediately alerts personnel when a leak occurs, and the installation of bullet traps at firing areas, which prevent lead and copper bullets from contaminating the groundwater and soil. 
Concerns about possible adverse health effects and government actions related to the past drinking water contamination have led to additional activities, including health studies, claims against the federal government, and federal inquiries. Activities resulting from concerns about possible adverse health effects began in 1991, when ATSDR initiated a public health assessment that evaluated the possible health risks from past exposure to the contaminated drinking water at Camp Lejeune. The health assessment was followed by two studies, one of which was ongoing as of April 2007. Since ATSDR began its work, the agency has not always received requested funding and has experienced delays in receiving information from DOD entities. However, ATSDR officials said that the agency’s Camp Lejeune-related work was not significantly delayed by DOD. As of January 2007, about 750 claims had been filed by former Camp Lejeune residents and employees against the federal government for injuries alleged to have resulted from past exposure to the contaminated drinking water at Camp Lejeune. Additionally, three federal inquiries into issues related to the drinking water contamination at Camp Lejeune have been conducted, one by a Marine Corps-chartered panel, one by the EPA OIG, and one by the EPA CID. The inquiry conducted by the Marine Corps-chartered panel concluded that the Marine Corps had acted responsibly and found no evidence that the Marine Corps had attempted to cover up information that indicated contamination in Camp Lejeune’s drinking water. However, the Marine Corps-chartered panel also criticized some actions taken by Camp Lejeune and Department of the Navy officials, such as inadequate communications among these entities about the drinking water contamination. The EPA OIG found that some EPA officials’ responses to a citizen’s requests regarding Camp Lejeune-related documents were inadequate or inappropriate. 
The EPA CID investigation did not find any violations of federal law but criticized some actions taken by Marine Corps and Department of the Navy officials, such as a lack of diligence by a Navy environmental support entity in providing technical expertise to Camp Lejeune’s environmental officials. Beginning in 1991, ATSDR has undertaken several activities to study the possible adverse health effects related to the past drinking water contamination at Camp Lejeune, including a public health assessment and two studies. From 1991 to 1997, ATSDR conducted a public health assessment at Camp Lejeune that was required by law because of the base’s listing on the National Priorities List. The health assessment evaluated several ways in which people on base had been exposed to hazardous substances, including exposure to the VOC-contaminated drinking water. ATSDR concluded that (1) cancerous and noncancerous health effects were unlikely in adults exposed to VOC-contaminated drinking water, (2) the likelihood of either noncancerous or cancerous health effects in children could not be determined because of insufficient scientific information, and (3) there was evidence that suggested that, because of their developing systems, individuals who were exposed in utero were potentially more sensitive to the effects of VOCs than individuals who were exposed as adults or children. In its 1997 report, ATSDR recommended that a study be carried out to evaluate the risks of childhood cancer in those who were exposed in utero to the contaminated drinking water and also noted that adverse pregnancy outcomes were of concern. ATSDR officials said that the health assessment did not recommend a study of adverse pregnancy outcomes because such a study was already under way. 
In 1995, while the health assessment was being conducted, ATSDR initiated a study to determine whether there was an association between exposure to VOCs in drinking water and specific adverse pregnancy outcomes among women who had lived at Camp Lejeune from 1968 through 1985. The study, released in 1998, originally concluded that there was a statistically significant elevated risk for several poor pregnancy outcomes, including (1) small for gestational age among male infants born to mothers living at Hadnot Point, (2) small for gestational age for infants born to mothers over 35 years old living at Tarawa Terrace, and (3) small for gestational age for infants born to mothers with two or more prior fetal losses living at Tarawa Terrace. However, ATSDR officials said they are reanalyzing the findings of this study because of an error in the original assessment of exposure to VOCs in drinking water. While the study originally assessed births from 1968 to 1972 in the Holcomb Boulevard service area as being unexposed to VOCs, these births were exposed to contaminants from the Hadnot Point water system. An ATSDR official said the reanalysis may alter the study’s results. In 1999, ATSDR initiated its current study examining whether certain birth defects and childhood cancers are associated with exposure to TCE or PCE at Camp Lejeune. The study examines whether individuals born during 1968 through 1985 to mothers who were exposed to the contaminated drinking water at any time while they were pregnant and living at Camp Lejeune were more likely than those who were not exposed to have neural tube defects, oral cleft defects, or childhood hematopoietic cancers. The current study began with a survey to identify potential cases of the selected birth defects and childhood cancers. 
The study is also using water modeling to help ATSDR determine the potential sources of past contamination and estimate when the water became contaminated and which housing units received the contaminated water. The water modeling data will help ATSDR identify which pregnant women may have been exposed to the contaminated water, and will also help ATSDR estimate the amount of TCE and PCE that may have been in the drinking water. ATSDR officials said that the study is expected to be completed by December 2007. ATSDR also has hosted two expert panel meetings related to the past drinking water contamination at Camp Lejeune. In February 2005, ATSDR hosted an expert scientific advisory panel to explore opportunities for conducting additional health studies of people who were potentially exposed to contaminated drinking water at Camp Lejeune. The agency noted that it convened this panel in response to continuing public concern about health effects from past exposure to contaminated drinking water. ATSDR received nine recommendations from its scientific advisory panel in a final report released in June 2005, which included a recommendation to create an advisory panel to oversee future studies and a recommendation that funding for future studies should come from appropriations to ATSDR, not from DOD’s budget. In a response published in August 2005, ATSDR agreed with all but three of the scientific advisory panel’s recommendations. (See app. VI for ATSDR’s panel recommendations and ATSDR’s response.) ATSDR has taken steps to accomplish three of the recommended activities. In February 2006, ATSDR created a community assistance panel to respond to the two recommendations urging a closer partnership with former Camp Lejeune residents and development of an advisory panel to oversee health studies related to VOC exposures at Camp Lejeune. As of January 2007, the community assistance panel had held four meetings. The panel includes seven former Camp Lejeune residents. 
Also participating in community assistance panel meetings are one representative from DOD, two independent scientific experts, and ATSDR staff. ATSDR officials said the community assistance panel is comparable with other panels that ATSDR had set up for community participation at National Priorities List sites similar to Camp Lejeune. In response to a recommendation to conduct feasibility or pilot studies before beginning full-scale health studies, ATSDR had begun conducting a feasibility assessment to determine the availability and sufficiency of data needed to conduct several additional health studies related to past drinking water contamination. At the February 2006 community assistance panel meeting, the panel members and ATSDR officials agreed that ATSDR should move forward with the initial stages of planning a mortality study and an adult cancer incidence study of those potentially exposed to contaminated water at Camp Lejeune so long as necessary data are available. ATSDR officials said that they had identified databases such as the National Death Index, which contains death records, and state cancer registries that could be used to assist ATSDR with conducting these studies. An ATSDR official said that mortality and cancer incidence studies would potentially be easier to carry out than some other health studies because of the existence of these databases. Since the February 2006 community assistance panel meeting, ATSDR officials have begun reviewing additional databases at the Defense Manpower Data Center and Naval Health Research Center to determine if those databases could be linked to both the National Death Index and state cancer registries, and to Camp Lejeune family housing records. If the feasibility assessment shows that these databases can be used, ATSDR will likely proceed with the two studies, officials said. Additionally, ATSDR officials said they plan to computerize the family housing records at Camp Lejeune that are still in paper format. 
Officials noted that the fully computerized family housing records might be used as the basis for defining a registry of potentially affected residents, as recommended by the scientific advisory panel, if the feasibility assessment indicates that it is possible to obtain social security numbers and dates of birth for each potential member of the registry. In March 2005, ATSDR hosted a separate expert peer review panel to evaluate the agency’s water modeling and data-gathering efforts at Camp Lejeune. In a report published in October 2005, the expert peer review panel on water modeling made two primary recommendations urging the agency to devote additional effort and resources to more rigorous record searches to improve the information for the historical reconstruction of events. ATSDR agreed and hired new staff and consultants to begin record searches at Camp Lejeune; however, ATSDR officials did not proceed with their record search after they learned that the Marine Corps had separately hired a private contractor to conduct such a search. The Marine Corps’ private contractor completed its document search in August 2006, which yielded more than 6,000 documents. An ATSDR official told us that during a preliminary review of the documents in July 2006, ATSDR determined that the documents were “extremely useful” for its water modeling activities. The remaining three recommendations of the expert peer review panel on water modeling were technical comments related to modeling activities, such as a recommendation to use simplified models that required less effort and resources. ATSDR officials said that they agreed with these technical recommendations and had subsequently used them to refine their modeling procedures. Since ATSDR began its Camp Lejeune-related work in 1991, the agency has not always received requested funding and has experienced delays in receiving information from DOD entities. 
Although concerns have been raised by former Camp Lejeune residents, ATSDR officials said that these issues have not significantly delayed the agency’s work and that such situations are normal during the course of a study. ATSDR received funding from DOD for 13 of the 16 fiscal years during which it has conducted its Camp Lejeune-related work, and ATSDR provided its own funding for Camp Lejeune-related work during the other 3 years. Under federal law and in accordance with a memorandum of understanding between DOD and ATSDR, DOD is responsible for funding public health assessments and any follow-up public health activities such as health studies or toxicological profiles related to DOD sites as agreed to in an annual plan of work. While ATSDR conducted the health assessment at Camp Lejeune, from fiscal year 1991 to fiscal year 1996 funding was provided by DOD as part of an annual payment for all ATSDR activities at DOD sites. These annual payments were provided from Defense Environmental Restoration Program funds. In fiscal year 1997, the individual military services assumed responsibility for making these payments. Therefore, for fiscal year 1997, funding for ATSDR’s Camp Lejeune-related work came directly from the Navy (see table 5). From fiscal year 1998 through fiscal year 2000, no funding was provided to ATSDR by the Navy or any DOD entity for its Camp Lejeune-related work because the agencies could not reach agreement about the funding for Camp Lejeune. In June 1997, ATSDR proposed conducting a study of childhood leukemia and birth defects associated with TCE and PCE exposure at Camp Lejeune during fiscal years 1998 and 1999 at an estimated cost of almost $1.8 million. In a July 1997 letter to the Navy, an ATSDR official noted that during a June meeting the Navy appeared to be reluctant to fund the proposed study; however, the official noted that DOD was liable for the costs of the study under federal law. 
In an October 1997 letter responding to ATSDR, a senior Navy official stated that the Navy did not believe it should be required to fund ATSDR’s proposed study because the cause of the contamination was an off-base source, ABC One Hour Cleaners. The Navy official said that it was more appropriate for ATSDR to seek funding for the study from the responsible party that caused the contamination. However, ATSDR officials told us that while they expected that the study would focus primarily on contamination from the dry cleaner, the study was also expected to include people who were exposed to on-base sources of contamination. An ATSDR official reported that the agency submitted its funding proposals for the Camp Lejeune study to DOD in each of the annual plans of work from fiscal year 1998 to fiscal year 2000, but that during that time period the agency received no DOD funding and funded its Camp Lejeune-related work from general ATSDR funding. In fiscal year 2001 the Navy resumed funding of ATSDR’s Camp Lejeune-related work. We could not determine why the Navy decided to resume funding of ATSDR’s work at that time. Beginning in fiscal year 2003, funding for ATSDR’s Camp Lejeune-related work has been provided by the Marine Corps. According to a DOD official, the Marine Corps has committed to funding the current ATSDR study. The DOD official also noted that per a supplemental budget request from ATSDR for fiscal year 2006, the Marine Corps agreed to fund community assistance panel meetings and portions of a feasibility assessment for future studies that will include computerization of Camp Lejeune housing records. ATSDR has experienced some difficulties obtaining information from Camp Lejeune and DOD officials. 
For example, in September 1994, while conducting its public health assessment, ATSDR sent a letter to the Department of the Navy noting that ATSDR had had difficulties obtaining documents needed for the public health assessment from Camp Lejeune, such as Remedial Investigation documents. The letter also noted that ATSDR had sent several requests for information and that Camp Lejeune’s responses had in most cases been inadequate, with no supporting documentation forwarded. ATSDR also had difficulty in obtaining access to DOD records while preparing to conduct its survey, the first phase of the current ATSDR health study. In October 1998, ATSDR requested assistance from the Defense Manpower Data Center, which maintains archives of DOD data, in locating residents of Camp Lejeune who gave birth between 1968 and 1985 on or off base. An official at the Defense Manpower Data Center initially did not provide the requested information because he believed that doing so could constitute a violation of the Privacy Act. Between February and April 1999, Headquarters Marine Corps facilitated discussion between ATSDR and relevant DOD entities about these Privacy Act concerns, and some information was subsequently provided to ATSDR by DOD. In April 2001, Headquarters Marine Corps sent a letter to the Defense Privacy Office suggesting that the Defense Manpower Data Center had only provided a limited amount of information to ATSDR. However, in a July 2001 reply to Headquarters Marine Corps, the Defense Privacy Office noted that it believed that relevant data had been provided to ATSDR by the Defense Manpower Data Center in 1999 and 2001. In December 2005, ATSDR officials told us that they had recently learned of a substantial number of additional documents that had not been previously provided to them by Camp Lejeune officials. 
ATSDR then sent a letter to Headquarters Marine Corps seeking assistance in resolving outstanding issues related to delays in the provision of information and data to ATSDR. In an attachment to the letter, ATSDR provided a list of data and information needed from the Marine Corps in order to complete water modeling activities for its current study. In a January 2006 response, a Headquarters Marine Corps official noted that a comprehensive review was conducted of responses to ATSDR’s requests for information and that the Marine Corps believed it had made a full and timely disclosure of all known and available requested documents. The official also noted that while ATSDR had requested that the Marine Corps identify and provide documents that were relevant or useful to ATSDR’s study, the Marine Corps did not always have the subject matter expertise to determine the relevance of documents. The official noted that the Marine Corps would attempt to comply with this request; however, the official also noted that ATSDR was the agency with the expertise necessary to determine the relevance of documents. Despite difficulties, ATSDR officials said the agency’s Camp Lejeune-related work had not been significantly delayed or hindered by DOD. Officials said that while funding and access to records were probably slowed down and made more expensive by DOD officials’ actions, these actions did not significantly impede ATSDR’s health study efforts. The ATSDR officials also stated that while issues such as limitations in access to DOD data had to be addressed, such situations are normal during the course of a study. The officials stated that ATSDR’s progress on the study has been reasonable in light of the complexity of the project. Nonetheless, as some former residents have learned that ATSDR has not always received requested funding and information from DOD entities, they have raised questions about DOD’s commitment to supporting ATSDR’s work. 
For example, when some former residents learned during a community assistance panel meeting that it took about 4 months for DOD to respond to a supplemental budget request from ATSDR for fiscal year 2006, they questioned DOD entities’ commitment to ATSDR’s Camp Lejeune-related work. However, DOD and ATSDR officials described this delay in responding as typical during the funding process. Some former residents have filed tort claims and lawsuits against the federal government related to the past drinking water contamination. As of January 2007, about 750 former residents and former employees of Camp Lejeune have filed tort claims with the Department of the Navy related to the past drinking water contamination. According to an official with the U.S. Navy Judge Advocate General (JAG)—which is handling the claims on behalf of the Department of the Navy—the agency is currently maintaining a database of all claims filed. The official said that JAG is awaiting completion of the current ATSDR health study before deciding whether to settle or deny the pending claims in order to base its response on as much objective scientific and medical information as possible. As of February 2007, two of these claims had resulted in the filing of lawsuits in Federal District Courts in Texas and Mississippi. Among other things, both lawsuits seek damages for various physical ailments and emotional distress alleged to have resulted from the government’s negligence in protecting the water supply at Camp Lejeune. In the first lawsuit, a former servicemember’s son alleged that he suffered a congenital heart defect as a result of his mother’s exposure (while pregnant with him) as well as his subsequent direct exposure to contaminated water at Camp Lejeune during the early 1970s. The outcome of the lawsuit was still pending as of February 2007. 
In the second lawsuit, a former servicemember and his family alleged injuries as a result of their past exposure to TCE and PCE while living at Camp Lejeune. The claims of the former servicemember and his wife were dismissed because his alleged injuries occurred while he was on active duty in the Marine Corps. An appeal of the claims of the former servicemember and his family members remained pending in February 2007.

Three federal inquiries into issues related to the drinking water contamination at Camp Lejeune have been conducted, each of which cited concerns by former residents as one of the reasons for its inquiry. These include one by a Marine Corps-chartered panel, one by EPA’s OIG, and one by EPA’s CID. In March 2004 the Commandant of the Marine Corps created a fact-finding panel charged with conducting a review of the facts surrounding the decisions made following the 1980 discovery of VOCs in drinking water at Camp Lejeune. The panel focused its review on the 1980 to 1985 time period. The panel released a report in October 2004, which found that the Marine Corps had acted responsibly and found no evidence that the Marine Corps had attempted to cover up information indicating contamination in Camp Lejeune’s drinking water. Additionally, the panel concluded that Camp Lejeune provided residents with drinking water at a level of quality consistent with general utility practices at the time. However, the panel noted that while Camp Lejeune made every effort to comply with existing regulations, it did not anticipate or independently evaluate health risks associated with chemicals such as TCE or PCE that were not yet regulated, and for which there was developing concern about possible adverse health effects.
The panel noted that this “compliance-based approach to regulations,” combined with factors including inadequate funding, staffing, and training of Camp Lejeune’s Environmental Division, contributed to a lack of understanding about the potential significance of the contamination. Additionally, the panel identified other factors that appeared to have hindered Camp Lejeune personnel from quickly recognizing the significance of VOC contamination, including the absence of regulatory standards, no records of resident complaints about water quality, sampling errors, and inconsistent sampling results. The panel also made several other findings critical of Camp Lejeune and the Department of the Navy, noting that:

LANTDIV, as a technical advisory organization, was “not aggressive” in providing Camp Lejeune with the technical expertise to help base officials understand the significance of the contamination and how it could have been addressed;

communications both internally among Camp Lejeune officials, and between Camp Lejeune and LANTDIV, were inadequate; and

communications to Camp Lejeune residents regarding drinking water contamination were not detailed enough to completely characterize the contamination found at the time of the well closures.

In January 2005 EPA’s OIG completed an internal report describing a preliminary review of five complaints reported by three citizens regarding issues indirectly or directly related to the drinking water contamination at Camp Lejeune. The complaints were as follows:

1. EPA inadequately responded to a Freedom of Information Act request,

2. EPA inappropriately responded to a Freedom of Information Act fee waiver request,

3. EPA did not adequately perform oversight of Camp Lejeune based on its responsibilities listed in the Safe Drinking Water Act,

4. EPA did not devote adequate resources to the review that was being conducted by its Criminal Investigation Division, and

5. the 1998 study conducted by ATSDR was inadequate.
The OIG conducted a preliminary review of these complaints to determine whether the complaints merited a full-scale audit of EPA activities. Regarding the first two complaints, the OIG determined that EPA’s response to a Freedom of Information Act request for documents related to Camp Lejeune contamination was inadequate and that its denial of an associated fee waiver request was inappropriate and insensitive. The third complaint was closed because the OIG concluded that EPA had little oversight responsibility for the Safe Drinking Water Act until 1996, significantly later than the contamination occurred at Camp Lejeune. The OIG found no merit with the fourth complaint, noting that although only one agent was assigned to the case, that agent had access to other agents and resources when needed. OIG officials said the fifth complaint was closed in part because they knew we would also be reviewing this concern, and also because complaints regarding ATSDR’s study are not related to any actions by EPA and are therefore outside the scope of an EPA review. Based on this preliminary review, a full audit of EPA officials’ actions was not initiated.

A criminal investigation conducted by EPA and reviewed by the Department of Justice (DOJ) did not find any violations of federal law, but criticized some of the actions taken by Marine Corps and Navy officials. From 2003 through 2005, EPA’s CID conducted an investigation of allegations made by former residents that federal law was violated by the individuals and entities addressing the drinking water contamination at Camp Lejeune, including officials from the Marine Corps, Navy, and ATSDR. With regard to the Navy and Marine Corps, the CID investigated five principal allegations of violation of federal law:

1. violation of the Safe Drinking Water Act,

2. conspiracy to violate the Safe Drinking Water Act,

3. conspiracy to conceal records and prevent persons from talking with a federal agency conducting a congressionally mandated health study,

4. conspiracy to conceal Freedom of Information Act records from the public, and

5. providing material false statements to a federal law enforcement officer.

The CID concluded that in the absence of enforceable regulatory standards for both TCE and PCE between 1980 and 1985, there was no violation of the Safe Drinking Water Act at that time, and drinking water provided by Camp Lejeune during that time appeared to have met all state and federal regulatory requirements. A CID investigator told us that he looked for evidence of conspiracy from the 1980s, when the events occurred, through 2004. With regard to allegations that Marine Corps or Navy officials conspired to violate the Safe Drinking Water Act or to conceal records, the CID’s report noted that investigators were unable to substantiate that a conspiracy by military or civilian employees of either entity existed. Regarding allegations that false statements were provided to a federal law enforcement officer, investigators noted that while they were concerned that LANTDIV officials were not completely forthcoming during their interviews, there was never any direct evidence that LANTDIV officials were aware of the contamination prior to 1984. With regard to ATSDR, the CID investigated two principal allegations made by former residents of Camp Lejeune:

1. destruction of a federal agency’s records, and

2. conspiracy to improperly administer a congressionally mandated health study.

Regarding an alleged order by an ATSDR official to destroy records related to the Camp Lejeune health study, CID investigators found that the records in question were never destroyed.
Concerning allegations that ATSDR failed to properly address the drinking water contamination at Camp Lejeune because of influence from the Navy, the CID found no evidence that ATSDR’s scientific work was influenced by regular meetings between ATSDR and Navy officials. Although the CID found no evidence that federal law had been violated, because of the unique history and complexity of the case and an evaluation of statements from persons they interviewed, investigators noted that the case warranted a review by DOJ. Additionally, several of the allegations from the public had also been forwarded by DOJ to the CID for investigation. Following the CID’s referral of this case to DOJ for its review, DOJ discussed its findings at an August 2005 meeting with former residents and officials from the Navy and Marine Corps. DOJ concluded that it would not seek criminal prosecution, saying that the government’s investigation had concluded that no federal criminal law was broken, nor was there an attempt to conceal evidence regarding a violation of any law. In addition to investigating whether federal law had been violated, the CID also investigated additional questions that were relevant to the case but were determined not to be violations of federal law. The CID noted that some of these matters appeared to have contributed to confusion, suspicion, and concern among retired Marines. Additionally, the CID commented on and criticized certain actions taken by Navy and Marine Corps officials. For example, the CID concluded that as a technical advisory agency to Camp Lejeune, LANTDIV was not diligent in providing technical expertise to the base’s environmental officials and noted that LANTDIV officials appeared to have been better suited by virtue of their training and expertise to recognize and address VOC contamination and the possible effects on public health than the environmental officials at Camp Lejeune.
The CID commented that former Camp Lejeune environmental officials failed to properly investigate the contamination and determine that the contamination was coming from individual wells. Until 1984, Camp Lejeune environmental officials never sampled individual water wells, which the CID noted was arguably their most significant lapse in judgment. Because of questions raised by Congress and former residents, the CID also investigated the provision of DOD funding for ATSDR’s work. The CID concluded that funding for the current study was apparently delayed because of opposition from a midlevel manager at the Navy Environmental Health Center, opposition that was characterized as a professional difference of opinion as to the scientific value of the study. Coupled with this opposition was confusion within the Navy hierarchy regarding which entity was responsible for the contaminated wells. Regarding the provision of records and data to ATSDR by the Marine Corps, the CID found no instances when data or records were intentionally withheld or false data were provided by Marine Corps officials to ATSDR. The CID noted that the Marine Corps appeared not to have recognized the complexity and degree of attention this issue required in 1997 and that prior to 1997, the Marine Corps admitted that it failed to adequately address concerns and data requests from the public and ATSDR.

The seven members of an expert panel convened by the National Academy of Sciences (NAS) at our request generally agreed that specific parameters of ATSDR’s current study were appropriate, including the study population, the exposure time frame, and the selected health effects. The expert panel members had mixed opinions on ATSDR’s projected completion date. Some panel experts suggested modifying the study to use a simpler method of analysis, with alternative ways to define exposure categories, in order to complete the study sooner.
Some panel experts also identified other potential modifications to the study, such as conducting separate analyses for those who were born on the base and those born off the base. (See app. VII for a more detailed description of ATSDR’s study.) The seven panel experts concurred that ATSDR logically limited its study population to those individuals who were in utero while their mothers were pregnant and lived at Camp Lejeune during the 1968 through 1985 time frame, and who may have been exposed to the contaminated drinking water. The current study follows recommendations from the agency’s 1997 public health assessment of Camp Lejeune, which noted that studies of cancer among those who were exposed in utero should be conducted to further the understanding of the health effects in this susceptible population. Panel experts said that ideally a study would attempt to include all individuals who were potentially exposed, but that limited resources and data availability were practical reasons for limiting the study population. Additionally, panel experts agreed that those exposed while in utero were an appropriate study population because they could be considered at higher risk of adverse health outcomes than others, such as those exposed as children or adults. In addition, two panel experts said that studying only those who lived on base was reasonable because they likely had a higher risk of inhalation exposure to VOCs such as TCE and PCE; inhalation exposure may be more potent than ingestion exposure. Thus, pregnant women who lived in areas of base housing with contaminated water and conducted activities during which they could inhale water vapor—such as bathing, showering, or washing dishes or clothing—likely faced greater exposure than those who did not live on base but worked on base in areas served by the contaminated drinking water.
While supporting the decision to limit the study population to individuals who were in utero, the panel experts did not discount the possibility that children and adults who lived or worked on base may also be at risk for adverse health effects because of their potential exposure to contaminated drinking water. For example, four panel experts pointed out that exposed children and adults might have an elevated risk for neurological effects, and one of the four experts said exposed adults might have an elevated risk for certain cancers. Similarly, the ATSDR scientific advisory panel convened in February 2005 identified at least four groups of individuals at Camp Lejeune who might be at higher risk for adverse health effects because they could have been exposed to the contaminated drinking water. In addition to individuals who were in utero, these groups included children who lived on base, adults who lived on base, and adults who lived off base but worked on base, because they too spent time at Camp Lejeune and were potentially exposed to the contaminated drinking water. The seven panel experts agreed that the 1968 through 1985 study time frame was reasonable, based on limitations in data availability. This time frame was adopted from ATSDR’s 1998 study of adverse pregnancy outcomes, which limited the study population to include those potentially exposed between 1968 and 1985. According to ATSDR’s study protocol, these years were chosen because 1968 was the first year that birth certificates were computerized in North Carolina and 1985 was when the affected water wells were removed from service. Four of the panel experts said they did not see any benefit in using an earlier start date than 1968 because collecting birth records from before 1968 could require a significant amount of resources.
In addition, while the initial exposure to contaminated drinking water may have occurred as early as the 1950s, at the time the ATSDR study time frame was selected officials were unable to determine precisely when the contamination began. Four of the panel experts commented that exposure was likely highest in the latter part of the study time frame—presumably as a result of a higher accumulated level of contamination over time—thus making the uncertainty of when the contamination began less significant and supporting ATSDR’s decision to study the later time frame. Six of the panel experts said that extending the time frame past 1985 could help strengthen ATSDR’s study by adding an additional unexposed population for comparison. Having an additional comparison population could help researchers reinforce any conclusions about whether TCE or PCE are associated with adverse health outcomes, panel experts said. For example, if the study found some association between adverse health outcomes and the pre-1985 exposed population, but no association with an additional unexposed comparison group, it would support any finding that TCE or PCE exposure was associated with adverse health outcomes, since the exposure ended in 1985. Two of the expert panel members said that if adverse health effects continued to be found in a comparison population after 1985, that finding could mean that exposure to the contaminated drinking water was not associated with the adverse health effects. However, one of the six experts also noted that extending the study time frame would be cost effective only if a significant association between TCE or PCE exposure and adverse health outcomes was first found among those exposed before 1985. The five panel experts who discussed health effects said that those selected for the study were valid for individuals who were potentially exposed in utero at Camp Lejeune. 
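The panel experts’ reasoning about an additional post-1985 comparison group is, at bottom, an argument about measures of association such as odds ratios. The sketch below illustrates that logic with entirely hypothetical counts; none of these numbers come from ATSDR’s study or from Camp Lejeune records, and the simple 2x2 calculation stands in for the more elaborate analyses an actual epidemiologic study would use.

```python
# Hypothetical illustration of the comparison-group logic described by the
# panel experts. All counts are invented for illustration only; they are
# not data from ATSDR's study or from Camp Lejeune records.

def odds_ratio(cases_exposed, controls_exposed, cases_unexposed, controls_unexposed):
    """Odds ratio from a 2x2 table of case/control counts."""
    return (cases_exposed * controls_unexposed) / (controls_exposed * cases_unexposed)

# Pre-1985 births in housing served by the contaminated wells, compared
# with a contemporaneous unexposed group: an elevated odds ratio would
# suggest an association with the health outcome.
or_pre_1985 = odds_ratio(cases_exposed=12, controls_exposed=488,
                         cases_unexposed=6, controls_unexposed=494)

# Post-1985 births in the same housing, after the wells were closed: an
# odds ratio near 1.0 in this added comparison group would reinforce the
# pre-1985 finding, since exposure ended in 1985.
or_post_1985 = odds_ratio(cases_exposed=6, controls_exposed=494,
                          cases_unexposed=6, controls_unexposed=494)

print(round(or_pre_1985, 2))   # elevated in this hypothetical
print(round(or_post_1985, 2))  # near 1.0 in this hypothetical
```

Under these made-up counts, the pre-1985 group shows an elevated odds ratio while the post-1985 group does not, which is the pattern the experts said would support a finding that the exposure, rather than some other feature of base housing, was associated with the outcome.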
Based on previous ATSDR work and existing literature, the health effects chosen for the study were neural tube defects, oral cleft defects, and childhood hematopoietic cancers, including leukemia and non-Hodgkin’s lymphoma. Two panel experts said that ATSDR had limited its study to health effects that are rare and that generally occur at higher levels of exposure to VOCs such as TCE and PCE than are expected to have occurred at Camp Lejeune. They said that this may result in ATSDR not identifying enough individuals with these health effects to determine meaningful results in the study. Four panel experts added that other adverse health outcomes not included in the study could also be related to exposure to drinking water contaminated with TCE or PCE, including adverse neurological or behavioral effects, or pregnancy loss. However, three of these four panel experts said that studying adverse neurological or behavioral health effects would likely be difficult because of limited access to needed records, such as school records for children, or because there might be few databases for researchers to use to study these effects in adults. ATSDR has projected a December 2007 completion date for the study, which would include activities such as identifying and enrolling study participants, conducting a parental interview, confirming each reported diagnosis, modeling the water system to quantify the amount and extent of each individual’s exposure, analyzing the data, and drafting a final report. Panel experts had mixed opinions regarding ATSDR’s completion date. Of the five panel experts who commented on the proposed completion date, three said that the date appeared reasonable, and two others said that based on the complexity of the water modeling the projected completion date might be optimistic. 
While none of the panel experts said that ATSDR’s projected completion date should be earlier, several said that one way to provide analytical results sooner would be to conduct the study without using the water modeling analysis. Three of the experts explained that water modeling would be useful if it improved the classification of the study participants as either exposed or unexposed to contaminated water or provided more accurate estimates of individual exposure levels, as ATSDR intends. ATSDR officials said that a precise and accurate exposure assessment would enhance the scientific credibility of a study and strengthen the study’s ability to identify any important exposure effects. However, all of the panel experts raised concerns about the limited historical record of PCE and TCE concentrations measured at individual Camp Lejeune wells. They said that with such limited historical data there would be minimal potential for water modeling to provide accurate information about contaminant concentration levels and thus about each individual’s total exposure. As an alternative to estimating the extent of each study individual’s exposure using the water modeling results, four panel experts suggested ATSDR could use simpler categories of whether and to what extent individuals were exposed to water contamination. These four experts said that analyzing the data on birth defects and childhood cancers by using the same exposure categories that were used in the 1998 ATSDR study could yield an effective study sooner than December 2007. The current ATSDR study expects to use more categories of exposure than were used in the 1998 study, based on data from its water modeling activities and from information gathered on the mothers’ usage and consumption of the contaminated water. Panel experts identified several other possibilities for modifying the design of the ATSDR study.
Four panel experts suggested conducting separate analyses for study individuals born in the county where Camp Lejeune is located, and for individuals who were born outside the county but whose mothers were pregnant with them while living in base housing. Word of mouth among current and former residents and media campaigns were the primary methods used to identify and recruit those individuals born outside the county as study participants. According to three panel experts, the methods used to identify these study participants raise the possibility of selection bias for that group. Specifically, the experts suggested that eligible study individuals born out of county, or their parents, who had concerns about potential exposure to TCE or PCE or about existing health problems may have been more likely to sign up for the study than those who did not have these concerns. Selection bias could result in a mistaken estimate of an exposure’s effect on the risk of disease. As another potential study modification, two panel experts suggested conducting separate analyses for those with childhood leukemias and non-Hodgkin’s lymphoma, which they said ATSDR had inappropriately combined into one category of hematopoietic cancers. ATSDR study investigators had combined these health outcomes into one category following advice from the ATSDR scientific advisory panel at its meeting in February 2005. Before the February meeting, ATSDR study investigators had dropped plans to separately analyze childhood non-Hodgkin’s lymphoma because they were unable to confirm a large enough number of individuals with this type of cancer to further study this health outcome.

DOD, EPA, and HHS provided technical comments on a draft of this report, which we incorporated where appropriate.
We provided the seven former Camp Lejeune residents who are members of the ATSDR community assistance panel for Camp Lejeune the opportunity to provide comments on our draft—three of the panel members provided technical and general oral comments, and four declined to review the draft report. Two of the panel members said that the report should address contaminants other than TCE and PCE with potential adverse health effects, such as benzene, that were identified at Camp Lejeune. Our report focused on TCE and PCE because ATSDR’s health studies have focused on these chemicals and their associated health effects and ATSDR has identified TCE and PCE as the chemicals of primary concern at Camp Lejeune. However, in response to technical comments from ATSDR and the panel members’ comments, we have added the sampling results for all other VOCs detected in wells that were taken out of service at Camp Lejeune during 1984 and 1985. Additionally, the three members expressed the belief that the Marine Corps had not fully disclosed information related to the past drinking water contamination and two of the members expressed disappointment that our report was not more critical of the Marine Corps. We believe that we have accurately described efforts to identify and address the past contamination and described activities resulting from concerns about possible adverse health effects and government actions related to the past contamination. Finally, the three members raised various other issues, such as compensation and health benefits for former residents and their families and the need for additional notification to be provided to former residents regarding the past drinking water contamination; however, these issues were beyond the scope of this report. We are sending copies of this report to the Secretary of Defense, the Administrator of EPA, the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. 
We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7119. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions are listed in appendix VIII.

To examine efforts to identify and address the past drinking water contamination at Camp Lejeune, we obtained and reviewed more than 1,600 documents related to past and current drinking water activities at Camp Lejeune. We focused our review on the past trichloroethylene (TCE) and tetrachloroethylene (PCE) contamination at Camp Lejeune because the Agency for Toxic Substances and Disease Registry (ATSDR) had noted that these chemicals were the volatile organic compounds (VOCs) of primary concern. However, we also reviewed documentation regarding other VOCs detected at Camp Lejeune. The documents we reviewed were obtained from Headquarters Marine Corps and had been collected and organized by a contractor for the Commandant of the Marine Corps’ Drinking Water Fact-Finding Panel for Camp Lejeune. Documents related to past and current drinking water activities were also obtained during a visit to Camp Lejeune. The authors of the documents we collected included officials with Camp Lejeune, Headquarters Marine Corps, the Department of the Navy, other federal agencies such as the Environmental Protection Agency (EPA), the state of North Carolina, and private laboratories. The types of documents that were collected included results of laboratory analyses of drinking water samples, e-mails, memorandums, letters, reports, site maps, federal and state regulations, press releases, and newspaper articles.
Additionally, we reviewed a list of more than 6,000 historical documents collected by a contractor hired by Headquarters Marine Corps; this list was compiled by the contractor and included detailed descriptions and dates of the historical documents. We requested and reviewed more than 100 documents from this list that we thought might be relevant to the past drinking water contamination. We interviewed 39 current and former officials from various Department of Defense (DOD) entities, including Camp Lejeune, Headquarters Marine Corps, and the Department of the Navy, who were involved in activities related to or knowledgeable about historical environmental activities at Camp Lejeune. The former officials we interviewed were responsible for environmental activities at Camp Lejeune or the Department of the Navy during the time in which the contamination was detected. The current officials we interviewed are responsible for environmental activities at Camp Lejeune, Headquarters Marine Corps, or the Department of the Navy. Some of these current officials were also responsible for environmental activities during the time in which the contamination was detected. The current and former officials interviewed often provided information based on their memory of events which occurred more than 20 years ago. We attempted to corroborate their testimonial evidence with documentation whenever possible. We also met with 19 interested former residents and individuals who worked on the base during the 1960s, 1970s, and 1980s in order to obtain their perspective on historical events. A former resident who is active in matters related to the past drinking water contamination at Camp Lejeune identified most of the interested former residents; others were identified at an ATSDR public meeting. We also interviewed current Camp Lejeune housing officials in order to obtain estimated historical occupancy rates, including the limitations of the occupancy data that were provided. 
Additionally, we examined reports from and interviewed current officials from Camp Lejeune, EPA, and the North Carolina Department of Environment and Natural Resources who were involved with or knowledgeable about past and current activities and costs related to the cleanup of the suspected sources of contamination. Finally, we obtained and analyzed information from ATSDR and EPA on drinking water contaminated with TCE and PCE, the possible adverse health effects related to exposure to these chemicals, and relevant federal regulations for TCE and PCE. To describe activities resulting from concerns about the possible adverse health effects and government actions related to past drinking water contamination, including efforts to study potential health effects and federal inquiries into the response to the contamination, we reviewed documents, interviewed agency officials, and attended agency meetings. To examine the activities undertaken by ATSDR to study potential health effects related to the drinking water contamination at Camp Lejeune, we reviewed the agency’s 1997 Public Health Assessment that evaluated the risks of adverse health effects from exposure to the contaminated drinking water, as well as released documents regarding ATSDR’s 1998 health study of the association between exposure to TCE and PCE in drinking water at Camp Lejeune and a variety of adverse pregnancy outcomes. We did not evaluate the methodology or findings of the public health assessment or health study. For ATSDR’s current study, we examined the study protocol, a progress report, and other documents describing ATSDR’s current study examining whether birth defects and childhood cancers are associated with exposure to TCE or PCE at Camp Lejeune. 
We interviewed ATSDR officials involved with the Public Health Assessment, the 1998 study, and the current study, and also attended ATSDR expert panel meetings convened to evaluate and provide recommendations regarding the agency’s work related to Camp Lejeune. In order to examine the sources of and issues surrounding funding for ATSDR’s Camp Lejeune-related work, we obtained documents from and interviewed officials with ATSDR, the Department of the Navy, and the U.S. Army Center for Health Promotion and Preventive Medicine, which currently executes the memorandum of understanding between DOD and ATSDR and negotiates an annual plan of work with ATSDR. We examined documentation and interviewed DOD, ATSDR, and EPA officials about efforts to address the concerns of the former Camp Lejeune residents. To examine the recommendations of additional review panels convened by ATSDR in 2005 regarding improving the study’s water modeling efforts and future studies of health effects, we attended two panel meetings and obtained and reviewed the final reports of both panels, which included ATSDR’s response to the panels’ recommendations. To determine the actions taken by ATSDR to address the panel recommendations, we interviewed relevant ATSDR officials and observed and subsequently reviewed transcripts of meetings of the Camp Lejeune community assistance panel held in 2006, where ATSDR officials reported on their activities. In order to describe the lawsuits and tort claims filed against the federal government for injuries alleged to have resulted from exposure to the contaminated drinking water at Camp Lejeune, we interviewed officials with the Department of the Navy’s Judge Advocate General and the Department of Justice. 
To describe three federal inquiries into issues related to the drinking water contamination at Camp Lejeune, we reviewed the reports and statements of the Drinking Water Fact-Finding Panel for Camp Lejeune, the EPA Office of Inspector General, the EPA Criminal Investigation Division, and the Department of Justice. We also interviewed officials from the EPA Office of Inspector General and the EPA Criminal Investigation Division about their examinations of allegations made by former residents. We did not evaluate the methodology used by the officials who conducted these three inquiries. When the source of evidence we cited is from an interview, we identified the respondent’s agency and noted whether the individual was a current or former official. Whenever possible, we reviewed documents to verify testimonial evidence from DOD and ATSDR officials. When this was not possible, we attempted to corroborate testimonial evidence by interviewing multiple individuals about the information we obtained. To assess the design of the current study by ATSDR on the possible health effects associated with the contaminated drinking water at Camp Lejeune, including the study population, time frame, health effects, and completion date, we contracted with the National Academy of Sciences (NAS) to convene a 1-day meeting of scientific experts in the areas of drinking water contamination, hydrologic modeling, and reproductive health. We identified for NAS the categories of expertise preferred at the meeting and expressed a preference that each participant have no conflict of interest with ATSDR, DOD, or EPA. NAS identified participants according to the preferred categories. Once we concurred with the proposed participants, NAS contacted the potential participants to determine interest and availability to participate in the meeting. In total, seven experts and one moderator participated in the meeting. 
The experts and the moderator had combined research expertise in environmental engineering; reproductive, environmental, and occupational epidemiology; statistics and modeling; public health investigations, risk assessment, and decision analysis; geochemistry; and water and wastewater treatment and water modeling. We observed the meeting, which took place in July 2005, and subsequently reviewed the written transcript of the meeting. The experts’ discussion during the meeting was guided by a set of questions we prepared regarding the ATSDR study population, time frame, health effects, and completion date. Participants were invited as individual experts, not as organizational representatives, and were not asked to reach consensus on any topics. NAS was not asked to provide advice or produce any report, and the comments made during the meeting of the expert panel should not be interpreted to represent the views of NAS or of all experts regarding health studies related to drinking water contamination. As we requested, each of the experts also provided written responses to the set of questions that were discussed during the meeting. During the meeting and in their written responses, not all panel members commented individually about each of the questions discussed during the 1-day meeting. Additionally, some panel members noted that certain questions addressed subjects that were outside their areas of expertise. In addition to convening and attending the expert panel meeting, we also reviewed ATSDR documents related to the current study, including the study protocol and progress reports, and interviewed ATSDR officials involved in the study’s epidemiologic and water modeling activities. We conducted our work from May 2005 through April 2007 in accordance with generally accepted government auditing standards. 
Appendix II: Selected Events Related to Past Drinking Water Contamination at Camp Lejeune from 1980 through 1981

An official with the Naval Facilities Engineering Command, Atlantic Division (LANTDIV), collected samples from all eight water systems at Camp Lejeune to be combined into a single sample and analyzed in order to detect any potential contaminants in the water systems. At the direction of LANTDIV, Camp Lejeune collected separate samples to be analyzed for total trihalomethanes (TTHMs) at two base water systems, Hadnot Point and New River. LANTDIV arranged for the U.S. Army Environmental Hygiene Agency (USAEHA) laboratory to conduct the testing. A LANTDIV-contracted private laboratory reported results from the samples collected on October 1, 1980, from all eight water systems at Camp Lejeune. The results, sent to LANTDIV, indicated that 11 volatile organic compounds (VOCs) were detected, including trichloroethylene (TCE). All VOCs detected in this analysis were identified at their detection limits, which were the lowest levels at which the chemicals could be reliably identified by the instruments being used. A report from USAEHA of the results of the analysis of samples collected on October 21, 1980, contained a USAEHA official's handwritten notes indicating that unidentified chlorinated hydrocarbons were interfering with the testing for TTHMs at the Hadnot Point water system. Handwritten notes from a USAEHA official on a USAEHA report indicated continued interference with the TTHM analysis of samples collected on December 29, 1980, for the Hadnot Point water system and recommended conducting analyses for chlorinated organics. Handwritten notes from a USAEHA official on a USAEHA report indicated continued interference with the TTHM analysis of samples collected on January 30, 1981, for the Hadnot Point water system and recommended conducting analyses for chlorinated organics.
Handwritten notes from a USAEHA official on a USAEHA report indicated that water samples collected on March 9, 1981, for analysis for TTHMs at the Hadnot Point water system were “highly contaminated” with other chlorinated hydrocarbons. According to the private laboratory report sent to LANTDIV, an analysis of water samples collected on March 30, 1981, from areas surrounding the Camp Lejeune Rifle Range chemical dump detected VOCs. However, TCE and tetrachloroethylene (PCE) were not among the VOCs detected in these samples. According to the private laboratory report sent to LANTDIV, an analysis of water samples collected on April 10, 1981, was conducted from the untreated water in the wells that served the Rifle Range water system, from treated water from the Rifle Range water system, and from areas surrounding the Rifle Range chemical dump. VOCs, including TCE and PCE, were detected in water samples from the areas surrounding the chemical dump. VOCs, including TCE, were also detected in the well samples. TCE was detected at 1.8 parts per billion in one of the well samples. The Commander of LANTDIV wrote a memorandum to the Commanding General of Camp Lejeune that recommended resampling the Rifle Range area because of variation in the results from the April 7 and April 16 analysis reports. LANTDIV noted that three contaminants were detected in the treated and untreated water in the Rifle Range water system. Two of these contaminants, methylene chloride and TCE, were not regulated and the third chemical, a TTHM, was detected at levels within the new regulatory standards. The LANTDIV official noted that no imminent threat to human health was presented by consumption of water from the Rifle Range water system. 
According to the private laboratory report sent to LANTDIV, an analysis of water samples collected on May 20, 1981, from treated water in the Rifle Range water system and from areas surrounding the Rifle Range chemical dump detected VOCs in the treated water at the Rifle Range water system and also detected VOCs, including TCE, in areas surrounding the Rifle Range chemical dump. The Commander of LANTDIV wrote a memorandum to the Commanding General of Camp Lejeune that described the analyses of the additional water samples taken from the Rifle Range area. The official noted that of the organic contaminants detected at the Rifle Range area, only one, a TTHM, had an established regulation with a maximum contaminant level, though it did not apply to the Rifle Range water system because this system did not serve more than 10,000 people. The official noted that LANTDIV would add the Rifle Range water system to the TTHM testing that had been initiated in 1980. Additionally, he suggested no further action be taken until the Navy Assessment and Control of Installation Pollutants program and TTHM analysis provided additional data. According to a handwritten note at the end of the memorandum, an environmental official at Camp Lejeune recommended arranging a meeting with the state in order to share these results. The Commander of LANTDIV wrote a memorandum to the Commanding General of Camp Lejeune noting that, in accordance with Camp Lejeune's request, it was providing the summary of TTHM regulations and copies of the TTHM testing reports for the two water systems that met the requirement to be tested. (The Navy Assessment and Control of Installation Pollutants program was established to identify, assess, and control environmental contamination from past hazardous materials storage, transfer, processing, and disposal operations.)
A private laboratory contracted by Camp Lejeune to conduct the TTHM analysis informed Camp Lejeune by telephone that synthetic organic cleaning solvents, including trichloroethylene (TCE), were detected in the samples that were collected from April 19 to April 22, 1982, from the Tarawa Terrace and Hadnot Point water systems. Grainger Laboratory stated that TCE interference with the analysis of the Hadnot Point samples prevented the detection of a precise reading for TTHMs. Camp Lejeune environmental officials took a second set of monthly water samples at the base water systems because of problems with the collection of earlier samples taken from May 17 through May 24, 1982. The private laboratory report of the results of the analysis of monthly samples collected May 27 and May 28, 1982, noted that an unknown compound was interfering with the testing for TTHMs at the Hadnot Point water system. The private laboratory report of the results of the analysis of monthly samples collected June 24 and June 25, 1982, did not specifically note interference with the testing for TTHMs at the Hadnot Point water system, but, as in previous reports, noted that there was some uncertainty in the measurements for this water system. Camp Lejeune environmental officials collected samples, which were in addition to the monthly samples, from the Hadnot Point and Tarawa Terrace water systems. An internal Camp Lejeune memorandum noted that the additional sampling was conducted because the private laboratory identified interference by TCE and another synthetic organic cleaning solvent while analyzing earlier samples from the Hadnot Point and Tarawa Terrace water systems for TTHMs. The private laboratory sent a letter to Camp Lejeune officials stating that the contaminants interfering with the TTHM monitoring at the Tarawa Terrace and Hadnot Point water systems were TCE and tetrachloroethylene (PCE). 
The laboratory noted that these chemicals appeared to be at high levels and were thus more important from a health standpoint than the TTHM levels. The laboratory further noted that the levels of PCE detected in the Tarawa Terrace water system had been relatively stable over the time period examined, while levels of TCE and PCE detected in the Hadnot Point water system had varied, and the most recent Hadnot Point readings had been at significantly lower levels than the levels detected in May. Camp Lejeune officials decided to reduce monitoring for TTHMs from monthly to quarterly for six of the eight water systems, including Tarawa Terrace and Hadnot Point, beginning in September 1982. Officials noted in a memorandum that federal and state regulations required only quarterly sampling. A Camp Lejeune environmental official sent a memorandum to her supervisor that discussed the TTHM sampling and interference at the Tarawa Terrace and Hadnot Point water systems. She explained that the additional samples had been collected on July 28, 1982, to identify the source of the interference in the earlier TTHM testing; TCE and PCE were identified as the interfering chemicals. The official detailed the possible adverse health effects from both TCE and PCE, but further explained that TCE and PCE were not regulated under the Safe Drinking Water Act. However, she noted that the EPA had issued “suggested no adverse response levels” and “suggested action guidance,” which provided some guidance on unregulated contaminants. The official explained that levels of TCE and PCE detected in the Hadnot Point water system were presently within the limits suggested by the suggested no adverse response levels, but she offered no explanation for the higher level detected in samples taken in May 1982 and analyzed in July 1982. 
She also noted that it was possible that the levels of PCE detected in the Tarawa Terrace water system were the result of the use of asbestos-coated pipe in the water lines carrying untreated water. The private laboratory report of the results of the analysis of samples collected in November from all eight water systems for quarterly TTHM testing was provided to Camp Lejeune officials. This report stated that all samples from Tarawa Terrace indicated contamination from PCE and all samples from Hadnot Point indicated contamination from TCE and PCE. An environmental official at Camp Lejeune wrote a memorandum to her supervisor about the TTHM analysis from November 1982. She noted that during a telephone conversation with a chemist at the private laboratory, the chemist had expressed concerns over the solvents that interfered with the Tarawa Terrace and Hadnot Point samples, particularly those from Hadnot Point. According to the memorandum, the chemist told the Camp Lejeune official that while the levels of TCE and PCE had dropped for a period of time, the November samples showed levels of TCE and PCE that were relatively high again. The private laboratory report of the results of the analysis of samples collected on August 25 and August 26, 1983, from all eight water systems for TTHM testing was provided to Camp Lejeune officials. The report stated that all samples from Tarawa Terrace exhibited contamination from PCE and all samples from Hadnot Point exhibited contamination from both TCE and PCE.

The TTHM regulation required water systems that served more than 10,000 people and added a disinfectant as part of the drinking water treatment process to begin mandatory water testing for TTHMs by November 1982 and to comply with the maximum contaminant level by November 1983. Only two water systems at Camp Lejeune, Hadnot Point and New River, served more than 10,000 people when TTHM testing was initiated at Camp Lejeune.

[Table omitted: contaminant concentrations, in parts per billion, detected in individual wells, with the date associated with each well: well 602, Nov. 30, 1984; well 601, Dec. 6, 1984; well 608, Dec. 6, 1984; well 634, Dec. 14, 1984; well 637, Dec. 14, 1984. The samples were taken in 1984 and 1985 and are one-time sampling results.] We did not find documentation that tied the decision to remove the wells from service to any particular level of contamination included in related Environmental Protection Agency (EPA) guidance or enforceable regulation.

Trans-1,2-DCE is a liquid used as a solvent for waxes and resins; in the extraction of rubber; as a refrigerant; in the manufacture of pharmaceuticals and artificial pearls; in the extraction of oils and fats from fish and meat; and in making other organics. EPA has found trans-1,2-DCE to potentially cause central nervous system depression when people are exposed to it at levels above 100 parts per billion for relatively short periods of time. Trans-1,2-DCE has the potential to cause liver, circulatory, and nervous system damage from long-term exposure at levels above 100 parts per billion.

1,1-DCE is a liquid with a mild, sweet, chloroform-like odor. Virtually all of it is used in making adhesives, synthetic fibers, refrigerants, food packaging, and coating resins. EPA has found 1,1-DCE to potentially cause liver damage when people are exposed to it at levels above 7 parts per billion for relatively short periods of time. 1,1-DCE has the potential to cause liver and kidney damage, as well as toxicity to the developing fetus, and cancer from a lifetime exposure at levels above 7 parts per billion.

Toluene is a liquid that occurs naturally in crude oil and in the tolu tree. It is also produced in the process of making gasoline and other fuels from crude oil and making coke from coal. Toluene may affect the nervous system. Low to moderate levels can cause tiredness, confusion, weakness, drunken-type actions, memory loss, nausea, loss of appetite, and hearing and color vision loss. Inhaling high levels of toluene in a short time can result in feelings of light-headedness, dizziness, or sleepiness. It can also cause unconsciousness, and even death. High levels of toluene may affect the kidneys.
Studies in humans and animals generally indicate that toluene does not cause cancer. Well TT-23 is also referred to as "TT-new well" in Marine Corps documents.

Appendix V: Selected Events Related to Past Drinking Water Contamination at Camp Lejeune from 1984 through 1985

Camp Lejeune initiated the Navy Assessment and Control of Installation Pollutants (NACIP) confirmation study. The purpose of the confirmation study was to further investigate potential contamination at 22 priority sites at Camp Lejeune that were identified in an initial assessment study. As part of the confirmation study, sampling began at any well in the vicinity of a priority site where groundwater contamination was suspected. Prior water samples at Camp Lejeune had usually been drawn at the water treatment plants or in the distribution system, not from individual wells. Camp Lejeune officials received results from the confirmation study sampling, which detected trichloroethylene (TCE) and tetrachloroethylene (PCE), among other volatile organic compounds (VOCs), at a well serving the Hadnot Point water system, one of eight water systems at Camp Lejeune. This well was removed from service. Water samples were collected from six Hadnot Point wells and from the untreated and treated water at the Hadnot Point water treatment plant. These wells were sampled because of their proximity to the contaminated well that was removed from service on November 30, 1984. Camp Lejeune officials received results of the analysis of samples collected on December 4, 1984, that indicated three additional wells and the untreated and treated water from the Hadnot Point water system had levels of TCE and PCE, among other VOCs. In one of the wells, TCE was detected at 210 parts per billion (ppb) and PCE was detected at 5 ppb. In the second well, TCE was detected at 110 ppb. In the third well, TCE was detected at 4.6 ppb. The first two wells were removed from service.
A Camp Lejeune official contacted a North Carolina state environmental official by telephone to discuss suspected contamination found in wells, untreated water, and treated water from the Hadnot Point water system. The Camp Lejeune official explained that Camp Lejeune anticipated initiating a resampling program and indicated that some form of information might be released to the public. Samples were again collected from the same seven Hadnot Point wells and the treated water at the Hadnot Point water treatment plant. Separately, daily samples were collected from the untreated water at the Hadnot Point water treatment plant. The base newspaper published its first article about water testing, VOC contamination, and corrective actions taken by base officials, including removing wells from service. The article did not identify TCE or PCE as the VOC contaminants. Camp Lejeune officials received results of the analysis of samples collected on December 10, 1984, that indicated two additional wells in the Hadnot Point water system had significant levels of a VOC, methylene chloride, and that methylene chloride was also detected in a third well. TCE and PCE were not detected in these wells. Two of these three wells were removed from service. Camp Lejeune officials received the results of the analysis of samples that were collected from December 13 to December 19, 1984, at the Hadnot Point water treatment plant. TCE and PCE were not detected in these samples. The director of the NACIP program at Camp Lejeune received a report reviewing the December 1984 sampling of wells, untreated water, and treated water at the Hadnot Point water system. In the report, sampling of all the wells and the water treatment plants at Camp Lejeune was proposed. Samples were collected at all wells serving the Hadnot Point and Holcomb Boulevard water systems to be tested for VOCs.
Samples were collected at all wells serving four other water systems, including Tarawa Terrace, to be tested for VOCs. A fuel line from Holcomb Boulevard water treatment plant leaked fuel into the water system. The Holcomb Boulevard water treatment plant was subsequently shut down and water from the Hadnot Point water system was pumped into the Holcomb Boulevard water lines. Samples were collected at various locations within the Hadnot Point and Holcomb Boulevard water systems for analysis required by North Carolina prior to restarting the Holcomb Boulevard water treatment plant. Camp Lejeune officials received results of the analysis of the samples collected on January 16, 1985, that indicated one additional well in the Hadnot Point water system had significant levels of TCE and PCE, among other VOCs. TCE was detected at 3,200 ppb and PCE was detected at 386 ppb. This well was removed from service. The results also noted that trace amounts of TCE were detected in two other Hadnot Point wells. In one well, TCE was detected at 9 ppb and in the other well TCE was detected at 5.5 ppb. Camp Lejeune officials received results of the analysis of the samples collected on January 31, 1985, from various locations within the Hadnot Point and Holcomb Boulevard water systems. No gasoline was detected in samples from Holcomb Boulevard. However, various levels of TCE were detected in all of the samples; TCE was detected at levels ranging from 24 ppb to 1,148 ppb. The Holcomb Boulevard water treatment plant was restarted. Camp Lejeune officials received results of the analysis of the samples collected on January 23, 1985, that indicated that two wells in the Tarawa Terrace water system had levels of TCE and PCE. In one well, TCE was detected at 57 ppb and PCE was detected at 158 ppb. In the other well, TCE was detected at 5.8 ppb and PCE was detected at 132 ppb. 
The two wells in the Tarawa Terrace water system that were found to be contaminated with TCE and PCE on February 7, 1985, were removed from service. Additionally, the two wells in the Hadnot Point water system that were found to be contaminated with trace levels of TCE and PCE on February 4, 1985, were removed from service. According to an internal Camp Lejeune memorandum, one of the wells removed from service on February 8, 1985, was restarted on March 11, 1985, after samples were taken. After 24 hours of operation, additional samples were taken and the well was removed from service. The Commanding General of Camp Lejeune issued a notice to the residents of Tarawa Terrace housing area regarding problems with the water supply. According to the notice, two of the wells that supplied water to the Tarawa Terrace water system were taken off line because "minute (trace)" amounts of several organic chemicals were detected in the water. The notice stated that there were no regulations regarding safe levels of the organic chemicals found in these wells, but as a precaution the Commanding General had ordered the wells to be removed from service in all but emergency situations. Additionally, the notice provided ways for residents to reduce water usage because of concerns that a water shortage might result following the removal of these wells from service. An article was published in the base newspaper explaining that 10 wells that served the Tarawa Terrace and Hadnot Point water systems were removed from service because of contamination. The article also noted the potential for water shortages in the Tarawa Terrace water system and included information about how to conserve water. An article was published in a North Carolina newspaper providing information similar to that included in the May 9, 1985, base newspaper article regarding the contamination in the Tarawa Terrace and Hadnot Point water systems.
An article was published in a second North Carolina newspaper providing information similar to that included in the May 9, 1985, base newspaper article regarding the contamination in the Tarawa Terrace and Hadnot Point water systems. Camp Lejeune officials sent a memorandum to Headquarters Marine Corps and LANTDIV noting that all 10 contaminated wells remained out of service, although 1 of the contaminated wells at Tarawa Terrace had been used on April 22, 23, and 29 to maintain water production. An article was published in a third North Carolina newspaper that provided information similar to that included in the May 9, 1985, base newspaper article regarding the contamination in the Tarawa Terrace and Hadnot Point water systems.

Agency for Toxic Substances and Disease Registry's (ATSDR) response

1. Create an advisory panel to oversee health studies related to Volatile Organic Chemical (VOC) exposures at Camp Lejeune. Agreed. ATSDR will create a community assistance panel (CAP) comparable to other panels it has set up for community participation at National Priorities List sites. ATSDR recommended that its Camp Lejeune CAP be composed of five or more community members and one or two scientific advisers, along with ex officio members from the Navy. Agreed. ATSDR said it considered interaction with the community an important aspect of its on-site work and planned to continue to work closely with organized community advocacy groups. It agreed to be responsive to recommendations from the CAP. 3. Establish a registry to identify groups of potentially exposed individuals to study, including exposed and unexposed individuals who had lived and/or worked at Camp Lejeune during the period of interest, which would serve as the population base for further studies. Agreed.
In order to identify various distinct groups of individuals with potential exposure, ATSDR said that efforts should be made to determine whether databases exist that would identify these groups, such as children who lived on base and adults who lived or worked on base. However, the agency said that it believed that it had already identified as completely as possible those who may have been exposed while in utero for the years 1968-1985. 4. Conduct various types of feasibility or pilot studies—to determine whether study individuals can be identified and tracked and what types of medical records are available— before embarking on full-scale studies of the impact on health of exposures at Camp Lejeune. Agreed. ATSDR will conduct a feasibility assessment to determine the number of adults and children who could be identified through available data sources. 5. Study additional health outcomes, such as mortality and cancer incidence. Also, conduct feasibility studies of other adverse health outcomes, such as autoimmune diseases; spontaneous abortion; neurological effects; organ failure; adult heart disease; reproductive outcomes of male and female children who were born (or were in utero) at Camp Lejeune; birth defects beyond those considered by ATSDR; and ocular problems. Agreed. ATSDR agreed that mortality and cancer incidence should receive the highest priority and are the outcomes most feasible to study. The agency said that decisions concerning study period, study population, and study outcomes should be made in consultation with the CAP, and said that ATSDR would defer decisions about additional health studies until feasibility studies were completed and reviewed by the CAP. 6. Conduct future research activities in parallel with the current study and without awaiting completion of current ATSDR activities. Agreed. The agency said that its highest priority is to complete the current study.
Development of a CAP and further research activities would likely require additional staffing and resources, which ATSDR said it would request from the Department of Defense (DOD). 7. Amend the 1997 public health assessment to include the possibility that adult cancers and other adverse health outcomes may be related to VOC exposures. Additionally, in the period since release of the original public health assessment, much additional information on exposures at Camp Lejeune and their potential risks has been developed, and this additional material should be incorporated into an amended document. Did not agree. ATSDR said revisions to the assessment would be needed only if new information changed the assessment's conclusions or recommendations. ATSDR noted that its assessment acknowledged that the science was inconclusive and did not rule out the possibility of cancerous health effects from low-dose exposure to VOCs. Did not respond directly. ATSDR indicated that it would work with the CAP to determine effective ways to disseminate information about its current study and any future health studies. 9. Obtain future funding for Camp Lejeune health studies through direct congressional appropriation, not through DOD's budget, to avoid even the appearance of a conflict of interest. Did not agree. ATSDR said it recognized that the affected community had some distrust of ATSDR and DOD, and said that the CAP was intended to help mitigate this distrust. However, ATSDR suggested that DOD is the most likely funding source for these research activities because no other funds are available outside those budgeted to complete the current study.
ATSDR is conducting a study of the potential health effects of in utero and infant (up to 1 year of age) exposure to trichloroethylene (TCE) and tetrachloroethylene (PCE), two volatile organic chemicals found in drinking water at Marine Corps Base Camp Lejeune in the 1980s. ATSDR's study will analyze whether exposure to the TCE- or PCE-contaminated drinking water at Camp Lejeune before birth is associated with increased risks of specific birth defects or childhood cancers. These health effects include (1) neural tube defects, (2) oral cleft defects, and (3) childhood leukemias and non-Hodgkin's lymphoma, which have been combined into one category of hematopoietic cancers. ATSDR's efforts to conduct this study began in 1999 with a telephone survey conducted with parents of 12,598 individuals born to women who were pregnant with them while living in on-base housing at Camp Lejeune any time from 1968 through 1985. Parents were asked whether their child had a birth defect or developed a childhood cancer, along with other questions such as those to confirm residency on base during the specific time period and questions regarding water usage. A total of 106 potential cases of the childhood cancers or birth defects were reported by the interviewed parents. ATSDR reviewed health records in order to verify the reported health problems and had confirmed 57 cases of the childhood cancers or birth defects as of June 2006. (See table 6.) The study population includes the 57 individuals with confirmed health problems and 548 comparison individuals chosen randomly from among the remaining individuals identified in the survey. As part of this study, ATSDR officials are also conducting computer modeling of the drinking water system at Camp Lejeune from 1968 through 1985 in order to determine which pregnant women were probably exposed to the contaminated drinking water and to estimate their levels of exposure.
ATSDR's drinking water distribution system model is based on current and historical information for the base water system as well as historical information on the sources of the contamination. The results of the model are intended to establish whether the mothers of the individuals with the birth defects or childhood cancers were more likely to have been exposed during their pregnancy to the drinking water contaminants than were the mothers of the comparison individuals. ATSDR officials said they did not expect to finalize exposure categories for the current study until February or March 2007, after most water modeling activities were completed, but noted that they would use the water modeling results to assign multiple exposure levels to each study participant. Additionally, data gathered from the survey about the mothers' drinking water and other home water use activities, such as dishwashing, clothes washing, and bathing, will be combined with the estimated exposure levels to create another exposure measure. ATSDR officials also said the current study will analyze results for individuals who were exposed to TCE separately from those exposed to PCE and will analyze cancer and each type of birth defect separately. The study is expected to be completed by December 2007. In addition to the contact named above, Bonnie Anderson, Assistant Director; Karen Doran, Assistant Director; George Bogart; Helen Desaulniers; Cathleen Hamann; Danielle Organek; Roseanne Price; Christina Ritchie; and Stuart Ryba made key contributions to this report.

In the early 1980s, volatile organic compounds (VOCs) were discovered in some of the water systems serving housing areas on Marine Corps Base Camp Lejeune. Exposure to certain VOCs may cause adverse health effects, including cancer.
In 1999, the Department of Health and Human Services' (HHS) Agency for Toxic Substances and Disease Registry (ATSDR) began a study to examine whether individuals who were exposed in utero to the contaminated drinking water are more likely to have developed certain childhood cancers or birth defects. ATSDR has projected a December 2007 completion date for the study. The National Defense Authorization Act for Fiscal Year 2005 required GAO to report on past drinking water contamination and related health effects at Camp Lejeune. In this report, GAO describes (1) efforts to identify and address the past contamination, (2) activities resulting from concerns about possible adverse health effects and government actions related to the past contamination, and (3) the design of the current ATSDR study, including the study's population, time frame, selected health effects, and the reasonableness of the projected completion date. GAO reviewed documents, interviewed officials and former residents, and contracted with the National Academy of Sciences to convene an expert panel to assess the design of the current ATSDR study. Efforts to identify and address the past drinking water contamination at Camp Lejeune began in the 1980s, when Navy water testing at Camp Lejeune detected VOCs in some base water systems. In 1982 and 1983, continued testing identified two VOCs--trichloroethylene (TCE), a metal degreaser, and tetrachloroethylene (PCE), a dry cleaning solvent--in two water systems that served base housing areas, Hadnot Point and Tarawa Terrace. In 1984 and 1985, a Navy environmental program identified VOCs, such as TCE and PCE, in some of the individual wells serving the Hadnot Point and Tarawa Terrace water systems. Ten wells were subsequently removed from service. Department of Defense (DOD) and North Carolina officials concluded that on- and off-base sources were likely to have caused the contamination. It has not been determined when contamination at Hadnot Point began.
ATSDR has estimated that well contamination at Tarawa Terrace from an off-base dry cleaner began as early as 1957. Activities related to concerns about possible adverse health effects began in 1991, when ATSDR initiated a public health assessment evaluating the possible health risks from exposure to the contaminated drinking water. The health assessment was followed by two health studies, one of which is ongoing. While ATSDR did not always receive requested funding and experienced delays in receiving information from DOD for its Camp Lejeune-related work, ATSDR officials said this has not significantly delayed their work. Former residents and employees have filed about 750 claims against the federal government. Additionally, three federal inquiries into issues related to the contamination have been conducted--one by a Marine Corps-chartered panel and two by the Environmental Protection Agency (EPA). Members of the expert panel that the National Academy of Sciences convened generally agreed that many parameters of ATSDR's current study are appropriate, including the study population, the exposure time frame, and the selected health effects. ATSDR's study is examining whether individuals who were exposed in utero to the contaminated drinking water at Camp Lejeune between 1968 and 1985 were more likely to have specific birth defects or childhood cancers than those not exposed. DOD, EPA, and HHS provided technical comments on a draft of this report, which GAO incorporated where appropriate. Three members of an ATSDR community assistance panel for Camp Lejeune provided oral comments on issues such as other VOCs that have been detected at Camp Lejeune, and compensation, health benefits, and additional notification for former residents. GAO focused its review on TCE and PCE because they were identified by ATSDR as the chemicals of primary concern. GAO's report notes that other VOCs were detected. 
GAO incorporated the panel members' comments where appropriate, but some issues were beyond the scope of this report.
DOD’s programs for acquiring major weapon systems have taken longer, cost more, and delivered fewer quantities and capabilities than planned. We have documented these problems for decades. Most recently, we reported that 27 major weapon programs we have assessed since they began product development have experienced cost increases of nearly 34 percent over their original research, development, test, and evaluation (RDT&E) estimates, and increases of almost 24 percent in acquisition cycle time (see table 1). When cost and schedule problems occur in one program, DOD often attempts to pay for the poorly performing program by taking funds from others. Doing so has destabilized other programs and reduced the overall buying power of the defense dollar as DOD and the military services are forced to cut back on planned quantities or capabilities to stay within budget limitations. The F-22A Raptor program is a case in point: As costs escalated in the program, the number of aircraft the Air Force planned to buy was drastically reduced from 648 to 183. Similarly, as the Joint Tactical Radio System (JTRS) encountered development problems, the number of requirements was reduced or deferred by about one-third. As a result, several programs that were dependent on JTRS also had to make adjustments and go forward with alternative, less capable solutions. DOD’s approach to managing weapon system investments ultimately leaves less funding available for other competing needs within DOD as well as for other federal priorities. Taking into account the differences between commercial product development and weapons acquisitions, we have recommended that DOD adopt a knowledge-based, incremental approach to developing and producing weapon systems.
This type of approach requires program officials to demonstrate that critical technologies are mature, product designs are stable, and production processes are in control at key junctures in the acquisition process. DOD has three major processes involved in making weapon system investment decisions. These processes, depicted in figure 1, are the Joint Capabilities Integration and Development System (JCIDS), for identifying warfighting needs; the Planning, Programming, Budgeting and Execution (PPBE) system, for allocating resources; and the Defense Acquisition System (DAS), for managing product development and procurement. Much of our prior work has focused on identifying commercial best practices that could be used to improve the Defense Acquisition System— from the point just before product development starts onward. In this report, however, we look at earlier stages in DOD’s investment process— from the point where gaps in warfighting capability are assessed in JCIDS through the point where alternative solutions to resolve those gaps are analyzed under the DAS (see fig. 1). To ensure they achieve a balanced mix of executable development programs, the successful commercial companies we reviewed use a disciplined and integrated approach to prioritize market needs and allocate resources. This approach, known as portfolio management, requires companies to view each of their investments from an enterprise level as contributing to the collective whole, rather than as independent and unrelated. With this enterprise viewpoint, companies can effectively (1) identify and prioritize market opportunities and (2) apply available resources to potential products to select the best mix of products to exploit the highest priority—or most promising—opportunities. Ultimately, each of the companies we reviewed seeks to achieve a balanced portfolio that maximizes the return on investments and moves the company toward achieving its strategic goals and objectives.
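The "best mix" selection described above can be viewed as a resource-constrained optimization: given candidate products with estimated costs and expected values, choose the combination that maximizes value within the available budget. The sketch below is illustrative only; the product names, costs, and values are hypothetical assumptions, and a simple 0/1 knapsack formulation stands in for the richer judgments the companies actually apply.

```python
# Illustrative only: a 0/1 knapsack stands in for the richer judgment the
# companies apply. All names, costs, and expected values are hypothetical.

def select_portfolio(products, budget):
    """products: list of (name, cost, expected_value) tuples; budget is in
    the same (arbitrary) cost units. Returns (best_value, chosen_names)."""
    # dp[b] = (best total value achievable with budget b, names chosen)
    dp = [(0, [])] * (budget + 1)
    for name, cost, value in products:
        # iterate budgets downward so each product is selected at most once
        for b in range(budget, cost - 1, -1):
            candidate = dp[b - cost][0] + value
            if candidate > dp[b][0]:
                dp[b] = (candidate, dp[b - cost][1] + [name])
    return dp[budget]

# Hypothetical candidate products (cost and value in arbitrary units)
candidates = [
    ("core upgrade", 4, 10),        # strike-zone investment
    ("new market entry", 5, 12),    # existing product, new market
    ("white-space concept", 6, 14), # innovative, higher risk
    ("incremental refresh", 3, 6),  # low-cost sustaining work
]
value, chosen = select_portfolio(candidates, budget=10)
```

With these assumed figures, the model funds the core upgrade and the white-space concept and leaves the other candidates deferred, illustrating how a fixed budget forces the go/no-go trade-offs the companies described.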
This type of approach depends on strong governance with committed leadership, clearly aligned responsibility, and effective accountability at all levels of the organization. As depicted in figure 2, a portfolio management approach begins with an enterprise-level identification and definition of market opportunities and then the prioritization of those opportunities within resource constraints. Once opportunities have been prioritized, companies draft initial business cases for alternative product ideas that could be developed to exploit each of the highest priority opportunities. Each alternative product proposal—represented by a black dot—enters a gated review process. At each review gate, product proposals are assessed against corporate resources, established criteria, competing products, and the goals and objectives of the company as a whole. As alternatives pass through each review gate, the number is expected to decrease, until only those alternatives with the greatest potential to succeed make it into the product portfolio. To make informed decisions about what market opportunities to target, the companies we reviewed first establish a strategy that lays out the overall goals, objectives, and direction for the company. As part of their strategy, companies identify enterprise-level sales and profit targets, strategic business areas they want to focus on, the extent to which current products and new development efforts will support their growth objectives, and how they will allocate resources across business units and functional areas. This strategy provides a framework for the companies’ investment decisions. Within this framework, companies conduct a series of market analyses to develop a comprehensive understanding of the market environment, including product trends, technology trends, and customer needs. IBM, for example, follows a structured market planning process to identify, prioritize, and target attractive market segments.
The first phase of this process, called Market Definition, focuses on understanding the marketplace, including identifying potential customers and their needs. During this phase, IBM examines the marketplace and technology environments and identifies attractive market segments that contain potential market opportunities—where customer wants or needs exist. Each segment is categorized into one of four areas based on the needs of the customers and the company’s product offerings (see fig. 3): “strike zone,” “traditional,” “pushing the envelope,” and “white space.” The strike zone represents IBM’s core business—market segments where IBM has an established customer base that it is successfully serving with existing product offerings. In contrast, white space represents market segments of new customers with wants and needs that are new and different for IBM. White space opportunities often require discovery and innovation. The traditional and pushing the envelope areas fall between the strike zone and white space. Traditional opportunities exist when new customers could be attracted to an existing market—one IBM is already active in—by modifying or enhancing existing products or services. Pushing the envelope opportunities exist where the needs of current customer groups move them into a new market segment. These attractive market segments are prioritized during the next phase of IBM’s process, known as the Capability Assessment phase. During this phase, each segment’s overall attractiveness and potential profitability are assessed, along with IBM’s available resources—like capital, cash, and current products—and its competitive position within each segment. This analysis leads to the selection of targeted market segments. Motorola emphasizes the importance of targeting the right market segments at the enterprise level to ensure that a balanced mix of project and resource investments is maintained.
Officials noted that excessively focusing on segments that require new and innovative products can result in long cycle times, wasted money, and lost opportunities elsewhere. Likewise, critical opportunities can be lost when too much emphasis is placed on simply continuing to invest in old markets with old products. According to the officials we spoke with, the current investment mix for Motorola’s Government and Enterprise Mobility Solutions business unit is roughly 70-20-10, where 70 percent of its projects and resources are dedicated to maintaining its core business, while 20 percent are invested in pursuing new markets with existing products or introducing new or enhanced products into existing markets, and the remaining 10 percent are dedicated to discovering new markets and new products. As part of their market analyses, companies increasingly refine their understanding of who their customers are and what they need. For several of the companies we met with, determining the needs of their customers is complex because they have multiple groups of customers to consider. For example, Eli Lilly has four customer groups with diverse needs: patients, doctors, insurance companies, and government regulators. This complexity is compounded when considering that success in a worldwide market is critically dependent on a company’s ability to operate within different governmental systems, laws, and regulations, and across regional markets. Several of the companies we reviewed use a variety of methods—including interviews, surveys, focus groups, and concept tests—to actively engage their customers and help determine what they need. Some companies also observe customer behaviors to identify unstated wants and needs that, if met—assuming corporate knowledge and resources allow—could actually exceed customer expectations.
While companies actively seek customer input to identify products that show the most promise and satisfy customer needs, customers generally do not identify specific products to be developed. Once companies have identified and prioritized their market opportunities, they follow a disciplined process to assess the costs, benefits, and risks of potential product alternatives and allocate resources to achieve a balanced portfolio that spreads risk across products, aligns with the company’s strategic goals and objectives, and maximizes the company’s return on investment. At an early stage, each alternative product is expected to be accompanied by an initial business case that contains knowledge-based information on strategic relevance and estimates of cost, technology maturity, and the cycle time for getting the product from concept to market. To ensure comparability across alternatives, companies require initial business case information to be developed in a transparent manner, to use specific standards, and to report estimates within certain levels of confidence or allowable deviations. Each of the companies we reviewed also stressed the importance of having multiple management review points, or gates, at early phases to assess and prioritize alternative products. As products move through review gates, from ideas, to more concrete concepts, to the start of development where a final business case is made, companies expect uncertainties—which are typically inherent in the early phases—to be addressed and estimates to become more precise. Consequently, the number of viable alternatives tends to decrease at each review gate as those with the lowest potential for success and least value are terminated or deferred, while those that are poised to succeed and provide the best value are approved to proceed (illustrated in fig. 2). Companies emphasized that making tough go/no-go decisions is critical to keeping a balanced portfolio.
Over time, as potential new products are identified, companies review them against other product investments (proposed and existing) and rebalance their portfolios based on those that add the most value. The companies we visited each follow a disciplined, gated review process to ensure that they commit to development programs that help balance the portfolio and that are executable given available corporate resources. This allows companies to avoid committing to more programs than their resources can support and to ensure stability in the programs they invest in. Although the number of review gates prior to the start of full-scale product development varied between companies—ranging from four at Procter & Gamble to eight at Motorola—they all required potential products to follow an established, disciplined process and meet specified criteria at each review point. For example, Caterpillar assesses product alternatives at four review gates prior to the start of development—three of which were recently added to enhance the rigor of its investment decision making. Each alternative must be supported by a draft business case that includes quantifiable data that can be compared with specific standards and used to determine if the related product can move past that gate. At each gate, alternatives are reviewed to ensure that knowledge about critical technologies, life-cycle costs, product reliability, and product affordability is being acquired and that the product contributes to achieving the company’s strategic goals and objectives. Because developing a new drug is costly and time consuming, Eli Lilly requires that the data supporting potential new drugs meet high standards so that managers can make sound, informed investment decisions.
Each potential new drug must be supported by an initial business case that contains information about safety and efficacy; forecasted revenue; expected unit demand; capital, medical, supply and material, development, and selling and marketing expenses; and general administrative costs. The initial business case must also identify critical success factors, state the probability of technical success, and provide a timeline that details when major milestone events are expected and how long it will take to get the associated drug to market. Eli Lilly assesses, approves, and funds proposed new drugs incrementally. At each milestone review, a contract is established between the project team and a gatekeeper committee; the contract specifies deliverables, time frames, and the costs to get to the next milestone. Once this contractual agreement is reached, the budget is allocated for the entire phase. The gatekeeper committees expect each new drug proposal to achieve an 80-percent confidence level in its cost and schedule estimates for the next phase. This high level of confidence is achievable in large part because final budget estimates are not developed by project teams until 2 months prior to the milestone review. Projects are terminated at early points in the review process when it is determined that their critical success factors cannot be achieved. Because Eli Lilly’s projects typically have a high degree of technical risk, only about 1 percent of those that start early development actually make it to the marketplace. Motorola officials also emphasized the importance of having sound information when assessing potential new products. They noted that a process without sound information will not produce good outcomes. Therefore, Motorola’s Government and Enterprise Mobility Solutions business unit expects potential products to be supported by initial business cases containing data that meet specific standards and levels of confidence at each review gate.
For example, cost estimates for potential products are developed in several phases and are expected to increase in confidence with each successive phase. Early in the investment planning, when an initial business case is first drafted, the confidence parameters are generous, ranging from as much as 75 percent higher to as much as 25 percent lower than what the project will likely end up costing. By the time a product alternative reaches the beginning of product development, when a final business case is made, Motorola expects the cost estimates to be at confidence levels of 10 percent higher and 5 percent lower. Proposed products that fail to meet the specified criteria at early review gates are either terminated or sent back to further mature and reenter the review process from the beginning. The companies we reviewed use a variety of portfolio management tools and methods to inform the investment and resource allocation decisions they make at each review gate. Some companies employ scoring methods, using experts to rate products based on a number of factors—such as strategic fit, risk, and economic value—and use that information to prioritize alternative products. Another common tool plots alternative products on a decision matrix that compares factors such as costs and benefits, or risks and rewards of competing alternatives. Using this type of matrix, alternative products are often represented by circles, where the size of the circles provides information about key constraints such as available annual resources or the estimated annual costs for each alternative. For example, figure 4 compares risk and expected rewards by plotting competing alternatives on a matrix. Alternatives that fall into the upper left quadrant are high risk and low reward, while alternatives that fall into the lower right quadrant are low risk and high reward. 
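A risk-reward matrix of the kind shown in figure 4 amounts to a simple classification of each alternative by its risk and reward scores, with the circle size carrying a constraint such as estimated annual cost. The following sketch is purely illustrative: the 0-10 scoring scale, the midpoint threshold, and the three alternatives are hypothetical assumptions, not data from any company GAO reviewed.

```python
# Illustrative sketch of a risk-reward decision matrix: each alternative is
# classified by quadrant, with estimated annual cost playing the role of the
# circle size in figure 4. Scores, threshold, and products are hypothetical.

THRESHOLD = 5.0  # assumed midpoint of a 0-10 scoring scale

def quadrant(risk, reward):
    """Return the quadrant label for a (risk, reward) score pair."""
    vertical = "high risk" if risk > THRESHOLD else "low risk"
    horizontal = "high reward" if reward > THRESHOLD else "low reward"
    return f"{vertical}, {horizontal}"

alternatives = [
    # (name, risk score, reward score, estimated annual cost in $M)
    ("Alternative A", 8.0, 3.0, 12),  # would plot in the upper left
    ("Alternative B", 2.5, 8.5, 20),  # would plot in the lower right
    ("Alternative C", 6.0, 7.0, 35),
]

for name, risk, reward, cost in alternatives:
    print(f"{name}: {quadrant(risk, reward)} (est. ${cost}M/yr)")
```

In this hypothetical set, Alternative B lands in the low-risk, high-reward quadrant that managers would ordinarily favor, while the balance considerations the companies described would still argue for keeping some higher-risk bets in the mix.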
By weighing risk against rewards and considering constraints such as annual resources or annual cost, this tool provides critical information and a structured means to help managers make informed decisions. Company officials at Procter & Gamble emphasized the importance of selecting a balanced mix of products to pursue. They noted that pursuing only low-risk and high-reward products at the expense of more innovative, higher-risk products could cause the company to miss out on opportunities to improve its competitive standing in the marketplace. Likewise, excessive pursuit of higher-risk products with the potential for high returns could also result in lost opportunities elsewhere. Recognizing the inherent risks in pursuing a new development program—that overruns or underruns in one business case result in lost opportunity to invest resources in another worthwhile project—IBM permits products to deviate from their original business case estimates as long as the deviation is within established limits. These limits are specified in a contractual document resulting from negotiations between senior management and project managers and signed at the beginning of product development. Product development teams are expected to execute according to the contract; if established thresholds are breached, action is taken immediately to reassess the product within the context of the portfolio and determine whether it is still a relevant and affordable investment to pursue. Successful portfolio management requires strong governance with committed leadership that empowers portfolio managers to make decisions about the best way to invest resources and holds those managers accountable for the outcomes they achieve.
The companies we reviewed indicated that it is critical to have commitment from the top leaders of the organization and recognition at all levels that what is best for the company must be a priority, and not simply what is best for a particular business unit or product line. In addition, the companies emphasized that roles and responsibilities for implementing portfolio management, including the designation of who is responsible for product investment decisions and oversight, must be clearly defined. Because portfolio managers are on the front line, the companies we reviewed empower these managers to make product investment decisions and hold them accountable for outcomes, not just for individual products but also for the overall performance of their portfolios. To support their portfolio managers, the companies encourage collaboration and communication, including sharing bad news early. Several companies also emphasized the importance of supporting their portfolio managers with cross-functional teams, composed of representatives from the key functional areas within the company—such as science and technology, marketing, engineering, and finance—to ensure that they are adequately informed when making investment decisions. To ensure accountability, companies often use incentives and disincentives, including promotion and termination. We have previously reported that high-performing organizations have monetary and other rewards that clearly link employee knowledge, skills, and contributions to achieving the organization’s goals and objectives. These organizations underscore the importance of holding individuals accountable, aligning performance expectations with organizational goals, and cascading those expectations down to lower levels. Companies stressed that the transformation to portfolio management takes time and requires not only process changes but also cultural changes throughout the company.
Eli Lilly emphasized that a key to making its portfolio management process work is having a single committee with a high-level official in charge responsible for making product investment decisions. Previously, the company had a multi-layered committee structure in place, and decisions were made based on reaching a consensus—an approach that was viewed as cumbersome and lengthy. Eli Lilly also ensures accountability by directly linking management and employee bonuses to the overall success of the company. Individual employee performance objectives are aligned with specific company objectives, such as meeting budgetary goals, time frames, and data quality levels for a given project. Achievement of individual employee objectives is measured periodically to provide feedback to the employee. Eli Lilly officials stressed that having the right performance metrics in place is important because, ultimately, a company gets what it measures; it must therefore be sure to measure the right things. Motorola considers accountability to be the critical factor in making its portfolio management process successful. In addition, Motorola’s culture is not averse to reporting bad news to management. Project managers are encouraged to report problems early so that they can be addressed before they get out of control. Senior managers, however, are not intimately involved in the day-to-day decision making for individual products. That responsibility, in nearly every case, is delegated to the business unit general manager. The general manager of a business unit is held accountable for ensuring that the products within his or her unit succeed at all levels. The general manager is responsible for holding product managers accountable for the attainment of critical knowledge at key points and the performance of their individual products overall. General managers and product managers can be fired for not meeting objectives.
Motorola believes that if managers are held accountable for results, then they have more desire to get it right. Although the military services fight together on the battlefield as a joint force, they do not identify warfighting needs and make weapon system investment decisions together. DOD has taken steps to identify warfighting needs through a more joint requirements process, but the department’s service-centric structure and fragmented decision-making processes are at odds with the integrated, portfolio management approach used by successful commercial companies to make enterprise-level investment decisions. Consequently, DOD has less assurance that its weapon system investment decisions address its most important warfighting needs and are affordable in the context of its overall fiscal resources. In addition, DOD commits to products earlier than the companies we reviewed and with far less knowledge about their cost and feasibility. This leads to poor program outcomes and funding instability, as the department attempts to fix troubled programs by taking funds from others. Although recent DOD policy emphasizes a more joint approach to identifying and prioritizing warfighting needs, DOD’s service-centric structure and fragmented decision-making processes hinder the policy’s successful implementation. This policy, which introduced the JCIDS process, calls for a wider range of stakeholders than before, including more customer (i.e., combatant command) involvement; introduces new methodologies intended to foster jointness; and groups warfighting needs into eight functional areas based on warfighting capabilities—such as netcentric, force application, and battlespace awareness—that cut across the military services and defense agencies. The JCIDS process emphasizes early attention to the fiscal implications of newly identified needs, including identifying ways to pay for new capabilities by divesting the department of lower priority or redundant capabilities. 
Despite these provisions, assessments of warfighting needs continue to be driven by the services and to be based on investment decision-making processes that do not function together to ensure that DOD pursues needs that its resources can support. The military services identify warfighting needs individually, and department-level organizations are not optimized to integrate the services’ results or evaluate their fiscal implications early on. Historically, this approach has contributed to duplication in weapon systems and equipment that does not interoperate. At the department level, Functional Capability Boards oversee each of the eight functional areas, reviewing the services’ assessments, and providing recommendations to the Joint Requirements Oversight Council (JROC), which leads the JCIDS process. However, defense experts and DOD officials report that the Functional Capability Boards do not have the staff or analytical resources required to effectively evaluate service assessments within the context of the broader capability portfolio and assess whether the department can afford to address a particular capability gap. Several recent studies have recommended that DOD increase joint analytical resources for a less stovepiped understanding of warfighting needs. In addition, the boards lack the authority to allocate resources and to make or enforce decisions to divest their capability area of existing programs to pay for new ones— authority successful companies provide to their portfolio managers. Finally, some defense experts contend that the service ties of JROC’s members—that is, the services’ Vice Chiefs and the Assistant Commandant of the Marine Corps—reinforce service stovepipes. To better ensure a more joint perspective, they recommend a more diverse JROC, with representatives from other department-level organizations and the combatant commands. 
Resource allocation decisions are made through a separate process—the Planning, Programming, Budgeting, and Execution system (PPBE)—which hinders the department’s ability to weigh the relative costs, benefits, and risks of investing in new weapon systems early on. Within the PPBE system, the individual military services are responsible for budgeting and allocating resources under authority that is commonly understood to be based on Title 10 of the United States Code. PPBE is structured by military service and defense program, although the department integrates data on the services’ current and projected budget requests under 11 crosscutting mission areas called Major Force Programs. The cross-cutting view provided by the Major Force Program structure is intended to facilitate a strategic basis for resource allocation, allowing the Secretary of Defense to more easily see where the greatest mission needs are and to re-allocate funds to meet those needs regardless of which service stands to gain or lose. However, we have reported in the past that the Major Force Program structure has not provided sufficient visibility in certain mission areas. Moreover, although they cut across the services, the program mission areas are not consistent with the more recently established capability areas used in the JCIDS process, and as a result, it is difficult to relate resources to capabilities. For example, in prior work, we observed that the Major Force Programs contain large numbers of programs with varied capabilities, complicating comparisons needed to understand defense capabilities and associated trade-off decisions. We have recommended that DOD report funding levels for defense capabilities in its Future Years Defense Program report to the Congress, which is currently organized by the Major Force Programs.
In addition, our analysis of DOD’s investment accounts—which pay for developing, testing, and buying weapon systems and other equipment—indicates that DOD generally does not allocate resources on a strategic basis. Figure 5 illustrates that the service allocations as a percentage of the department’s overall investment budget have remained relatively static for the 25-year period we examined, even though DOD’s strategic environment and warfighting needs have changed dramatically during that time, with the demise of the cold war and the emergence of the global war on terror. In contrast, successful commercial companies using portfolio management would expect to see their resource allocations across business areas shift to reflect changes in the marketplace and the competitive environment. PPBE and JCIDS are led by different organizations (see fig. 6), as is the third of the three processes involved in DOD’s weapon system investment decisions, the Defense Acquisition System (DAS), making it difficult to hold any one person or organization accountable for investment outcomes. The 2006 Quadrennial Defense Review highlighted the need for governance reforms, and a 2006 study commissioned by DOD observed that the budget, acquisition, and requirements processes are not connected organizationally at any level below the Deputy Secretary of Defense, concluding that this structure induces instability and erodes accountability. The Under Secretary of Defense for Acquisition, Technology, and Logistics (USD/AT&L) has stated that weapon system investment decisions are a shared responsibility, and, therefore, no one individual is accountable for these decisions. At a broader, strategic level, we have stated in prior work that DOD has lacked sustained leadership and accountability for various department-wide management reform efforts, including the establishment of an effective risk management approach as a framework for decision making.
This approach would link strategic goals to plans and budgets, assess the value and risks of various courses of action as a tool for setting investment priorities and allocating resources at the department level, and use performance measures to assess outcomes. To address the lack of sustained leadership, we have supported legislation to create a chief management official at DOD. [Figure 6 key: the Planning, Programming, Budgeting, and Execution system (calendar-driven) is the responsibility of OSD (Policy, PA&E, and Comptroller). JCS (JROC) — Joint Chiefs of Staff (Joint Requirements Oversight Council); OSD (Policy, PA&E, and Comptroller) — OSD (Policy, Program Analysis and Evaluation, and Comptroller); OSD (AT&L) — OSD (Acquisition, Technology, and Logistics).] The Office of the Secretary of Defense (OSD) does not assess the funding implications of a proposed program at the front end of the investment process, when it is initially validated by JROC. JCIDS is a continuous, need-driven process that unfolds in response to warfighting needs as they are identified. However, PPBE is a calendar-driven process composed of phases that occur over a 2-year cycle; thus, OSD’s formal review of a proposed program is often not synchronized with JROC’s and can occur several years later. Nevertheless, according to Joint Staff and AT&L officials we met with, proposed programs begin to gain momentum when they are validated by JROC, and they become very difficult to stop. These officials indicated that momentum begins to gather because the services start programming and budgeting for the proposed capability right away to secure funding, generally several years before actual product development begins and before OSD formally reviews the services’ programming and budgeting proposals. In the interim, the services have not only budgeted for their proposed programs, but established a program office, conducted their Analysis of Alternatives, and identified specific user requirements.
OSD’s programming and budgeting review occurs at the back end of the investment process, when it is difficult and disruptive to make changes, such as terminating existing programs to pay for new, higher priority programs. These practices have contributed to the department starting more programs than its resources can support. DOD defers much of the additional cost of its programs into the future, resulting in what some have characterized as a fiscal bow wave (illustrated in fig. 7). This bow wave has grown at a pace that greatly exceeds DOD’s annual funding increases. The cost remaining for DOD’s major weapons programs increased almost 135 percent between 1992 and 2006, while the department’s annual funding level only increased 57 percent over that same time period. If this trend goes unchecked, Congress will likely be faced with a difficult choice: pull funds from other high-priority federal programs to support DOD’s acquisitions or accept less warfighting capability than originally promised. DOD commits to a solution to address a warfighting need earlier in the investment process than commercial companies do and before it has adequate knowledge about cost and technical feasibility. Proposed options for resolving a gap in military capability are submitted in an Initial Capabilities Document (ICD). DOD guidance states that this document should contain a range of approaches based in part on the cost and technological feasibility posed by the approaches, laying the foundation for a more detailed Analysis of Alternatives to be conducted under the Defense Acquisition System. 
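The divergence between the two growth figures cited above—almost 135 percent growth in the cost remaining for major weapon programs versus 57 percent growth in annual funding between 1992 and 2006—can be illustrated with simple arithmetic. The starting index values in the sketch below are hypothetical, not GAO data; only the two growth rates come from this report.

```python
# Illustrative arithmetic only: the starting index values are
# hypothetical assumptions. Only the growth rates (almost 135 percent
# in cost remaining, 57 percent in annual funding, 1992-2006) come
# from the report.

cost_remaining_1992 = 100.0   # hypothetical index value
funding_1992 = 100.0          # hypothetical index value

cost_remaining_2006 = cost_remaining_1992 * (1 + 1.35)  # +135 percent
funding_2006 = funding_1992 * (1 + 0.57)                # +57 percent

# Years of current funding needed to retire the cost remaining, under
# the simplifying (unrealistic) assumption that all funding goes to
# these programs.
years_1992 = cost_remaining_1992 / funding_1992
years_2006 = cost_remaining_2006 / funding_2006

print(round(years_2006 / years_1992, 2))  # the backlog grew roughly 1.5x relative to funding
```

Under these assumptions, the funding needed to retire the program backlog grows about 50 percent faster than the funding actually available, which is the "bow wave" dynamic the report describes.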
In addition, JROC is to receive a briefing on the ICD that follows a standard format and addresses such issues as linkage of the proposal to strategic guidance; the time frame within which the capability is needed; the threat/operational environment; risks and assumptions (including the risk associated with proceeding and not proceeding with solutions to each); and a description of the best materiel and non-materiel approaches based upon cost, efficacy, performance, technology maturity, and risk. Although DOD guidance calls for the analysis of a solution’s cost and feasibility, we found that ICDs contained little of this type of information. Several DOD officials we met with, who are directly involved in the JCIDS process, did not believe cost and feasibility information was mandated at this point. In our review of 14 unclassified ICDs approved by JROC from 2003-06, we found that 11 did not contain acquisition cost estimates and 12 did not contain estimates of the technical feasibility of proposed solutions. We also found that JCIDS guidance does not specify the level of accuracy sought in cost and feasibility estimates, and a white paper that does provide recommendations in this regard is advisory. We found that ICDs generally focused on the strategic, or operational, relevance of proposed solutions, but a lack of guidance and an evolving methodology have raised questions about the accuracy of data supporting those assessments. JCIDS uses new joint warfighting concepts to translate top-level military strategy into the capabilities a commander might need on the battlefield. The joint concepts underpin a capabilities-based approach to identifying requirements, in which analyses are expected to focus on broad military capabilities rather than service-specific platforms. However, the joint concepts and capabilities-based assessments are works in progress.
The concepts are being updated due to concerns about their scope, and guidance on conducting a capabilities-based analysis has been lacking. Several DOD officials we met with stated that assessments vary in their rigor, and a senior Joint Staff official said that training on requirements development is one of three central challenges at present. In January 2007, we reported that DOD officials described concerns about the analytical framework for a capabilities-based assessment on joint seabasing, which could lead to inaccurately identifying gaps in implementing the concept. Enhancing a seabasing capability is expected to be costly and could be the source of billions of dollars of investment if DOD chooses an option involving the development of new ships. DOD does not consistently follow a disciplined review process to ensure that proposed solutions are making progress toward an executable development program, although DOD policy emphasizes that such reviews are necessary. DOD’s policy identifies several key decision points prior to starting a new weapon system development program: an initial decision point, where the Initial Capabilities Document is reviewed, validated, and approved by the JROC; a Concept Decision review, where entry into the concept refinement phase of the Defense Acquisition System should be authorized; and a Milestone A decision point, where a preferred solution and a technology development strategy should be reviewed and approved. Since Initial Capabilities Documents generally do not contain information on cost and technical feasibility, the JROC does not have a sufficient basis for making go/no-go decisions at the initial decision point.
In the 4 years since JCIDS was implemented, nearly all of the warfighting needs identified by the services and submitted for review in an ICD have been validated and sent into the acquisition pipeline for further analysis as potential programs, which calls into question whether go/no-go decisions are the point of this first key gate. Information on cost and feasibility is generally developed after the ICD is approved and proposed solutions undergo further refinement through an Analysis of Alternatives (AOA). An AOA should compare alternative solutions in terms of life-cycle cost, schedule, and operational effectiveness, leading up to the identification of a preferred alternative. However, officials from PA&E and the Joint Staff indicate that AOAs often make a case for a single preferred solution. Several of them raised other concerns about AOAs, such as a failure to set up trade-off discussions and concerns about analytical rigor, length, and timeliness. In any case, the next review points—the Concept Decision and Milestone A—are often skipped; thus, the opportunity to review an evolving business case and to make go/no-go decisions is bypassed. In prior work, we found that 80 percent of the programs we reviewed entered the Defense Acquisition System at Milestone B without holding any prior major reviews, such as a Milestone A review. Such reviews are intended to provide acquisition officials with an opportunity to assess whether program officials had the knowledge needed to develop an executable business case. Senior officials with OSD confirmed that this is a common practice among defense acquisition programs. We concluded that this practice eliminates a key opportunity for decision makers to assess the early product knowledge needed to establish a business case that is based on realistic cost, schedule, and performance expectations.
In addition, we found that programs are regularly approved to begin development even though officials reported levels of knowledge below the criteria suggested in DOD’s acquisition policy. There is, then, generally little department-level oversight between the point at which an ICD is approved and when system-level requirements are validated and product development is initiated. At this point, as we indicated earlier, there is generally no turning back, because the services have invested considerable time and money, established a budget, and formed a constituency for a proposed program, and decision makers become reluctant to terminate a program or send it back for further study. In response to the 2006 Quadrennial Defense Review and other recent acquisition reform studies, DOD has undertaken several key, interrelated initiatives intended to strengthen the department’s approach to investment decisions. The initiatives include (1) taking a new approach to reviewing proposed concepts that will provide decision makers with an early opportunity to evaluate trade-offs among alternative approaches to meeting a capability need, (2) testing portfolio management approaches in selected capability areas to facilitate more strategic choices about how to allocate resources across programs, and (3) using capital budgeting as a potential means to stabilize program funding. While promising, these initiatives do not fundamentally change DOD’s existing service-centric framework for making weapon system investment decisions. To address a perceived gap between DOD’s major decision-making processes and provide a department-level means to assess potential solutions (materiel and non-materiel) to fill a validated capability need, DOD is testing a new approach to a Concept Decision review, which will take place after a warfighting need is validated by the JROC. 
This new approach is intended to focus attention on the affordability and feasibility of potential solutions and generate early cost, schedule, and performance trade-offs prior to the point of a significant investment commitment. As currently proposed, the Concept Decision will be informed by a newly required Evaluation of Alternatives that will integrate the Functional Solutions Analysis conducted under JCIDS with the Analysis of Alternatives conducted under the acquisition system and lay out the relative merits and limitations of potential solutions. Furthermore, concept decision reviews will be implemented by a tri-chair board consisting of lead decision makers from the JCIDS, PPBE, and DAS processes. While promising, the Concept Decision review largely reinstitutes a review point that already existed but was only intermittently used. For Concept Decision reviews to be effective, DOD will have to establish enforcement and accountability mechanisms to ensure the reviews are actually implemented. In addition, the extent to which the concept reviews can achieve desired effects will depend on what authority Concept Decisions carry and who will be held accountable, particularly in light of the service-dominated investment structure that currently exists. The department has also begun to pilot-test capability-based portfolio management, selecting four joint capability areas to focus on—joint command and control, joint net-centric operations, battlespace awareness, and joint logistics. The intent is to enable the department to develop and manage capabilities, as opposed to simply individual programs, and enhance the integration and interoperability within and across sets of capabilities. Each portfolio is being structured somewhat differently to help the department determine how best to proceed with portfolio management. All, however, are intended to focus initially on existing programs and to operate within DOD’s existing decision-making framework.
The portfolios are largely advisory and will, as a first step, provide input to decisions made through the JCIDS, PPBE, and DAS processes. At this point, the capability portfolio managers have not been given direct authority to manage fiscal resources and make investment decisions. Without portfolios in which managers have authority and control over resources, DOD is at risk of continuing to develop and acquire systems in a stovepiped manner and of not knowing whether its systems are being developed within available resources. DOD is also examining the use of capital accounts as a potential means of stabilizing program funding, which has long been cited as a significant issue in program management. This capital budgeting pilot initiative is in the early stages of planning, and the specifics of how such accounts will be implemented are being developed, but the intent is for DOD to commit a set amount of funding for the development portion of a project and hold to that commitment by not adjusting funding up or down until the product is delivered. In addition to resource constraints, programs would be given a fixed amount of time to get from one milestone to the next. If successful, this initiative could represent a step toward stabilizing long-term costs within major defense acquisition programs, as well as a strengthening of the ability of program managers to conduct long-term planning and control costs. However, for this initiative to be effective, DOD will need to overcome long-standing problems it has had in starting programs without sufficient knowledge of the costs, requirements, and technologies needed to develop proposed weapon systems. Unless this changes, it is unlikely that capital accounts will lead to increased program stability. 
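The capital-account idea described above, committing a fixed amount of money and a fixed amount of time at the start of development and holding to both until delivery, can be sketched in abstract form. The class below is purely illustrative: the names, figures, and behavior are assumptions used to make the concept concrete, not a depiction of any actual DOD mechanism.

```python
# Illustrative sketch only: models the capital-budgeting concept of a
# fixed funding commitment and fixed milestone schedule that are not
# adjusted until the product is delivered. All names and numbers are
# hypothetical.

class CapitalAccount:
    def __init__(self, committed_funds, months_to_next_milestone):
        self.committed_funds = committed_funds
        self.months_remaining = months_to_next_milestone
        self.delivered = False

    def adjust_funding(self, delta):
        # Under the pilot concept, funding is not moved up or down
        # mid-development; adjustments are refused until delivery.
        if not self.delivered:
            raise ValueError("funding is locked until delivery")
        self.committed_funds += delta

    def advance_month(self, spent):
        # Draw down money and schedule; overruns in either surface
        # immediately rather than being absorbed by reprogramming.
        self.committed_funds -= spent
        self.months_remaining -= 1
        return self.committed_funds >= 0 and self.months_remaining >= 0

acct = CapitalAccount(committed_funds=120, months_to_next_milestone=24)
ok = acct.advance_month(spent=5)
print(ok, acct.committed_funds)  # True 115
```

The design point the sketch highlights is the one the report makes: a fixed commitment only produces stability if the initial cost and schedule estimates are well founded, since the account refuses mid-course adjustments by construction.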
While DOD has increasingly strengthened its ability to operate as a joint force on the battlefield, the department’s organizational structures, processes, and practices for planning and acquiring weapon systems are not similarly joint. Put simply, DOD largely continues to base its investment decisions on service-driven analyses that do not provide an enterprise-level understanding of overall warfighting needs and on individual platforms rather than broader sets of capabilities. In contrast, successful commercial companies use an integrated portfolio management approach to focus early investment decisions on products collectively at an enterprise level and to ensure there is a sound basis to justify the commitment of resources. By following a disciplined, integrated process—where the relative pros and cons of market opportunities and competing product proposals are assessed based on available resources and customer needs, and where tough decisions about which investments to pursue are made—companies are able to reduce duplication between business units, move away from organizational stovepipes, and effectively support each new development program they commit to. Until DOD takes a joint, portfolio management approach to weapon system acquisition—with functionally aligned entities that have the requisite responsibility, authority, and control over resources—it will continue to struggle to effectively prioritize warfighting needs, make informed trade-offs, and achieve a balanced mix of weapon systems that are affordable, feasible, and provide the best military value to the warfighter. Committing to more programs than the budget can support and approving programs based on insufficient knowledge to effectively manage risks will further delay providing critical capabilities to the warfighter and lead to lost opportunities to address other current and emerging needs.
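The portfolio discipline described above, weighing the relative costs, benefits, and risks of competing proposals and funding only those that fit within available resources, can be sketched as a simple selection procedure. Everything in the sketch is a hypothetical assumption: the candidate names, scores, weights, and budget are illustrative, and no company or DOD process is claimed to use this particular scoring rule.

```python
# Hypothetical sketch of portfolio prioritization: score each candidate
# investment on benefit and risk, rank by score per unit of cost, and
# fund candidates greedily within a fixed resource constraint.
# All names, weights, and numbers are illustrative assumptions.

candidates = [
    # (name, cost, benefit 0-10, risk 0-10; lower risk is better)
    ("A", 40, 9, 3),
    ("B", 25, 6, 2),
    ("C", 50, 8, 7),
    ("D", 20, 5, 4),
]
budget = 80

def score(benefit, risk):
    # Simple illustrative criterion: reward benefit, penalize risk.
    return benefit - 0.5 * risk

selected, spent = [], 0
for name, cost, benefit, risk in sorted(
        candidates, key=lambda c: score(c[2], c[3]) / c[1], reverse=True):
    if spent + cost <= budget:  # go/no-go against remaining resources
        selected.append(name)
        spent += cost

print(selected, spent)  # ['B', 'A'] 65
```

Even in this toy form, the mechanism forces the trade-off the report says DOD's processes avoid: candidate C is rejected outright for its risk-adjusted cost, and D is deferred because the portfolio, not the individual proposal, is the unit of decision.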
We recommend that the Secretary of Defense implement an enterprise-wide portfolio management approach to making weapon system investments that integrates the assessment and determination of warfighting needs with available resources and cuts across the services by functional or capability area. To ensure the success of such an approach, the Secretary should establish a single point of accountability at the department level with the authority, responsibility, and tools to ensure that portfolio management for weapon system investments is effectively implemented across the department. In addition, the Secretary should ensure that the following commercial best practices, identified in this report, are incorporated: implement a review process in which needs and resources are integrated early and in which resources are committed incrementally based on the achievement of specific levels of knowledge at established decision points; prioritize programs based on the relative costs, benefits, and risks of each investment to ensure a balanced portfolio; require increasingly precise cost, schedule, and performance information for each alternative that meets specified levels of confidence and allowable deviations at each decision point leading up to the start of product development; establish portfolio managers who are empowered to prioritize needs, make early go/no-go decisions about alternative solutions, and allocate resources within fiscal constraints; and hold officials at all levels accountable for achieving and maintaining a balanced, joint portfolio of weapon system investments that meet the needs of the warfighter within resource constraints. We also recommend that the Secretary take steps to support department-level decision makers and portfolio managers by developing a stronger joint analytical capability to assess and prioritize warfighting needs. DOD provided us with written comments on a draft of this report. The comments appear in appendix II.
DOD concurred with the majority of our recommendations and partially concurred with two. Generally, in responding to these recommendations, DOD stated that it is undertaking several initiatives and pilot efforts to improve the department’s approach to investment and program decision making, and that implementation of any new business rules will be contingent upon the outcome of these initiatives. The department also stated that it is experimenting with portfolio management, related authorities and organizational constructs, and integrated decision-making processes. We believe that these initiatives and pilot efforts may be steps in the right direction, but we are concerned that they do not go far enough to address the systemic cultural and structural problems identified in this report. DOD has attempted many similar acquisition reform efforts over the past 3 decades, including significant revisions to both defense requirements and acquisition policy. However, despite these efforts, weapon system acquisition programs continued to experience cost overruns, schedule slips, and performance shortfalls. The department’s current initiatives are likely to face the same fate because they do not fundamentally change DOD’s service-centric framework or sufficiently integrate its decision-making processes for making weapon system investments. DOD did not provide comments regarding our recommendation that the Secretary establish a single point of accountability at the department level with the authority, responsibility, and tools to ensure that portfolio management for weapon system investments is effectively implemented across the department. We believe that a single point of accountability is necessary to successfully implement a portfolio management approach and integrate DOD’s fragmented decision-making processes under one senior official who is accountable for weapon system investment outcomes.
We further believe that our recommendations would better position DOD to make tough, knowledge-based choices among potential weapon system investments. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please call me at (202) 512-4841 (sullivanm@gao.gov). Key contributors to this report were John Oppenheim, Assistant Director; Lily Chin; John Krump; Matthew Lea; Travis Masters; Sean Seales; Karen Sloan; Susan Woodward; and Rebecca Yurman. This report examines the Department of Defense’s (DOD) requirements identification and resource allocation processes for major weapons systems. The primary focus is on identifying successful private-sector principles and practices that could be adopted by DOD to help improve stability in weapon system acquisition programs. Specifically, our objectives were to (1) identify best practices of successful commercial companies for ensuring that they pursue the right mix of programs to meet the needs of their customers within resource constraints and (2) compare DOD’s enterprise-level processes for investing in weapon systems to those practices. Our work was conducted between March 2006 and February 2007, in accordance with generally accepted government auditing standards. We analyzed the outputs of DOD’s investment decision-making support processes—the requirements determination process known as JCIDS and the resource allocation process known as PPBE—using criteria established in DOD policy and in previous GAO reports. 
We identified impacts of the existing processes by analyzing quantitative and qualitative data on DOD spending trends, conducting interviews with DOD officials, and reviewing previous reports by GAO and by other knowledgeable audit and research organizations. In addition, we met with officials representing the Office of the Secretary of Defense, Joint Staff, and military services. At each of these locations, we conducted interviews that helped us describe the current condition of DOD’s requirements identification and resource allocation processes. We also reviewed DOD and military service policies and funding documents pertaining to the DOD requirements identification process and resource allocation decisions for major weapons systems. Specifically, we reviewed the contents of 14 unclassified Initial Capability Documents that were finalized after June 24, 2003—the publication date for the JCIDS instruction—to assess the extent to which they contained cost and technical feasibility information. Those 14 ICDs were unclassified, weapon system-related ACAT I, II, or III ICDs that were contained in the Joint Staff requirements database. We relied on previous GAO reports that highlight both the symptoms and causes of unstable requirements and funding in DOD weapons acquisition programs. A list of these reports can be found at the end of this report. In addition, we reviewed recent key studies and reports addressing acquisition reform issues by the Center for Strategic and International Studies, the Institute for Defense Analyses, the U.S. Naval War College, the Defense Acquisition Performance Assessment Project, the Joint Defense Capabilities Study Team, the Joint C4ISR Decision Support Center, the Defense Science Board, and the 2001 and 2006 Quadrennial Defense Reviews.
We also reviewed pertinent literature from authoritative corporate, academic, and professional organizations, to identify commercial best practices and processes that could be used by DOD to improve its weapon system investment decision-making processes. In addition, we conducted case studies of five leading commercial companies. In selecting them, we sought to identify companies that were recognized in the literature for best practices, had large and diversified portfolios of products, and make significant investments in the development and production of new products. For each of the companies, we interviewed management officials knowledgeable about their requirements identification and resource allocation activities, to gather consistent information about processes, practices, and metrics the companies use to help achieve successful product development outcomes. Below are descriptions of the five companies featured in this report: Motorola is a Fortune 100 global communications leader that provides seamless mobility products and solutions across broadband, embedded systems, and wireless networks. According to Motorola’s 2005 Corporate Profile, the company is the market leader in mission critical wireless communication systems, two-way radios, embedded telematics systems, digital set-top shipments, cable modem shipments, digital head-ends, embedded computer systems for communication applications, and CDMA infrastructure sales (excluding the United States), and is second worldwide in wireless handsets. Motorola achieved net sales of $31.323 billion and spent $3.060 billion on research and development in 2004. The corporation has approximately 68,000 employees, in 320 facilities, spanning 73 countries. We met with the management of Motorola’s Government & Enterprise Mobility Solutions and Global Telecom Solutions sectors in Schaumburg, Illinois.
International Business Machines (IBM) IBM is one of the world’s largest technological companies, spending about $3 billion annually on research and development activities. It is the largest supplier of hardware, software, and information technology services. With 3,248 U.S. patents, IBM earned more patents than any other company for the 12th consecutive year in 2004. In the past 4 years, IBM inventors received more than 13,000 patents—approximately 5,400 more than any other patent recipient. IBM has over 329,000 employees worldwide. We met with managers from IBM Integrated Product Development (IPD) in Somers, New York. Procter & Gamble (P&G) Procter & Gamble Corp. (P&G) is a leading producer of consumer goods. It currently leads in global sales and market share among all fabric care, baby care, feminine care, and hair care products. It currently has over 130,000 employees in more than 80 countries. Twenty-two of its brands have annual gross sales exceeding $1 billion each. In fiscal year 2005/2006, P&G invested $2.075 billion, or 3 percent of net sales, in research and development (R&D). This ranks it among the top 20 largest research and development investors among U.S.-based companies. P&G has more Ph.D.s working in labs around the world than the combined science and engineering faculties of Harvard, MIT, and Berkeley. We met with the management of P&G’s New Initiative Delivery team in Cincinnati, Ohio. Eli Lilly is a global pharmaceutical company and one of the world’s largest corporations. It was founded over 130 years ago and currently employs approximately 42,000 people worldwide, including 13,991 employed at its headquarters in Indianapolis, Ind. Approximately 8,336 employees (19 percent of the total work force) are engaged in research and development (R&D); clinical research is conducted in over 50 countries; there are R&D facilities in 9 countries; and manufacturing plants in 13 countries. Its products are marketed in 143 countries.
Lilly’s net sales in 2005 were $14.6 billion. Eli Lilly strives to grow sales by 6 percent to 7 percent each year. In 2005, $3 billion was spent on R&D, a $334.4 million increase from the previous year. Currently, R&D represents 20.7 percent of sales. Lilly’s total R&D investment in the last 5 years from continuing operations was $12.5 billion. We met with managers from Eli Lilly’s Corporate Headquarters in Indianapolis, Ind. Caterpillar is a technology leader and the world’s leading manufacturer of construction and mining equipment, diesel and natural gas engines, and industrial gas turbines. In 2005, its total sales and revenues were $36.3 billion, and its total R&D expenditures exceeded $1 billion, compared with $20.5 billion sales and $696 million R&D in 2001. Between 2001 and 2005, the average return on equity of its stockholders’ shares more than doubled. Caterpillar has over 85,000 employees, and over 105,000 people are employed by Caterpillar’s dealers worldwide. We met with managers responsible for Caterpillar’s New Product Introduction (NPI) process in Peoria, Illinois. Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005. Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. 
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998. Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.

Over the next several years, the Department of Defense (DOD) plans to invest $1.4 trillion in major weapons programs. While DOD produces superior weapons, GAO has found that the department has failed to deliver weapon systems on time, within budget, and with desired capabilities. While recent changes to DOD's acquisition policy held the potential to improve outcomes, programs continue to experience significant cost and schedule overruns.
GAO was asked to examine how DOD's processes for determining needs and allocating resources can better support weapon system program stability. Specifically, GAO compared DOD's processes for investing in weapon systems to the best practices that successful commercial companies use to achieve a balanced mix of new products, and identified areas where DOD can do better. In conducting its work, GAO identified the best practices of: Caterpillar, Eli Lilly, IBM, Motorola, and Procter and Gamble. To achieve a balanced mix of executable development programs and ensure a good return on their investments, the successful commercial companies GAO reviewed take an integrated, portfolio management approach to product development. Through this approach, companies assess product investments collectively from an enterprise level, rather than as independent and unrelated initiatives. They weigh the relative costs, benefits, and risks of proposed products using established criteria and methods, and select those products that can exploit promising market opportunities within resource constraints and move the company toward meeting its strategic goals and objectives. Investment decisions are frequently revisited, and if a product falls short of expectations, companies make tough go/no-go decisions. The companies GAO reviewed have found that effective portfolio management requires a governance structure with committed leadership, clearly aligned roles and responsibilities, portfolio managers who are empowered to make investment decisions, and accountability at all levels of the organization. In contrast, DOD approves proposed programs with much less consideration of its overall portfolio and commits to them earlier and with less knowledge of cost and feasibility. 
Although the military services fight together on the battlefield as a joint force, they identify needs and allocate resources separately, using fragmented decision-making processes that do not allow for an integrated, portfolio management approach like that used by successful commercial companies. Consequently, DOD has less assurance that its investment decisions address the right mix of warfighting needs, and, as seen in the figure below, it starts more programs than current and likely future resources can support, a practice that has created a fiscal bow wave. If this trend goes unchecked, Congress will be faced with a difficult choice: pull dollars from other high-priority federal programs to fund DOD's acquisitions or accept gaps in warfighting capabilities. |
Agricultural trade can be classified into two categories—bulk commodities and high-value products. Bulk commodities are raw agricultural products that have little value added after they leave the farm gate. High-value products, by contrast, either require special care in packing and shipping or have been subjected to processing. High-value products constitute the fastest growing component of the world’s agricultural trade. By 1998, they are expected to represent 75 percent of world agricultural trade, according to FAS. The United States’ greatest strength in agricultural exports has traditionally been in bulk commodities, and it has consistently operated as the world’s largest exporter of them. However, the member nations of the European Union (EU) constitute the world’s largest exporter of high-value agricultural products (see app. I for a list of the 12 top exporters of high-value products in 1992). Because purchasing decisions for bulk commodities are based largely on price, success in exporting them depends primarily on maintaining a cost advantage in their production and transport. Because HVP purchasing decisions depend on product attributes, such as brand-name packaging and quality image, in addition to price, success in the export of HVPs is based more on the exporter’s skill in developing and marketing the product. Exporting countries have a variety of programs and organizations to assist exporters in developing markets for high-value products. While the recent multilateral trade agreement of the Uruguay Round (UR) of the General Agreement on Tariffs and Trade (GATT) would limit the extent to which countries could provide subsidies to the agricultural sector, it would not limit the extent to which countries could fund market development activities. As the UR agreement reduces export subsidies, market development efforts may become a more important component in increasing agricultural exports. 
To obtain information to meet our objectives, we conducted telephone interviews and met in the United States with officials of foreign marketing organizations and the embassies of the four European countries we reviewed. We also analyzed reports by, and conducted telephone interviews with, FAS attachés posted in the four countries. To learn about the activities of the United States, we met with representatives of USDA’s FAS and Economic Research Service (ERS) in Washington, D.C., and conducted telephone interviews with representatives of regional trade associations. Appendix V contains a more detailed description of our objectives, scope, and methodology. We did our work between February and August 1994 in accordance with generally accepted government auditing standards. We obtained oral agency comments from FAS. These comments are discussed at the end of this letter. The structure for foreign market development of HVPs is fundamentally different in the United States than in three of the four European countries we reviewed. France, Germany, and the United Kingdom each rely primarily on a centralized marketing organization to promote their agricultural exports. The organizations are funded either entirely through user fees and levies on private industry, as with Germany, or through a combination of private and public funds, as with France and the United Kingdom. Both public and private sector representatives play a role in managing the marketing organizations. They conduct a number of different types of promotions, provide an array of services to exporters, and promote nearly all high-value products and commodities. The Netherlands does not have a single primary market development organization but rather a number of independent commodity boards and trade associations. These boards and associations, in coordination with the government, do most of that country’s foreign market development. (See app. 
II for a more detailed description of foreign market development by these four countries.) In France, the Société pour l’Expansion des Ventes des Produits Agricoles et Alimentaires (SOPEXA) is responsible for foreign market development. Jointly owned by the French government and private trade organizations, SOPEXA promotes French food and wine in about 23 foreign countries. The Ministry of Agriculture has ultimate control over SOPEXA and sits on its board of directors, but French officials said the Ministry has minimal influence over SOPEXA’s day-to-day operations and activities. In addition to SOPEXA, France has a quasi-government agency, the Centre Français du Commerce Extérieur (CFCE), that assists exporters of industrial and agricultural products by doing market research and providing foreign market information. Like France, Germany promotes most of its HVP exports through a quasi-governmental agency, the Centrale Marketinggesellschaft der deutschen Agrarwirtschaft (CMA). CMA maintains offices in eight foreign countries and generically promotes most German food and agricultural products. CMA is run by representatives of the German food industry and is guided by a council composed of both industry and government representatives. The wine and forestry industries have their own marketing boards, which also do foreign market development. Most HVP foreign market development in the United Kingdom is undertaken by Food From Britain, an organization created by the British government to centralize and coordinate agricultural marketing activities. It is controlled by a council appointed by the Ministry of Agriculture, Fisheries and Food and has offices in seven foreign countries. The Meat and Livestock Commission also conducts foreign market development activities of its own. In the Netherlands, several independent commodity boards and trade associations, which operate without government control, administer most activities for HVP foreign market development. 
The Ministry of Agriculture, Nature Management and Fisheries helps coordinate the promotional activities of the commodity boards and trade associations and also conducts some foreign market development activities of its own. In the United States, not-for-profit trade associations have primary responsibility for conducting their own marketing activities in foreign countries. USDA provides funding to support their export activities through its Market Promotion Program (MPP) and the Foreign Market Development Program, also known as the Cooperator Program. MPP provides money to the trade associations to conduct generic promotions or to fund private companies’ brand-name promotions. MPP activities are predominantly for high-value products. The Cooperator Program provides financial and technical support to U.S. cooperators, representing about 40 specific commodity sectors, who work at overseas offices to increase long-term access to and demand for U.S. products. The program is mostly aimed at promoting bulk commodities, but a portion of the program’s budget supports HVP market development (see app. III for a more detailed discussion of U.S. foreign market development). USDA’s Foreign Agricultural Service administers these programs and provides funding, but the individual trade associations themselves are generally responsible for carrying out the export activities. FAS conducts some promotional activities of its own and provides some services to exporters through its AgExport Services Division and its foreign attaché service. Although the Europeans, according to FAS, provide greater total support for agriculture in general, the four European countries we reviewed spent less in 1993 on foreign market development than did the United States, both in absolute terms and in proportion to their HVP exports. 
The total spending in 1993 on HVP market development in the four competitor countries varied considerably, from about $13 million for the United Kingdom to about $76 million for France, based on estimates by FAS and information provided by the foreign marketing organizations. The United States, by comparison, spent about $151 million in 1993 on generic or nationally oriented foreign market development for high-value products, mostly through the Market Promotion Program. Available information shows that the United States spent more than the four European countries, not just in terms of absolute dollars, but also as a percentage of HVP exports. While the United States spent about $65 in 1993 on foreign market development for every $10,000 in HVP exports, France spent about $30, the Netherlands about $21, Germany about $19, and the United Kingdom about $15 (see table 1). Because so many factors influence a country’s export levels, these figures alone are not sufficient to make judgments about the effectiveness of the countries’ foreign market development programs. The four European countries we reviewed relied largely on private funds, rather than government expenditures, in 1993 for their HVP market development. The European marketing organizations that promoted high-value products included various types of public-private partnerships. In all cases, however, the organizations were financed, at least in part, either through user fees or a system of mandatory levies on the agricultural industry. The sectors of agribusiness that paid the levies varied by country. They typically included producers but also sometimes included processors, wholesalers, or traders. The annual government expenditures for foreign market development ranged from zero to $29 million in 1993 in the four European countries we reviewed, according to estimates by FAS and information provided by the foreign marketing organizations. 
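The per-$10,000 comparison in table 1 is a simple ratio of promotion spending to export value. A minimal sketch in Python; the export base below is back-calculated from the report's roughly $65-per-$10,000 U.S. figure for illustration and is not a published total:

```python
def promotion_intensity(spending, exports):
    """Promotion spending per $10,000 of HVP exports."""
    return spending / exports * 10_000

# 1993 U.S. HVP promotion spending, from the report (USD).
us_spending = 151_000_000
# Illustrative export base implied by the ~$65 per $10,000 rate;
# the report does not publish this total.
us_exports = 23_200_000_000

print(round(promotion_intensity(us_spending, us_exports)))  # prints 65
```

The same ratio, applied to each country's spending and export base, yields the figures cited for France, the Netherlands, Germany, and the United Kingdom.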
The portion of the country’s total foreign market development that was funded by government expenditures ranged from zero percent to 42 percent. By contrast, the U.S. government spent about $121 million on HVP foreign market development in 1993, representing about 80 percent of all U.S. spending on foreign market development for HVPs. In France, about 38 percent of total foreign market development for agriculture was funded by government expenditures in 1993. About 35 percent of the 1993 budget of SOPEXA, the export promotion agency, was provided by the government; the remainder came from producers or producer groups who benefited from SOPEXA’s promotions and who collected funds from producer levies. Government expenditures also funded 65 percent of CFCE, the market information agency, with the remainder coming from user fees. In Germany, CMA, the quasi-governmental export promotion agency, did not receive public funds in 1993. For many years, the agency has been financed entirely through compulsory levies on agricultural producers and processors. In the United Kingdom, about 42 percent of total foreign market development for HVPs was paid for by public funds. Food From Britain received about 60 percent of its funding in 1993 from government expenditures, with the rest coming from commodity marketing boards and user fees from individual exporters who requested services. The Meat and Livestock Commission, which also does export promotion of its own, received about 12 percent of its budget from government expenditures. In the Netherlands, more than 90 percent of foreign market development expenditures in 1993 were made by commodity boards and trade associations, which raised money through levies on producers and traders. The remaining market development activity was conducted by the Netherlands’ Ministry of Agriculture, Nature Management and Fisheries. 
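The public-funding percentages above are straightforward shares of government outlays in total market development spending. For example, applying the report's U.S. figures ($121 million of roughly $151 million):

```python
def government_share(gov_spending, total_spending):
    """Government share of market development spending, in percent."""
    return gov_spending / total_spending * 100

# 1993 U.S. figures from the report (USD).
us_gov = 121_000_000
us_total = 151_000_000

print(round(government_share(us_gov, us_total)))  # prints 80
```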
In the United States, government expenditures funded an estimated 80 percent of total HVP foreign market development in 1993. FAS paid 81 percent of the cost of HVP activities sponsored under the Market Promotion Program, while the trade organizations sponsoring the activities contributed the remainder. FAS also contributed 73 percent of the cost of HVP activities for the Cooperator Program. In addition, FAS funded about 62 percent of the $6.1 million in activities sponsored by its AgExport Services Division, which assists in HVP foreign market development. (See app. IV for information about the five countries’ marketing organizations and estimates of their expenditures.) Foreign market development is only one of many factors that influence a country’s success in exporting HVPs. For example, the government expenditures previously cited include spending on foreign market development activities, such as market research and consumer promotion, but do not include spending on other kinds of agricultural support and export programs, such as direct export subsidies, domestic subsidies, and price supports. These programs also serve, directly or indirectly, to increase HVP exports, and spending for such programs is estimated by FAS to be far higher in Europe than it is in the United States. According to FAS, total agricultural support spending in 1992 was $46.7 billion in the European Union, compared with $10.9 billion in the United States. Furthermore, the bulk of agricultural exports of the four European countries we reviewed went to other European Union members. For several reasons, an EU producer is likely to have an easier time exporting to another EU country than a U.S. producer would. The EU’s Common Agricultural Policy has created a unified set of trade regulations and eliminated most tariff and nontariff trade barriers among members, making trade between EU members somewhat comparable to U.S. interstate commerce.
European producers are also more likely to be familiar with the consumer preferences, customs, and distribution systems of other European countries. Moreover, because of the vast domestic market in the United States, U.S. producers may be less likely to seek out export markets than European producers, who have smaller domestic markets and often have a long history of exporting a substantial portion of their production. The U.S. and European marketing organizations we reviewed carry out similar foreign market development activities, though the emphasis they put on the various activities differs. The activities conducted generally included market research, consulting services, trade servicing, consumer promotions, advertising, and sponsorship at trade shows. Market research is often considered the foundation of market development. It is conducted to determine the potential demand for a particular product, to assess consumer preferences, or to develop statistical information on agricultural trade and economics. Consulting services may be offered to provide advice to exporters on appropriate promotions and to help exporters learn about the laws, regulations, and requirements of particular markets. Trade servicing involves developing trade leads to match up exporters with appropriate importers. In addition, some organizations advertise their country’s products in trade journals and other publications in order to support retail promotion strategies and to enhance the image and awareness of their country’s products. Consumer-oriented activities include in-store promotions, where advertising materials and product samples are distributed at point-of-sale locations. These activities may serve either to promote a particular product or to enhance the overall image of a country’s food products. Additionally, some organizations provide retail stores with advertising displays and decorations. 
Some countries’ marketing organizations also do direct consumer advertising on television, on radio, or in print. Finally, marketing organizations assist their exporters by coordinating or subsidizing their participation in international trade shows. Trade shows allow exporters to test a market, meet potential buyers, and monitor the competition. In general, the U.S. programs place more emphasis on consumer advertising than do the European programs. MPP funds are often used by U.S. companies or producer groups to finance product advertising campaigns, which tend to be an expensive form of market promotion. Representatives of the European marketing organizations generally told us that consumer advertising was too costly, given their limited budgets. They focused more on influencing wholesalers and usually placed a higher priority on trade shows. They attempted to reach consumers more through vehicles such as in-store promotions than through direct media advertising. In our 1990 review of foreign market development organizations, we reported that many other nations integrated their foreign market development activities—coordinating their market research, promotional activities, and production capabilities to meet consumer demand in foreign markets. U.S. producers and producer groups did not coordinate their activities in the same manner, nor did they strategically target markets as did some of their competitors. This may be because European marketing organizations, such as France’s SOPEXA and Germany’s CMA, promote nearly all agricultural products and thus can develop integrated marketing plans for increasing their countries’ HVP exports. The system of foreign market development in the United States is far more decentralized. As we have reported, USDA has been slow to develop a USDA-wide marketing strategy that would assist U.S. producers in becoming more coordinated and marketing oriented in their approach to promoting U.S. exports. 
The European organizations we reviewed perform little formal, quantified evaluation of their HVP promotion efforts. Representatives of foreign market development organizations we contacted all said that quantifying the overall success of foreign market development is extremely difficult because of the large number of variables that affect a country’s exports. Instead, evaluations of foreign market development programs are based more on the subjective observations and judgments of marketing staff and on the satisfaction of producers involved in the promotional efforts. Representatives of the foreign organizations said they do such things as conduct surveys of trade show participants to gauge their satisfaction or measure the number of buyer contacts that result from an advertisement in a trade journal. USDA attempts to measure the effectiveness of activities funded under MPP by evaluating the results of participants’ ongoing activities against measurable goals provided in the participants’ funding proposals. USDA said it is also developing a methodology that would identify activities that have not been effective in expanding or maintaining market share. The methodology would include a statistical analysis that would compare export sales with a participant’s MPP expenditures in both overall and individual markets. In addition, an FAS official told us that an econometric model is under development that would evaluate the effectiveness of MPP participants’ expenditures in increasing U.S. exports. We discussed the information in this report with FAS officials, including the Administrator, on September 9, 1994, and incorporated their comments where appropriate. FAS generally agreed with the report’s findings. FAS emphasized that the UR agreement may lead European governments to increase their funding of foreign market development in the near future. 
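The statistical comparison FAS described, relating participants' export sales to their MPP expenditures, could in its simplest form be an ordinary least-squares fit. The sketch below uses synthetic numbers and is not FAS's actual model or data:

```python
def ols_fit(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Synthetic data: MPP outlays and export sales ($ millions).
mpp_spend = [1.0, 2.0, 3.0, 4.0]
export_sales = [10.0, 14.0, 18.0, 22.0]

slope, intercept = ols_fit(mpp_spend, export_sales)
print(slope, intercept)  # prints 4.0 6.0
```

A real econometric model would control for the many other variables, such as exchange rates and income growth, that the report notes make export effects hard to isolate.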
FAS said some European governments may try to shift funds previously spent on export subsidies, which would be restricted under this agreement, to market promotion programs, which would not be directly restricted under the UR agreement. FAS said it will be closely monitoring such spending as the UR agreement goes into effect. We are sending copies of this report to the Secretary of Agriculture and other interested parties. We will make copies available to others upon request. If you have any questions concerning this report, please contact me at (202) 512-4812. The major contributors to this report are listed in appendix VI. Foreign market development organizations are characterized by various organizational and funding structures. The organizations generally consist of some form of public-private partnership funded by some combination of government funds, user fees, and legislated levies on private industry. We reviewed the organizations that do foreign market development in four European countries: (1) France, (2) Germany, (3) the United Kingdom, and (4) the Netherlands. France was the world’s second largest high-value product exporter in 1992, with more than 70 percent of its agricultural exports going to other European Union (EU) countries. Wine, cheese, and meats were among its major HVP exports. France has a very strong food-processing sector and enjoys a reputation for aggressive and well-focused foreign market development. The majority of French HVP foreign market development is conducted by the Société pour l’Expansion des Ventes des Produits Agricoles et Alimentaires (SOPEXA), whose mission is the expansion of export markets for French food and wine. SOPEXA is jointly owned by the French government and various agricultural trade organizations, but the government has minimal influence on its day-to-day operations. 
About 35 percent of SOPEXA’s budget came from the Ministry of Agriculture in 1993; the remainder came from producers or producer groups that benefited from SOPEXA’s promotions and that collect funds from product levies. SOPEXA has offices in about 23 foreign countries. Its foreign market development expenditures in 1993 were about $68.6 million. The Centre Français du Commerce Extérieur (CFCE) is a quasi-government agency that seeks to increase exports by providing statistical information, market studies, and consulting services to French exporters. About 15 percent of its activity relates to food and agricultural exports. CFCE provides its services to both public agencies, such as the Ministry of Agriculture and SOPEXA, and to private exporters, who funded about 35 percent of CFCE’s budget in 1993 through user fees for the services they receive. CFCE spent about $7 million of its budget in 1993 on activities related to food and agriculture. It had about 180 foreign offices, the majority staffed by French commercial attachés. The U.S. Department of Agriculture’s Foreign Agricultural Service (FAS) office in Paris said it expects the French government to continue its strong support for foreign market development through SOPEXA and that there is likely to be an increased emphasis on the promotion of wine, cheese, and other highly processed food items. At the same time, government funding for CFCE is expected to gradually decline as private sector financing of its activities increases. Germany is a sophisticated food processor and was the world’s fourth largest exporter of high-value agricultural products in 1992. Its major HVP exports included milk, cheese, meats, and processed foods. More than two-thirds of its agricultural exports went to other EU countries in 1993. 
Foreign market development is conducted by the Centrale Marketinggesellschaft der deutschen Agrarwirtschaft (CMA), a quasi-governmental agency that does national generic promotions for most German food and agricultural products. CMA is funded by mandatory legislated levies on agricultural producers and processors, as well as by user fees. It is directed by a supervisory board composed of representatives of industry and government. The board appoints CMA’s top managers. CMA is known for the breadth of its services, which it provides to a broad spectrum of the German agricultural industry, including the producer, processor, retailer, and exporter. Its marketing efforts include not just product promotion but also market research and distribution. CMA represents nearly all agricultural products, with the exception of wine and forest products; these have their own independent marketing boards. In 1993, CMA spent an estimated $32 million on foreign market development. All of its funds came from the private sector through mandatory levies; the government provided no funds for foreign market development of HVPs. In addition, the Wine Marketing Board spent approximately $6.3 million, and the Forestry Marketing Board an estimated $400,000, on foreign market development. The United Kingdom was the world’s ninth largest HVP exporter in 1992. Its major high-value product exports included alcoholic beverages and meat, and more than 60 percent of its 1992 agricultural exports went to other EU nations. Promotion of agricultural exports is mostly the responsibility of Food From Britain, a quasi-governmental corporation created in 1983 to centralize and coordinate the United Kingdom’s agricultural marketing efforts. The organization is overseen by a council composed of industry representatives who are appointed by the Minister of Agriculture, Fisheries and Food. Food From Britain has offices in seven foreign countries. 
Its activities include retail promotions, seminars, media events, and consulting services. In 1993, Food From Britain spent about $7.9 million on foreign market development. About 60 percent of its budget came from a government grant. Most of the rest came from contributions by commodity organizations and from user fees from exporters who benefited from Food From Britain’s services. A separate organization, the Meat and Livestock Commission, also does foreign market development, totaling about $4.6 million in 1993. The United Kingdom’s HVP foreign market development spending is small relative to the other European countries and the United States. According to the FAS office in London and British officials we spoke with, there has been increasing public discussion in the United Kingdom about the need to more aggressively promote agricultural exports. Food From Britain is expected to focus almost exclusively on export promotion, leaving domestic promotional activities to other organizations, according to its U.S. representative. In addition, according to an official from the Ministry of Agriculture, Fisheries and Food, the government is committed to reducing Food From Britain’s reliance on government funding and to having it rely more on private industry funding. At the same time, however, FAS said the British government is considering starting a new program to help fund foreign market development for agricultural products. The Netherlands was the world’s largest exporter of high-value agricultural products in 1992. Its major exports were meats, dairy products, fresh vegetables, and cut flowers. More than 70 percent of its total agricultural exports went to EU countries in 1992. The majority of Dutch HVP foreign market development is conducted through commodity boards or industry trade associations, such as the Dutch Dairy Bureau and the Flower Council of Holland.
These organizations are independent of government control and are funded through levies on producers, wholesalers, processors, and traders. The combined export promotion budgets for these organizations in 1993 were estimated at $59.3 million. Most of the promotional activity was targeted at other EU nations. The Dutch Ministry of Agriculture, Nature Management and Fisheries also conducts generic promotional activities, usually through its agricultural attachés who are posted abroad. About 50 percent of the Ministry’s $4.8 million promotion budget in 1993 was used to organize trade exhibitions, while trade advertising and in-store promotions accounted for about 15 percent. Other activities included trade servicing and basic market research. The Ministry and the private commodity organizations work together closely and frequently collaborate in their market development activities. Officials at the Dutch embassy in Washington, D.C., and Dutch promotion organizations told us that because of budget constraints, the Dutch government is moving toward privatization of agricultural export promotion. The subsidy provided to exhibitors at trade shows has been reduced, and the Ministry has diminished its role in market reporting and trade leads, increasingly turning those functions over to the private trade associations. Most foreign market development of U.S. high-value products is carried out by not-for-profit trade associations. These associations typically promote a single commodity or group of related commodities and are generally financed, at least in part, through producer contributions. The trade associations receive most of their funds for foreign market development from the U.S. government via USDA’s Market Promotion Program (MPP). MPP operates through not-for-profit trade associations that either conduct generic promotions themselves or pass funds along to for-profit companies to conduct brand-name promotions. 
Promotional activities under MPP include such things as market research, retail promotions, and consumer advertising. In 1993, U.S. producers and trade associations spent about $136.5 million on overseas promotional activities for high-value products sponsored by MPP. The government paid about 81 percent of this cost, or about $111 million, and program participants, who are required to share in the cost of their promotions, paid the rest. In addition, some not-for-profit trade associations conducted foreign market development activities that were independent of MPP. USDA’s Foreign Market Development Program, also known as the Cooperator Program, provides funds to about 40 cooperators representing specific U.S. commodity sectors. These cooperators work overseas to build markets for U.S. agricultural products through such activities as trade servicing, technical assistance, and consumer promotions. The Cooperator Program supports mostly bulk products, but a portion of funds for the program went to promote high-value products in 1993. USDA funding for high-value product market development under the Cooperator Program was about $6 million in 1993. The cooperators contributed an additional $2 million. USDA’s Foreign Agricultural Service has the primary government role in market development and promotion of HVPs. In addition to administering MPP and the Cooperator Program, FAS provides a variety of services to U.S. agricultural exporters. Among these are a database that lists foreign buyers and U.S. suppliers, FAS publications that highlight trade opportunities in export markets, and support or sponsorship of international trade shows. In addition, FAS maintains an overseas network of about 75 attaché posts and agricultural trade offices that seek to increase U.S. agricultural exports through commodity reporting, trade policy work, and market development activities. 
FAS’ AgExport Services Division provided about $3.8 million in 1993 to these overseas offices to fund such promotional activities as trade shows, trade servicing, consumer promotions, publications, and trade missions. Through user fees, exporters contributed an additional $2.3 million to these activities. Our objectives were to obtain information on (1) the organizations in France, Germany, the United Kingdom, and the Netherlands that help develop foreign markets for high-value agricultural products; (2) the programs of the U.S. Department of Agriculture for HVP foreign market development; and (3) the ways in which these five countries’ programs are evaluated to determine their effectiveness in increasing exports. To obtain information on the foreign market development efforts of France, Germany, the United Kingdom, and the Netherlands, we conducted telephone interviews and met in the United States with officials of foreign marketing organizations and the embassies of the four countries. We also analyzed reports by, and conducted telephone interviews with, FAS attachés posted in the four countries. In addition, we conducted a literature search of information related to foreign market development. To learn about the foreign market development activities of the United States, we reviewed relevant FAS documents and legislation and met with FAS representatives in Washington, D.C. In addition, we conducted telephone interviews with representatives of regional trade associations and met with representatives of USDA’s Economic Research Service. Because of the inherent difficulties in determining the effectiveness of market development activities, and because of our limited time frame, we did not evaluate the effectiveness of the European or U.S. market development activities.
However, we did discuss with the countries’ program officials in the United States how they evaluated and determined the effectiveness of their programs. We also discussed U.S. efforts to evaluate promotion activities with representatives of FAS and reviewed documents describing their evaluation methodologies. Our review looked only at market development and promotion activities, which include such activities as consumer promotion, trade servicing, and market research. It did not include export subsidies, domestic subsidies, and internal price supports. The budgets of some of the foreign market development organizations we reviewed, such as Food From Britain and the Netherlands’ Ministry of Agriculture, Nature Management and Fisheries, were public information. However, the expenditures of certain other foreign organizations, such as Germany’s CMA and France’s SOPEXA, were not made public. We received estimates of their budgets from FAS staff overseas. We did not independently verify the budget estimates. We did, however, attempt to corroborate the estimates with representatives of the foreign organizations and with other sources. In some cases, the budgets of foreign market organizations did not clearly delineate between domestic versus export promotion, or bulk versus high-value product promotion. In these cases, we worked with FAS to provide a best estimate of the portion of the budget devoted to foreign market development of high-value products. There is no uniform scheme for classifying agricultural products, and there are various definitions for what constitutes a high-value product. The numbers used in this report for exports of U.S. and European HVPs are based on analysis by USDA’s Economic Research Service of data from the Food and Agriculture Organization of the United Nations. 
For the purposes of these 1992 export statistics, ERS’ definition of HVPs included semiprocessed foods, such as wheat flour and vegetable oil, but excluded certain products that did not meet ERS’ statistical definition of an agricultural product. Thus the HVP export data for 1992 did not include cigarettes, distilled spirits, fishery products, or forestry products. Trade statistics sometimes exclude intra-EU trade, since this trade is sometimes viewed as comparable to U.S. interstate commerce. However, we have included intra-EU trade in our trade statistics, since the European organizations we reviewed treat trade with other EU countries as foreign (as opposed to domestic) market development, and since a considerable portion of their export promotion activity is within the EU. C. Jeffrey Appel, Evaluator-in-Charge; Jason Bromberg, Evaluator. 
| Pursuant to a congressional request, GAO reviewed the structure, funding, and promotional activities of the organizations that develop foreign markets for high-value agricultural products (HVP), focusing on the: (1) organizations in France, Germany, the United Kingdom, and the Netherlands that help develop foreign markets for HVP; (2) Department of Agriculture's (USDA) foreign market development programs; and (3) ways in which these countries' programs are evaluated to determine their effectiveness in increasing exports. GAO found that: (1) France, Germany, and the United Kingdom each have an integrated market development organization that provides an array of services and promotes most agricultural products; (2) the Netherlands relies primarily on independent commodity associations to promote its agricultural products; (3) all of the countries spent less on foreign market development than the United States in 1993; (4) because so many factors influence a country's export levels, information on promotion expenditures alone is not sufficient to determine the effectiveness of a country's foreign market development efforts; (5) the countries' foreign market development programs are financed mostly by the private sector, while U.S. foreign market development programs are coordinated by the USDA Foreign Agricultural Service; and (6) the market development organizations reviewed and the United States generally engage in the same kinds of promotional activities, including market research, trade shows, consumer promotions, and trade servicing. |
As we reported in July 2013, DHS has not yet fulfilled the 2004 statutory requirement to implement a biometric exit capability, but has planning efforts under way to report to Congress in time for the fiscal year 2016 budget cycle on the costs and benefits of such a capability at airports and seaports. Development and implementation of a biometric exit capability has been a long-standing challenge for DHS. Since 2004, we have issued a number of reports on DHS’s efforts to implement a biometric entry and exit system. For example, in February and August 2007, we found that DHS had not adequately defined and justified its proposed expenditures for exit pilots and demonstration projects and that it had not developed a complete schedule for biometric exit implementation. Further, in September 2008, we reported that DHS was unlikely to meet its timeline for implementing an air exit system with biometric indicators, such as fingerprints, by July 1, 2009, because of several unresolved issues, such as opposition to the department’s published plan by the airline industry. In 2009, DHS conducted pilot programs for biometric air exit capabilities in airport scenarios, and in August 2010 we found that there were limitations with the pilot programs—for example, the pilot programs did not operationally test about 30 percent of the air exit requirements identified in the evaluation plan for the pilot programs—that hindered DHS’s ability to inform decision making for a long-term air exit solution and pointed to the need for additional sources of information on air exit’s operational impacts. 
In an October 2010 memo, DHS identified three primary reasons why it has been unable to determine how and when to implement a biometric exit capability at airports: (1) The methods of collecting biometric data could disrupt the flow of travelers through airport terminals; (2) air carriers and airport authorities had not allowed DHS to examine mechanisms through which DHS could incorporate biometric data collection into passenger processing at the departure gate; and (3) challenges existed in capturing biometric data at the point of departure, including determining what personnel should be responsible for the capture of biometric information at airports. In July 2013, we reported that, according to DHS officials, the challenges DHS identified in October 2010 continue to affect the department’s ability to implement a biometric air exit system. With regard to an exit capability at land ports of entry, in 2006, we reported that according to DHS officials, for various reasons, a biometric exit capability could not be implemented without incurring a major impact on land facilities. For example, at the time of our 2006 report, DHS officials stated that implementing a biometric exit system at land ports of entry would require new infrastructure and would produce major traffic congestion because travelers would have to stop their vehicles upon exit to be processed. As a result, as of April 2013, according to DHS officials, the department’s planning efforts focus on developing a biometric exit capability for airports, with the potential for a similar solution to be implemented at seaports, and DHS’s planning documents, as of June 2013, do not address plans for a biometric exit capability at land ports of entry. Our July 2013 report found that since April 2011, DHS has taken various actions to improve its collection and use of biographic data to identify potential overstays. 
For example, DHS is working to address weaknesses in collecting exit data at land borders by implementing the Beyond the Border initiative, through which DHS and the Canada Border Services Agency exchange data on travelers crossing the border between the United States and Canada. Because an entry into Canada constitutes a departure from the United States, DHS will be able to use Canadian entry data as proxies for U.S. departure records. As a result, the Beyond the Border initiative will help address those challenges by providing a new source of biographic data on travelers departing the United States at land ports on the northern border. Our July 2013 report provides more information on DHS’s actions to improve its collection and use of biographic entry and exit data. In 2011, DHS directed S&T, in coordination with other DHS component agencies, to research long-term options for biometric air exit. In May 2012, DHS reported internally on the results of S&T’s analysis of previous air exit pilot programs and assessment of available technologies, and the report made recommendations to support the planning and development of a biometric air exit capability. In that report, DHS concluded that the building blocks to implement an effective biometric air exit system were available. In addition, DHS’s report stated that new traveler facilitation tools and technologies—for example, online check-in, self-service, and paperless technology—could support more cost-effective ways to screen travelers, and that these improvements should be leveraged when developing plans for biometric air exit. However, DHS officials stated that there may be challenges to leveraging new technologies to the extent that U.S. airports and airlines rely on older, proprietary systems that may be difficult to update to incorporate new technologies. 
Furthermore, DHS reported in May 2012 that significant questions remained regarding (1) the effectiveness of current biographic air exit processes and the error rates in collecting or matching data, (2) methods of cost-effectively integrating biometrics into the air departure processes (e.g., collecting biometric scans as passengers enter the jetway to board a plane), (3) the additional value biometric air exit would provide compared with the current biographic air exit process, and (4) the overall value and cost of a biometric air exit capability. The report included nine recommendations to help inform DHS’s planning for biometric air exit, such as directing DHS to develop explicit goals and objectives for biometric air exit and an evaluation framework that would, among other things, assess the value of collecting biometric data in addition to biographic data and determine whether biometric air exit is economically justified. DHS reported in May 2012 that it planned to take steps to address these recommendations by May 2014; however, as we reported in July 2013, according to DHS Office of Policy and S&T officials, the department does not expect to fully address these recommendations by then. In particular, DHS officials stated that it has been difficult coordinating with airlines and airports, which have expressed reluctance about biometric air exit because of concerns over its effect on operations and potential costs. To address these concerns, DHS is conducting outreach and soliciting information from airlines and airports regarding their operations. In addition, DHS officials stated that the department’s efforts to date have been hindered by insufficient funding. In its fiscal year 2014 budget request for S&T, DHS requested funding for a joint S&T-CBP Air Entry/Exit Re-Engineering Apex project. 
Apex projects are crosscutting, multidisciplinary, high-priority efforts requested by DHS components and intended to solve problems of strategic operational importance. According to DHS’s fiscal year 2014 budget justification, the Air Entry/Exit Re-Engineering Apex project will develop tools to model and simulate air entry and exit operational processes. Using these tools, DHS intends to develop, test, pilot, and evaluate candidate solutions. As of April 2013, DHS Policy and S&T officials stated that they expect to finalize goals and objectives for a biometric air exit system in the near future and are making plans for future scenario-based testing. Although DHS’s May 2012 report stated that DHS would take steps to address the report’s recommendations by May 2014, DHS officials told us that the department’s current goal is to develop information about options for biometric air exit and to report to Congress in time for the fiscal year 2016 budget cycle regarding (1) the additional benefits that a biometric air exit system provides beyond an enhanced biographic exit system and (2) costs associated with biometric air exit. However, as we reported in July 2013, DHS has not yet developed an evaluation framework, as recommended in its May 2012 report, to determine how the department will evaluate the benefits and costs of a biometric air exit system and compare it with a biographic exit system. According to DHS officials, the department needs to finalize goals and objectives for biometric air exit before it can develop such a framework, and in April 2013 these officials told us that the department plans to finalize these elements in the near future. However, DHS does not have time frames for when it will subsequently be able to develop and implement an evaluation framework to support the assessment it plans to provide to Congress. 
According to A Guide to the Project Management Body of Knowledge, which provides standards for project managers, specific goals and objectives should be conceptualized, defined, and documented in the planning process, along with the appropriate steps, time frames, and milestones needed to achieve those results. In fall 2012, DHS developed a high-level plan for its biometric air exit efforts, which it updated in May 2013, but this plan does not clearly identify the tasks needed to develop and implement an evaluation framework. For example, the plan does not include a step for developing the methodology for comparing the costs and benefits of biometric data against those for collecting biographic data, as recommended in DHS’s May 2012 report. Furthermore, the time frames in this plan are not accurate as of June 2013 because DHS is behind schedule on some of the tasks and has not updated the time frames in the plan accordingly. For example, DHS had planned to begin scenario-based testing for biometric air exit options in August 2013; however, according to DHS officials, the department now plans to begin such testing in early 2014. A senior official from DHS’s Office of Policy told us that DHS has not kept the plan up to date because of the transition of responsibilities within DHS; specifically, in March 2013, pursuant to the explanatory statement for DHS’s 2013 appropriation, DHS established an office within CBP that is responsible for coordinating DHS’s entry and exit policies and operations. This transition was still in process as of June 2013, and CBP told us that it planned to establish an integrated project team in July 2013 that will be responsible for more detailed planning for the department’s biometric air exit efforts. DHS Policy and S&T officials agreed that setting time frames and milestones is important to ensure timely development and implementation of the evaluation framework in accordance with DHS’s May 2012 recommendations. 
According to DHS officials, implementation of a biometric air exit system will depend on the results of discussions between the department and Congress after the department provides this assessment of options for biometric air exit. In summary, we concluded in our July 2013 report that without robust planning that includes time frames and milestones to develop and implement an evaluation framework for this assessment, DHS lacks reasonable assurance that it will be able to provide this assessment to Congress for the fiscal year 2016 budget cycle as planned. Furthermore, any delays in providing this information to Congress could further affect possible implementation of a biometric exit system to address statutory requirements. Therefore, we recommended that the Secretary of Homeland Security establish time frames and milestones for developing and implementing an evaluation framework to be used in conducting the department’s assessment of biometric exit options. DHS concurred with this recommendation and indicated that its component agencies plan to finalize the goals and objectives for biometric air exit by January 31, 2014, and that these goals and objectives will be used in the development of an evaluation framework that DHS expects to have completed by June 30, 2014. Chairman Miller, Ranking Member Jackson Lee, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement, please contact Rebecca Gambler, Director, Homeland Security and Justice, at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions include Kathryn Bernet, Assistant Director; Frances A. Cook; Alana Finley; and Ashley D. Vaughan. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses the status of the Department of Homeland Security's (DHS) efforts to implement a biometric exit system. Beginning in 1996, federal law has required the implementation of an entry and exit data system to track foreign nationals entering and leaving the United States. The Intelligence Reform and Terrorism Prevention Act of 2004 required the Secretary of Homeland Security to develop a plan to accelerate implementation of a biometric entry and exit data system that matches available information provided by foreign nationals upon their arrival in and departure from the United States. In 2003, DHS initiated the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program to develop a system to collect biographic data (such as name and date of birth) and biometric data (such as fingerprints) from foreign nationals at U.S. ports of entry. Since 2004, DHS has tracked foreign nationals' entries into the United States as part of an effort to comply with legislative requirements, and since December 2006, a biometric entry capability has been fully operational at all air, sea, and land ports of entry. However, GAO has identified a range of management challenges that DHS has faced in its effort to fully deploy a corresponding biometric exit capability to track foreign nationals when they depart the country. For example, in November 2009, GAO found that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work that needed to be accomplished to deliver a biometric exit system. 
In these reports, GAO made recommendations intended to help ensure that a biometric exit capability was planned, designed, developed, and implemented in an effective and efficient manner. DHS generally agreed with our recommendations and has taken action to implement a number of them. Most recently, in July 2013, GAO reported on DHS's progress in developing and implementing a biometric exit system, as well as DHS's efforts to identify and address potential overstays--individuals who were admitted into the country legally on a temporary basis but then overstayed their authorized period of admission. This statement is based on GAO's July 2013 report and, like that report, discusses the extent to which DHS has made progress in developing and implementing a biometric exit system at air ports of entry, which is DHS's priority for a biometric exit capability. GAO concluded in its July 2013 report that without robust planning that includes time frames and milestones to develop and implement an evaluation framework for this assessment, DHS lacks reasonable assurance that it will be able to provide this assessment to Congress for the fiscal year 2016 budget cycle as planned. Furthermore, any delays in providing this information to Congress could further affect possible implementation of a biometric exit system to address statutory requirements. Therefore, GAO recommended that the Secretary of Homeland Security establish time frames and milestones for developing and implementing an evaluation framework to be used in conducting the department's assessment of biometric exit options. DHS concurred with this recommendation and indicated that its component agencies plan to finalize the goals and objectives for biometric air exit by January 31, 2014, and that these goals and objectives will be used in the development of an evaluation framework that DHS expects to have completed by June 30, 2014. |
Established by Congress in the Federal Employees’ Group Life Insurance Act of 1954 as a benefit to federal employees and their families and administered by OPM, FEGLI offers federal employees the opportunity to choose from a range of group term life insurance coverage options. FEGLI insurance is provided through a contract OPM has established with MetLife. MetLife’s Office of Federal Employees’ Group Life Insurance (OFEGLI) adjudicates claims under the FEGLI program and makes payments to FEGLI beneficiaries. Most federal employees, including part-time employees, are eligible for insurance under FEGLI, and approximately 85 percent purchase FEGLI coverage. Upon starting their federal employment, federal employees are automatically enrolled in FEGLI’s Basic life insurance coverage unless they file appropriate paperwork with their employing agency to opt out of the program. Basic life insurance coverage equals a federal employee’s annual salary rounded up to the next even thousand plus two thousand dollars, or $10,000, whichever is higher. Basic insurance also provides an extra benefit to employees under age 45, at no additional cost. This extra benefit doubles the amount of Basic insurance payable if the employee dies at age 35 or younger. The extra benefit decreases 10 percent each year until there is no extra benefit at age 45 and above. For Basic coverage, employees pay two-thirds of the premium determined by OPM, and the employing agencies pay the remaining third. The rate all covered employees, regardless of age, pay for each $1,000 of Basic insurance is $0.150 bi-weekly or $0.325 monthly. FEGLI also provides accidental death and dismemberment (AD&D) insurance as part of its Basic insurance at no additional cost. AD&D insurance protects employees in the event of a fatal accident or an accident which results in the loss of a limb or eyesight. 
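The Basic coverage formula and the extra-benefit phase-out described above reduce to simple arithmetic. The following is a minimal illustrative sketch of those rules; the function names and the sample salary are illustrative, not drawn from the report.

```python
import math

def basic_coverage(annual_salary):
    """Basic insurance equals the salary rounded up to the next even
    thousand plus $2,000, with a $10,000 minimum (per the rules above)."""
    rounded = math.ceil(annual_salary / 1000) * 1000
    return max(rounded + 2000, 10_000)

def extra_benefit_multiplier(age):
    """The no-cost extra benefit doubles the Basic amount payable at
    age 35 or younger and declines 10 percentage points per year until
    it disappears at age 45 and above."""
    if age <= 35:
        return 2.0
    if age >= 45:
        return 1.0
    return 2.0 - 0.10 * (age - 35)

# A hypothetical employee earning $48,250 per year:
print(basic_coverage(48_250))        # 49,000 + 2,000 = 51000
print(extra_benefit_multiplier(40))  # halfway through the phase-out: 1.5
```

Note that the employee pays two-thirds of the Basic premium and the employing agency the remaining third, but the coverage amount itself depends only on salary.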
For benefits to be paid, the death or loss must occur no later than 1 year from the date of the accident and must be a result of bodily injury sustained from that accident. Federal employees may also choose to purchase three types of Optional insurance in addition to Basic coverage—Options A, B, and C—by submitting a Life Insurance Election Form (SF 2817) within 60 days of beginning their employment to their human resources office. Option A offers $10,000 of life insurance coverage. Premiums for Option A coverage vary by age groups, as determined by OPM. These groups start with employees “under age 35,” progress in 5-year increments until age 59, and finish with a “60 and over” group. Bi-weekly and monthly costs for Option A coverage range from $0.30 and $0.65, respectively, for the “under age 35” group to $6.00 and $13.00, respectively, for the “60 and over” age group. Option B offers additional Optional insurance coverage in an amount of one to five multiples of the employee’s annual salary, after rounding the salary up to the next even thousand. For Option B coverage, age group designations also apply but begin with “under age 35,” continue in 5-year age increments until age 79, and end with an “80 and over” age group. Bi-weekly and monthly costs for each $1,000 in insurance can range from a low of $0.03 and $0.065, respectively, for employees under 35 to a high of $2.40 and $5.20, respectively, for employees 80 and older. Option C covers eligible family members of an employee or retiree, including the enrollee’s spouse and eligible dependent children. The employee selects from one to five multiples, where each multiple provides $5,000 of coverage for a spouse and $2,500 for each eligible dependent child. If employees purchase optional coverage within the 60 days, no medical underwriting is necessary. 
For Option C coverage, the age group designations are the same as for Option B, and costs range from bi-weekly and monthly amounts of $0.27 and $0.59, respectively, per multiple for those under 35 to $6.00 and $13.00, respectively, per multiple for those 80 and older. When federal employees retire, FEGLI also offers Basic and Optional life insurance, and employees are able to choose among several retirement coverage levels after age 65. For Basic insurance in retirement, employees must choose whether to reduce their postretirement insurance level by 75 percent or 50 percent, or to maintain full coverage. Those choosing the 75 percent reduction pay no premiums after reaching age 65. Those choosing the 50 percent reduction or full coverage option continue to pay premiums in amounts determined by OPM; the postretirement premium rates are greater than preretirement rates. When the 75 percent reduction in coverage is selected, OPM reduces the coverage level by 2 percent per month beginning at age 65, until 25 percent of the original coverage remains. If the 50 percent reduction is selected, the coverage level reduces by 1 percent per month beginning at age 65, until 50 percent of the original coverage remains. If no reduction is selected, the coverage does not reduce. Federal employees may also choose to continue Optional coverage into retirement and FEGLI offers several choices. Option A coverage reduces 2 percent per month beginning at age 65, to 25 percent of the preretirement amount, and no premiums are charged in retirement after the retiree reaches age 65. For Options B and C, employees desiring coverage must elect to continue one to five multiples of coverage into retirement, and elect whether to have all of those multiples retain full coverage or reduce by 100 percent, at a rate of 2 percent per month for 50 months, beginning at age 65. For the 100 percent reduction option, once the reduction starts, retirees do not pay premiums after reaching age 65. 
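The postretirement Basic reduction elections described above can be expressed as a monthly schedule against the original coverage amount. The sketch below is illustrative only; the election labels are hypothetical shorthand, not OPM terminology.

```python
def basic_coverage_after_65(original, election, months_past_65):
    """Remaining Basic coverage under the postretirement elections
    described above: the 75 percent reduction trims 2 percent of the
    original amount per month down to a 25 percent floor; the 50 percent
    reduction trims 1 percent per month down to a 50 percent floor; full
    coverage never reduces. (Illustrative sketch, not OPM's code.)"""
    if election == "full":
        return original
    if election == "75":
        floor, monthly_step = 0.25, 0.02
    elif election == "50":
        floor, monthly_step = 0.50, 0.01
    else:
        raise ValueError(f"unknown election: {election}")
    return original * max(floor, 1.0 - monthly_step * months_past_65)

# $100,000 of Basic coverage under the 75 percent reduction election:
print(round(basic_coverage_after_65(100_000, "75", 12), 2))  # 76000.0 one year past 65
print(round(basic_coverage_after_65(100_000, "75", 60), 2))  # 25000.0 once the floor is reached
```

Under this schedule the 75 percent reduction reaches its floor after 37.5 months and the 50 percent reduction after 50 months, which matches the 2 percent and 1 percent monthly rates stated above.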
For the full coverage option, retirees continue to pay the full premium, as determined by OPM for the retiree’s specific age group. The following provides an example of FEGLI premiums for a 48-year old federal employee, married with three children, and earning $88,300 per year. According to OPM, Basic insurance would cost $13.65 bi-weekly and $354.90 annually. If the employee seeks to maximize Option B coverage by purchasing five times annual pay, Option B coverage would cost $40.05 bi-weekly and $1,041.30 annually. In this example, the employee purchases Basic and Optional life insurance coverage totaling $536,000, at an annual cost of $1,396.20. If this employee continues full Basic and Option B coverage after retirement, by choosing the No Reduction option for both, and retires at the age of 65 (assuming the same $88,300 salary), Basic insurance would cost $1,998.36 annually and Option B coverage would cost $8,330.40 annually. The total amount of Basic and Optional insurance for the employee at the time they retire would be $536,000 at an annual cost to the employee of approximately $10,300. Federal employees may add or adjust FEGLI coverage when life events such as marriage, divorce, death of a spouse, or the acquisition of an eligible child occurs. Federal employees may also add or adjust coverage when OPM offers open seasons, although OPM officials noted that these periods are rare. FEGLI most recently offered open seasons in 1999 and 2004. Employees who opted out of FEGLI coverage upon starting federal employment may also add coverage during these times. Additionally, if at least a year has passed since an employee opted out of FEGLI, an employee may request FEGLI coverage by providing medical information via a form partially completed by the employee’s physician. Employees are responsible for any associated expenses such as a physician’s fee. 
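The premium arithmetic in the 48-year-old employee example above can be checked against the stated rates. In the sketch below, the $0.150 bi-weekly Basic rate per $1,000 comes from the report; the $0.09 bi-weekly Option B rate for the 45-49 age group is inferred from the $40.05 figure rather than taken from an OPM rate table, so treat it as an assumption.

```python
import math

PAY_PERIODS = 26  # bi-weekly pay periods per year

salary = 88_300
basic = math.ceil(salary / 1000) * 1000 + 2000   # $91,000 of Basic coverage
basic_biweekly = (basic / 1000) * 0.150          # $0.150 per $1,000 (from the report)

option_b = 5 * math.ceil(salary / 1000) * 1000   # five multiples: $445,000
option_b_biweekly = (option_b / 1000) * 0.09     # assumed 45-49 group rate, implied by $40.05

print(basic + option_b)                          # total coverage: 536000
print(f"{basic_biweekly:.2f}")                   # 13.65
print(f"{basic_biweekly * PAY_PERIODS:.2f}")     # 354.90
print(f"{option_b_biweekly * PAY_PERIODS:.2f}")  # 1041.30
```

These values reproduce the report's figures of $536,000 in total coverage at $1,396.20 per year ($354.90 plus $1,041.30).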
In addition, certain employees of the Department of Defense are eligible to elect FEGLI coverage without experiencing a qualifying life event or providing medical information. When a federal enrollee with FEGLI coverage dies, MetLife’s OFEGLI pays claims to the federal enrollee’s designated beneficiary. If no beneficiary has been designated, payments will be made roughly in the following order pursuant to statute: to the enrollee’s surviving spouse; if none, to the child or children in equal shares; if none, to surviving parents in equal shares; if none, to the executor or administrator of the employee’s estate; or, if none, to the enrollee’s next of kin as determined by applicable state laws. The enrollee’s beneficiary or other survivor must follow a prescribed process for filing a claim and receiving payment that begins with contacting the human resources office at the insured’s agency to report the death, submitting a certified death certificate, and submitting a Claim for Death Benefits form. According to FEGLI materials, beneficiaries may choose a payout by receiving a lump-sum check or an RAA. According to the American Council of Life Insurers (ACLI), RAAs have existed since 1982, and many insurers provide them for both group and individual life insurance policies. When an insured person dies, the life insurance company that issued the policy may place the death benefit proceeds into an RAA, which accrues interest for the beneficiaries from the day the account is established for as long as the funds remain in the account. Beneficiaries have full and immediate access to their funds and can withdraw some or all of the funds at any time without penalty. In addition, MetLife pays RAA accountholders a minimum guaranteed interest rate that typically is calculated using one of several market rate indexes. MetLife compounds interest on RAAs daily and credits that interest monthly. 
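As an illustration of how interest accrues on an RAA, the sketch below assumes daily compounding at a flat annual rate divided over 365 days; the 1.5 percent rate is hypothetical, and MetLife's actual crediting formula, tied to market rate indexes, may differ.

```python
def raa_balance(principal, annual_rate, days):
    """Balance of a retained asset account that compounds daily, as
    described above. (Sketch: assumes a simple rate/365 daily factor
    and a constant rate over the period.)"""
    daily = annual_rate / 365
    return principal * (1 + daily) ** days

# A $100,000 death benefit left in the RAA for one year at a 1.5% rate:
balance = raa_balance(100_000, 0.015, 365)
print(round(balance, 2))  # about 101,511 after one year
```

Because there are no maintenance fees and no withdrawal penalties, the beneficiary's balance only grows until funds are drawn out.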
MetLife issues a book of drafts to the beneficiary, allowing immediate access to the funds without penalty. Beneficiaries may then use them to meet various financial needs, for example to pay bills, make retail purchases (fig. 1), or transfer funds from the RAA to another account, such as a savings or checking account. FEGLI beneficiaries, like other life insurance beneficiaries, may leave funds in their RAA for as long as they wish or withdraw the entire amount at any time, and there are no maintenance fees associated with these accounts. By investing the assets backing the liabilities of RAAs funded with FEGLI claims payments, MetLife may earn a profit in the form of a spread, or the difference between the interest it pays beneficiaries and what it earns on invested assets backing RAA liabilities less expenses. MetLife assumes the investment risk associated with investing these assets. FEGLI’s Basic life insurance coverage shares several similarities with the coverage offered by private sector group plans. First, both FEGLI and most private sector plans automatically enroll employees in basic coverage, often including AD&D coverage, unless they opt out of the program, and both provide options for employees who opted out of the program to join later. Second, neither FEGLI nor private sector basic insurance initially requires employees to provide information on their medical condition or history. That is, any employee can enroll in the program regardless of age or state of health at the time that the employee is first eligible to join. Third, while some private sector plans offer a flat amount of basic insurance ranging from $5,000 to as much as $50,000, many offer coverage in an amount equal to the employee’s salary or a multiple of it, as FEGLI does. 
Finally, FEGLI and private sector programs both typically use a composite rate structure to price their basic group life benefits; that is, a rate structure where all employees pay the same average rate regardless of age or health status. The effect of a composite rate is that all employees pay the same rate per $1,000 of insurance coverage regardless of characteristics such as age and health that impact the cost of life insurance. In addition to similarities with respect to basic coverage, FEGLI and private sector group plans generally offer some form of optional coverage that shares some similarities as well. First, employees in both FEGLI and private group plans typically must fund any optional coverage without employer contributions. In addition, both FEGLI and private sector employers generally offer optional coverage in increments of one to five times the employee’s annual salary. Finally, both FEGLI and private sector plans generally offer life insurance coverage on the employee’s dependents. Unlike most private sector group life insurance plans, FEGLI, according to OPM officials, assumes most of the risk of loss associated with the program. In the private sector, according to industry experts, employers generally purchase group life insurance policies from insurers that then bear the risk of loss. That is, the insurer bears the risk that the claims associated with the policy may exceed the premiums collected from the policyholder. In contrast, according to OPM officials, the FEGLI program effectively bears all such risk based on the expectation that the FEGLI Fund is sufficient to cover claims made by FEGLI beneficiaries. Legislation leading to FEGLI’s creation contemplated the federal government purchasing group life insurance from a private sector group life insurer or insurers and mitigating the risk of loss by purchasing reinsurance for those insurers.
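The composite rate structure described above can be illustrated with a short sketch. The $0.15-per-$1,000 rate and the coverage amounts below are hypothetical figures chosen for illustration, not actual FEGLI rates.

```python
def composite_premium(coverage, rate_per_1000):
    """Composite rating: every enrollee pays the same rate per $1,000
    of coverage, regardless of age or health status."""
    return round(coverage / 1_000 * rate_per_1000, 2)

# Hypothetical rate of $0.15 per $1,000 per pay period: a 25-year-old
# and a 60-year-old with $60,000 of coverage pay the identical premium.
premium = composite_premium(60_000, 0.15)
```

Under composite rating, premiums vary only with the amount of coverage, so doubling the coverage doubles the premium for everyone alike.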
However, as the FEGLI Fund balance has grown over time, OPM officials noted, the need for an insurer and reinsurers to assume the program’s risk of loss has diminished. For example, according to OPM and MetLife officials, even though OPM has a policy with MetLife to provide FEGLI life insurance and makes funds available to MetLife for this policy, when FEGLI beneficiaries submit claims, MetLife draws upon OPM’s FEGLI Fund to make claims payments. In addition, according to the same officials, MetLife’s exposure to loss is currently limited to its role as a reinsurer for the FEGLI program, as it covers approximately 85 percent of the FEGLI program’s reinsurance. However, this exposure would only result in payment after the depletion of the entire FEGLI Fund, which has a balance as of September 30, 2010, of $37.6 billion, or approximately 14 times the amount of FEGLI’s annual claims payments. OPM and MetLife both consider the possibility of exhausting the FEGLI Fund to be so remote that the cost of the reinsurance is negligible. While the program initially had about 160 reinsurers, only 10 were participating in 2011, with MetLife providing about 85 percent of the program’s reinsurance. OPM pays each of the 10 reinsurers approximately $500 annually for their participation in the program, and FEGLI has never had to use this reinsurance coverage. Compared with private sector group term life plans, FEGLI has certain features and benefits that can make premiums for all coverage higher for federal employees. First, FEGLI’s statute requires enrolled federal employees to pay two-thirds of the premium rate for their Basic life insurance coverage, while employers in the private sector generally cover the full cost of their employees’ basic coverage. According to insurance industry officials, the amount of basic coverage that private group plans generally provide can be a flat amount or equal to an employee’s annual salary or more. 
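The relationship cited above between the FEGLI Fund balance and annual claims implies annual claims payments of roughly $2.7 billion. The arithmetic, using only the figures stated in the text:

```python
fund_balance = 37.6e9  # FEGLI Fund balance as of September 30, 2010
multiple = 14          # balance is roughly 14 times annual claims payments

implied_annual_claims = fund_balance / multiple  # roughly $2.7 billion
```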
Whether an employee receives more employer-paid coverage through a private plan that pays the entire premium for some amount of coverage than through FEGLI would depend on the amount of no-cost coverage the private sector employer provides. Second, according to OPM officials, FEGLI offers federal employees a retirement life insurance benefit that is financed, in part, by a portion of the premiums charged while employees are working. FEGLI’s retirement benefit raises FEGLI premiums above those of most private sector group plans, which generally do not offer such a benefit. As we have seen, FEGLI offers a postretirement benefit for both Basic and Optional coverage. According to OPM officials, federal employees who participate in FEGLI begin prefunding, or paying in advance for, Basic retirement coverage as soon as they begin their FEGLI coverage. Prefunding for Basic coverage is necessary because newly retired employees over age 65 who choose a 75 percent reduction in this coverage are no longer required to pay premiums for the coverage they are receiving. With Optional coverage, except for Option A, employees begin prefunding the cost of their retirement benefits when they reach age 55 and continue to do so until they retire. Newly retired employees who choose a 75 percent reduction in their Option A coverage, and a 100 percent reduction in Options B and C coverage, no longer pay premiums for the Optional coverage they are receiving. In addition, life insurance coverage for people of retirement age or older can be expensive. According to private sector insurance industry participants we spoke with, the cost of postretirement benefits is quite high because as employees age, the likelihood of the insurer being required to pay a claim also increases. As a result, few private sector plans offer such benefits.
While OPM has stated that having flexible benefits, including life insurance coverage in retirement, contributes to employee retention, insurance industry participants with whom we spoke said that they have not seen any evidence that postretirement coverage attracted or retained employees. In addition, for certain individuals, FEGLI Basic coverage may appear more costly than private sector basic life insurance. First, FEGLI features level premiums, which some private individual policies do not offer. With such a feature, monthly premiums remain the same over time instead of increasing with age. Compared to a policy without such a feature, level premiums are higher earlier in life and then become lower at a certain point. If relatively younger federal employees compare FEGLI to private individual coverage without level premiums, FEGLI coverage may appear to be more costly, depending on their age. Second, because FEGLI is a group life program, all individuals pay the same premiums regardless of their health status, unlike individual coverage where premiums generally depend on the health of the person being insured. As a result, if relatively healthier federal employees compare FEGLI to private individual coverage, FEGLI coverage could also appear more costly. Finally, FEGLI’s postretirement coverage, which increases FEGLI premiums but is not generally part of private plans, also contributes to FEGLI’s cost relative to private sector alternatives that do not feature this coverage. The possibility that FEGLI coverage may appear more costly than private sector alternatives to relatively younger or healthier federal employees is mitigated to some extent by the extra amount of coverage FEGLI provides federal employees under age 45. However, in cases where FEGLI’s premiums exceed those for similar coverage in the private sector, federal employees may conclude that FEGLI is more expensive and choose to opt out of the program.
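The level-premium effect described above can be illustrated with hypothetical age-rated costs. Every number below is invented for illustration and does not reflect actual FEGLI or private sector rates; the point is only that a flat average premium overcharges relative to age-rated cost early in a policy and undercharges later.

```python
# Hypothetical age-rated monthly costs that rise with age from 30 to 59.
age_rated = [10 + 1.5 * (age - 30) for age in range(30, 60)]

# A level premium charges the flat average of those costs instead.
level = sum(age_rated) / len(age_rated)

# Early in the policy the level premium exceeds the age-rated cost;
# later in the policy the relationship reverses.
overpay_early = level - age_rated[0]    # positive: paying more when young
underpay_late = age_rated[-1] - level   # positive: paying less when older
```

A younger employee comparing only current premiums would therefore see the level-premium product as more expensive, even though lifetime costs may be comparable.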
While FEGLI disclosures cover many key aspects of the program, they do not cover certain program features that could affect an employee’s decision to purchase FEGLI coverage. Consistent with OPM’s strategic goal of helping ensure that federal employees fully understand their benefits, and with the National Association of Insurance Commissioners’ (NAIC) guidance on informative marketing materials, OPM provides a significant amount of information on FEGLI through a handbook, program booklet (a condensed version of the handbook for employees), and website. These disclosures provide information on a range of topics, including enrollment, coverage options and costs, designation of beneficiaries, claims and claims payments, and resources for employees if they have questions or issues. OPM provides this information in hard copy and through the FEGLI website, which also includes a calculator that allows users to determine premiums for various combinations of life insurance coverage. Providing timely and informative FEGLI guidance materials to federal agency human resources staff is another means through which OPM seeks to ensure that federal employees understand their benefits. While these disclosures are useful, they do not make employees aware of some FEGLI benefits and features that could affect their decision to participate in the program. The disclosures do not inform employees that premiums for Basic coverage include a postretirement benefit and that employees prefund this benefit. Employees who are unaware of this prefunding element could decide that FEGLI coverage is too expensive, decline participation in the program, and not receive FEGLI’s potentially valuable insurance benefits. Conversely, employees that plan to work in the federal government for only a short period, or at least not through retirement, could decide to participate in the program, not knowing that they would be paying for a benefit they would never receive. 
FEGLI disclosures, while showing a constant premium rate, do not make employees aware of the level-premium feature of the program’s Basic coverage that spreads premiums equally over the duration of the policy rather than charging less during early policy years and more in later policy years. Employees unaware of this feature could conclude that FEGLI coverage is more expensive than alternative private sector coverage, particularly in the earlier years of the policy, and decide to opt out, foregoing potentially valuable life insurance coverage. The disclosures also do not convey to federal employees that, for Basic coverage, FEGLI charges a composite premium that averages the cost of insurance for all participants regardless of age or health. That is, participants pay the same regardless of whether they pose a lesser or greater risk of loss. This averaging can be of great benefit to some, especially those who may not be able to obtain coverage elsewhere. However, as with the level-premium feature, those not aware of this feature could conclude that FEGLI coverage is simply more expensive than alternative private sector coverage and forego coverage they might not be able to obtain elsewhere. According to OPM officials, OPM performs many FEGLI administrative and operational functions, including collecting premiums, overseeing FEGLI’s claims settlement process (which MetLife administers), and publishing FEGLI’s regulations and disclosures. The same officials said that FEGLI premiums are collected by withholding premiums from enrollees’ paychecks, annuities, or compensation and collecting agency contributions from employing agencies or retirement systems, as applicable, for deposit by OPM into the FEGLI Fund. On a monthly basis, premiums are moved from the FEGLI Fund, which is held by the Treasury Department, into a letter of credit account, which is administered by a Federal Reserve Bank and from which MetLife can draw down funds to pay claims.
MetLife’s OFEGLI, which is responsible for paying claims to beneficiaries, draws money from the FEGLI Fund on a monthly basis using the letter of credit and transfers claims payments to beneficiaries. In addition to its premium collection function, OPM officials said OPM is also responsible for investing FEGLI Fund assets in government securities and ensuring that investment income on program assets is taken into account when determining program costs. Funds that flow through FEGLI, according to these officials, ultimately begin with employee and agency premiums and end with a payout to beneficiaries in the form of a check or an RAA. Figure 2 illustrates the flow of FEGLI funds between those endpoints, including being held in the FEGLI Fund. In addition to managing FEGLI resources, OPM officials said they monitor and oversee MetLife’s claims settlement processes by receiving and reviewing weekly reports on claims activity. In addition to managing processes for disbursing FEGLI funds, OPM officials said they receive annual financial reports on claims and administrative costs that are used to determine the timeliness of payments and, as noted earlier, help predict future claims and other expenses. In addition to producing and updating FEGLI’s Handbook, Program Booklet, website, and forms, OPM officials said that OPM also issues FEGLI regulations, including the Life Insurance Federal Acquisition Regulation (LIFAR), that guide the program’s operations. The regulations, for example, outline the types of Basic and Optional insurance available through FEGLI, the amounts of FEGLI coverage that the program offers, eligibility requirements, program costs, and beneficiary designation. Additionally, the LIFAR describes the terms of the contractual arrangement between OPM and MetLife under the FEGLI program, including MetLife’s receipt and administration of claims and the calculation of administrative costs and profit levels.
The LIFAR also provides guidance on contract oversight, including requiring policies and procedures to help ensure that FEGLI services conform to the contract’s quality requirements, and an OPM evaluation of MetLife’s system of internal controls. Additionally, the LIFAR requires that MetLife develop a quality assurance program that includes procedures to address (1) timeliness of claims payments to beneficiaries, (2) quality of services and responsiveness to beneficiaries and OPM, and (3) detection and recovery of fraudulent claims, among other things. Although FEGLI’s statute exempts the program from contractual competitive bidding, the LIFAR also provides direction on contract modifications and circumstances that would allow for contract termination. According to OPM officials, they fulfill these requirements by monitoring consumer feedback, tracking the timeliness of claims payments, and reviewing external audits of MetLife, which include OFEGLI. These officials said that they have not received any indication of problems with timeliness or responsiveness, or indications of any other deficiencies. Although OPM has numerous administrative and oversight responsibilities for FEGLI, MetLife, according to its officials, has a central role in several key FEGLI financial and claims administration functions. First, officials said that MetLife works with OPM on an annual basis to develop a monthly premium amount. This premium is the amount made available to MetLife to pay claims, MetLife’s administrative expenses, and MetLife’s service charge. MetLife annually conducts a review of claims paid and recommends a premium amount to OPM based on the projected level of claims and expenses for the upcoming fiscal year. Officials noted that OPM and MetLife then agree on a total annual premium level for FEGLI, which OPM then uses to determine rates for employees and federal agencies. 
Second, OPM officials said that MetLife plays a key role in receiving life insurance claims from FEGLI beneficiaries, processing these claims, and ensuring that beneficiaries receive their life insurance settlements. On a daily basis, MetLife officials said that they determine how much they need to withdraw from the letter of credit account to meet expenses associated with beneficiaries’ use of their RAAs. In addition, OPM officials said MetLife prepares weekly and annual financial reports on its FEGLI claims that provide important information on the flow of funds from the FEGLI Fund to MetLife and from MetLife to beneficiaries. OPM reimburses MetLife for its administrative expenses for FEGLI, including its claims and financial functions. OPM officials said that most of these expenses are the result of MetLife’s OFEGLI, through which MetLife processes and pays claims. In 1997, according to MetLife officials, OPM and MetLife entered into an agreement that capped MetLife’s direct administrative expenses for FEGLI at $6.1 million and indirect expenses at 20 percent of that ceiling. This ceiling is adjusted annually by the Urban Consumer Price Index. In addition to administrative expenses, officials said that MetLife receives a service charge for adjudicating and administering FEGLI claims. This service charge is calculated using the profit analysis factors found in the LIFAR. For fiscal year 2011, according to OPM officials, MetLife’s service charge was $965,000. Under OPM’s administration of the FEGLI program, according to OPM officials, program funds have been sufficient to pay life insurance claims and meet program liabilities. According to OPM officials, one of their key responsibilities is to determine FEGLI’s liability for current and future life insurance coverage and to take steps to ensure that sufficient assets are available to meet these potential liabilities. 
Various factors affect how these liabilities are calculated, including changes in the mortality of federal employees, federal salaries, and interest rates. OPM actuaries said that they use these factors as part of an actuarial valuation model to make annual estimates of FEGLI’s current and future liabilities. The actuaries then estimate the funds needed from premiums to cover these liabilities and program expenses, taking into account interest on retained funds and the FEGLI Fund balance. In addition, according to OPM officials, OPM actuaries monitor and annually review the claims experience for each FEGLI insurance coverage option, by age group and gender, and make recommendations to OPM senior management on the premium rates employees and their agencies should pay. According to OPM officials, the FEGLI program is adequately funded if FEGLI revenues meet or slightly exceed program costs and the program’s assets meet or exceed its liabilities. Figure 3 shows OPM data on FEGLI’s assets and liabilities from 2000 to 2010, and appendix II provides additional information on FEGLI’s annual premiums, claims, and investment income. In particular, OPM reported in its 2010 annual report that the program’s liabilities as of September 30, 2010, were approximately $43.9 billion and that its assets totaled $39.2 billion. According to OPM officials, while the reported data would appear to indicate that the program was underfunded, they believe FEGLI’s financing is adequate because the overall liability amount reported above does not take into account employee contributions for optional insurance coverage, which has the effect of making the liability appear to be larger than it actually is. OPM officials said that they take these funds into account in other internal analyses, and these analyses show that the program’s assets sufficiently meet the program’s liability when employee contributions are considered.
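Using only the reported figures above, a simple funded-ratio calculation shows why the program can appear underfunded before OPM's internal adjustment for employee optional-coverage contributions. The calculation below is an illustration of that arithmetic, not an OPM methodology.

```python
liabilities = 43.9e9  # reported FEGLI liabilities as of September 30, 2010
assets = 39.2e9       # reported FEGLI assets as of the same date

funded_ratio = assets / liabilities  # about 0.89 on the reported figures
reported_gap = liabilities - assets  # about $4.7 billion before adjustment
```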
The legislation that created FEGLI intended the program to offer a low-cost insurance benefit to federal employees and their families. Specifically, the statute that created FEGLI described the program as an insurance benefit for federal employees that provides insurance at rates OPM determines are generally consistent with the lowest basic premium rates for new policies issued to large employers. Further, FEGLI’s legislative history suggests that the program’s purpose is to provide low-cost group life insurance to federal employees. In addition, OPM’s most recent strategic plan calls for ensuring that available benefits, including life insurance benefits, align with employees’ needs. As we have seen, however, FEGLI has features—some required by statute—that can make its coverage more expensive for federal employees compared with the type of coverage generally offered by private group life insurance programs. For example, as noted earlier, FEGLI requires an employee contribution for Basic insurance, something generally not required in private sector plans. In addition, the program features a postretirement benefit that, although not generally found in private sector plans, does increase the premiums that FEGLI participants must pay. OPM officials told us that they periodically compare FEGLI to other large group life insurance plans, primarily in terms of coverage levels, and have concluded that the features and benefits FEGLI offered are on par with those offered by private sector plans. In addition, OPM officials noted that key FEGLI characteristics such as coverage levels, the portion of the cost paid by federal employees, and the structure of Basic premiums are determined by FEGLI’s statute, and as a result, changing the program can involve statutory changes that require congressional action.
They further noted that because of the program’s size, the limited number of OPM staff available to administer the program, the amount of administrative work involved in making a change to the program, and the potential need for the FEGLI statute to be changed, altering program processes is not a simple task. OPM officials said that because of various concerns, such as the length of time required for legislative changes, inherent costs incurred with structural program modifications, and their interest in preserving program continuity, requests for significant changes are minimal and made only after careful consideration. However, OPM is able to make changes to FEGLI premium rates paid by federal employees and agencies, as well as other changes including options available to beneficiaries for receiving claims payments. Nevertheless, OPM did not appear to have a systematic or documented process, or requirements, for comparing FEGLI with private sector plans. In addition, OPM did not have a methodology or criteria with appropriate benchmarks or measures for consistently comparing FEGLI benefits with those provided by the private sector. The results of such analyses could be used, for example, to make changes to the program within OPM’s authority or, potentially, suggest legislative changes to Congress. Since the last premium adjustment, OPM actuaries have recommended changes—both increases and decreases—to FEGLI premium rates. As we have seen, each year OPM actuaries review and analyze FEGLI’s assets and liabilities to determine the sufficiency of program assets to cover life insurance benefit costs for all FEGLI enrollees. In addition, the actuaries analyze the claims experience associated with each type of coverage and age band and determine appropriate premium rates, which may be higher or lower than the existing rates. 
OPM actuarial and financial officials present the results of these analyses and any rate change recommendations in an annual meeting with OPM management that includes the FEGLI contracting officer, actuaries, financial staff, and other OPM senior management. According to OPM officials, OPM senior management then has the authority to decide whether to raise, lower, or hold constant the rates that employees and agencies pay for FEGLI insurance. However, according to OPM officials, OPM management decided not to make these rate changes because they believed the changes would introduce more complexity for FEGLI participants and entail administrative changes that, at the time, were not practical given the significant resources required. Standards for internal control in the federal government state that policies and procedures should exist for ensuring that findings from any audits or reviews are promptly resolved and that all transactions and other significant events are clearly documented. OPM’s annual actuarial reviews are effectively an internal control designed to help ensure the accuracy and adequacy of premium rates. However, OPM does not appear to have a documented process providing guidance on what to include in the annual actuarial reviews and recommendations to management. In addition, it does not have a process for documenting management’s decisions with respect to those recommendations, including any accompanying rationale. Management’s decisions on the actuarial findings are significant events because of their potential effect on the financial condition of the program and its ability to pay claims to beneficiaries. Without documented processes for actuarial and financial reviews and their disposition, OPM risks compromising the efficiency and the effectiveness of these reviews and being unable to help ensure premiums are consistent with program experience.
RAAs had been the default method used from 1994 until February 2011 for many FEGLI beneficiaries to receive their life insurance settlements. RAAs became the default option in 1994 for payments over $7,500 after MetLife requested that OPM allow RAAs to be used in addition to lump-sum check payments. OPM granted the request under certain conditions, including that RAAs be provided as additional benefits to FEGLI beneficiaries at no additional cost. OPM officials noted that the change to RAAs reduced administrative costs, including for staff time and materials that were associated with issuing lump-sum checks. In February 2011, OPM changed the FEGLI life insurance settlement process, requiring beneficiaries to choose between receiving a lump-sum check or an RAA when receiving a settlement. Specifically, OPM revised the form that FEGLI beneficiaries submit for a life insurance claim, removing the default option and requiring beneficiaries to affirmatively choose a lump-sum payment or an RAA for settlement amounts over $5,000. OPM officials said that they made this change after reviewing RAA practices and procedures and published concerns about RAA practices. Two major life insurers with whom we spoke said that making the RAA payment method optional can have a considerable effect on consumers. When consumers have the option to choose between RAAs and lump-sum check payments, the overwhelming majority choose lump-sum check payments. According to several insurance companies and OPM, RAAs can benefit beneficiaries, but others expressed concerns about the extent of RAA disclosures and consumer protections. Industry participants cited flexibility and a guaranteed interest rate as the primary benefits of RAAs. For example, some said RAAs offer beneficiaries flexibility during a difficult time of loss to determine how best to use or invest the life insurance proceeds, which are often sizeable sums.
While deciding how to use the funds, beneficiaries with RAAs receive a guaranteed minimum interest rate on their RAA account. According to MetLife officials, for FEGLI, each beneficiary’s minimum interest rate is based on when the RAA was opened and is guaranteed for as long as the beneficiary maintains the RAA. According to the same officials, the guaranteed interest rates are 3.0 percent for RAAs opened before April 2003, 1.5 percent for RAAs opened April 2003 to April 2009, and 0.5 percent for RAAs opened after April 2009. The officials noted that even the most recent interest rate paid on RAAs is competitive compared to what beneficiaries could currently earn on similar alternative investments. For example, as of September 26, 2011, the best available rates for a money market account ranged from 0.10 percent to 1.10 percent. In addition, they said that RAAs provide beneficiaries the ability to access their funds at any time, including the opportunity to withdraw either partial amounts or the entire amount. Despite these benefits, RAA disclosures in general do not convey some important information to consumers, including information on options beneficiaries have for receiving their life insurance settlement funds. For example, the disclosures do not clearly indicate that OPM considers life insurance claims to be closed, and its relationship with beneficiaries ended—as it is with a lump-sum payment—once a beneficiary chooses an RAA as a settlement option. In addition, beneficiaries may not understand that RAAs involve a separate contract with MetLife that is not part of the FEGLI program and is regulated by states rather than the federal government.
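The rate comparison above can be made concrete with a hypothetical balance. The 0.5 percent guarantee and the 0.10 to 1.10 percent money market range come from the text; the $100,000 balance is an assumption for illustration.

```python
def simple_annual_interest(balance, annual_rate):
    """Approximate one year of interest at a flat annual rate
    (ignoring compounding, which is small at these rates)."""
    return balance * annual_rate

balance = 100_000  # hypothetical RAA balance
raa_interest = simple_annual_interest(balance, 0.005)  # 0.5% guarantee
mm_low = simple_annual_interest(balance, 0.0010)       # 0.10% low end
mm_high = simple_annual_interest(balance, 0.0110)      # 1.10% high end
```

On these figures the guaranteed RAA rate falls inside the quoted money market range: about $500 of annual interest versus $100 to $1,100.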
Regulatory officials we interviewed from one state said that consumer choice and product understanding are critically important to consumers and that that state’s law, in force since the mid-1990s, requires companies to offer beneficiaries a choice of life insurance settlement options at the time life insurance claims are submitted. The same officials noted that RAAs cannot be the default life insurance settlement vehicle in their state. Three other states’ regulators were concerned about how well beneficiaries understood RAAs and one of these states had recently passed a bill that required RAA disclosures to include information on settlement options available to beneficiaries. Another part of the bill prevents insurance companies from offering RAAs as their default settlement option. Regulatory officials from another of these states said that their office had undertaken a regulatory review and was developing guidance for insurance companies on using RAAs. In addition to concerns about RAA disclosures, some industry participants and a federal regulator expressed concern about the kinds of protections that apply to RAAs and how well beneficiaries understand them. For example, they indicated that while RAAs are not insured by the Federal Deposit Insurance Corporation (FDIC), the use of drafts that closely resemble checkbooks offered by banks could give the appearance that FDIC insurance protects RAAs. Others noted that whether state guaranty funds were adequate to fully protect those with RAAs is unclear. Industry officials we spoke with noted that state guaranty funds typically protect RAAs up to a limit of $300,000, although in some states that limit may be as high as $500,000. An insurance industry expert explained that beneficiaries who have RAA assets that exceed state guaranty fund limits may not be fully protected. According to OPM, approximately 25 percent of federal employees covered by FEGLI have insurance in force of $300,000 or more.
Other officials noted that state guaranty fund protections are not the same as FDIC insurance. Each FDIC-insured account is protected; therefore, consumers with multiple accounts can be protected above the $250,000 FDIC limit in the aggregate. Conversely, state guaranty funds limit an individual’s payout protection to the statutory ceiling so consumers with multiple retained asset accounts are not protected beyond it. In late 2010, NAIC and the National Conference of Insurance Legislators (NCOIL) addressed concerns about RAAs by issuing guidance intended to improve disclosures to consumers. In December 2010, NAIC issued a model bulletin for use by state insurance regulators to establish standards for disclosing information about the payment of life insurance benefits with RAAs. For example, under the bulletin, disclosures should clearly state that choosing an RAA involves establishing a supplemental contract with an insurance company that is distinct from the life insurance policy. The bulletin also notes that the supplemental policy should also provide clear disclosures of the rights of the beneficiaries and the obligations of insurers. Other key provisions in the bulletin included making sure disclosures explain available settlement options for beneficiaries, applicability of FDIC protections, applicable RAA fees charged by insurers, guaranteed interest rates associated with RAAs, provision and use of draft books, frequency of financial statements to beneficiaries, and policies for inactive RAA accounts. Around the same time, NCOIL released its Beneficiaries’ Bill of Rights, a document which was intended to improve not only disclosures associated with RAAs but also transparency and accountability. 
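The aggregation difference described above, per-account FDIC protection versus a single per-person guaranty fund ceiling, can be sketched as follows. The sketch follows the text's simplified description of FDIC coverage, and the account balances are hypothetical; the $250,000 and $300,000 limits are the figures cited in the text.

```python
def fdic_protected(accounts, per_account_limit=250_000):
    """Per the text's description: each insured account is protected
    up to the FDIC limit, so multiple accounts stack."""
    return sum(min(balance, per_account_limit) for balance in accounts)

def guaranty_protected(accounts, per_person_limit=300_000):
    """Guaranty funds cap one individual's payout at the statutory
    ceiling, applied to all accounts in the aggregate."""
    return min(sum(accounts), per_person_limit)

# Hypothetical: two $250,000 accounts held by one person.
accounts = [250_000, 250_000]
```

On these assumptions, FDIC-style protection would cover the full $500,000 while guaranty-style protection stops at the $300,000 ceiling.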
NCOIL’s new standards echoed many of NAIC’s proposed improvements and also included provisions that, if adopted, would require insurers to refer beneficiaries to their state insurance departments if they had further questions about RAAs, immediately return to beneficiaries any remaining RAA balances if accounts became inactive during a 4-year period, make clear that any violation of NCOIL’s Bill of Rights would constitute a violation of states’ unfair trade practices laws, identify any financial institution or entity that administers RAAs on the insurer’s behalf, and report annually to state regulators on the number and dollar amount of RAAs held, RAA structure and investment earnings, interest rates paid to beneficiaries, and the numbers and dollar amounts of RAAs that go through state unclaimed property processes. Some states already have regulations in place that specifically address RAAs, and others have recently taken action to address concerns about the accounts. For example, according to NAIC, as of August 2011, 26 states had RAA-related statutes that allowed insurance companies to establish RAAs for beneficiaries and hold life insurance assets in these accounts. In addition, according to NAIC, 22 states had RAA-specific regulatory protections and disclosures, including many of the provisions found in NAIC’s model bulletin. According to NAIC, many states have either enacted or updated RAA regulations since the beginning of 2010. Figure 4 provides information on how states have approached regulating RAAs. While OPM recently revised and improved the FEGLI RAA disclosures beneficiaries receive, the disclosures still lack some important information. In February 2011, OPM improved FEGLI disclosures, particularly the form that beneficiaries must use to file a claim.
FEGLI disclosures now inform beneficiaries that they have settlement options and include language stating that beneficiaries have an important choice to make between a lump-sum check and an RAA and must indicate their choice on the claims form. In particular, the new form explicitly states that MetLife offers a guaranteed minimum interest rate that may be better or worse than the market’s prevailing interest rate and, unlike the previous version, clearly informs beneficiaries that MetLife may profit from RAAs. OPM further improved the disclosures by more clearly explaining that beneficiaries can access the total amount of their funds at any time at no cost and by improving the information on applicable protections. For instance, the disclosures now explicitly state that RAAs are not bank accounts and are not insured by FDIC or any other federal agency. They also explain that MetLife guarantees all RAA accounts, including interest earned, and that this guarantee is backed by state insurance guaranty associations. Despite these improvements, OPM’s disclosures continue to lack some important information. In addition to failing to mention the aforementioned separate RAA contract between FEGLI beneficiaries and MetLife, OPM’s revised disclosures do not tell beneficiaries how to identify and contact the proper state insurance regulator should they have any questions or concerns about their RAAs. FEGLI beneficiaries may not clearly understand that OPM oversees all aspects of FEGLI prior to settlement but that state regulators become responsible thereafter. In the event that beneficiaries have questions or face issues with an RAA, they may not know where to turn for regulatory assistance.
Further, there may be differences of opinion among regulators about who the responsible regulator is, making such guidance even more important to beneficiaries. In addition, the disclosures do not provide information on how to identify the relevant state guaranty fund and its applicable limits, or where to find additional information on a particular state’s fund. According to the National Organization of Life and Health Insurance Guaranty Associations, beneficiaries whose RAA accounts contain more than their state guarantees may be at risk of leaving some funds unprotected. It is important for beneficiaries to be able to identify the relevant state insurance regulator and guaranty fund in case they have questions or issues regarding their RAAs and the associated guaranty fund protection. However, identifying the appropriate regulator is challenging because some regulators differed on what type of instruments RAAs are, as well as on who regulates them. For example, according to two state regulators and NAIC officials, RAAs are supplemental insurance contracts between beneficiaries and insurance companies. However, two other states’ regulators classified them as settlement options. Yet another state’s regulator said that RAAs were both supplemental contracts and settlement payouts of existing life insurance policies. States also differed on when insurance contracts and settlements are considered settled. OPM officials said that FEGLI life insurance claims were satisfied as soon as beneficiaries established RAAs, and two of the five state regulators with whom we spoke shared that view. However, regulatory officials we interviewed from one state said that the original insurance contract was not satisfied until all funds were withdrawn from the RAA. The state insurance regulators and some industry officials with whom we spoke also differed on which state’s regulator oversees a particular RAA account and, as a result, which state’s guaranty fund would apply.
For example, two states’ regulatory officials and NAIC representatives said that the relevant regulator would be the one from the state where the beneficiary resided. However, two other states’ officials said it would be the state where the original group life insurance policy was issued, and yet another state regulator, as well as officials from the National Organization of Life and Health Insurance Guaranty Associations, said it would be the state where the group life insurer was domiciled. A representative from a life insurance industry association with whom we spoke said that the appropriate regulator could be the one from the state where the insurance contract was established, where the beneficiary resided, or both. Without clarity on which state insurance regulator has jurisdiction over an RAA held by a FEGLI beneficiary, or which state guaranty fund might apply, beneficiaries may not know where to turn to find answers to RAA-related questions on the extent of the protections applicable to their RAA. For example, the underlying FEGLI policyholder (the federal government) is located in Washington, D.C.; the RAA provider (MetLife) is domiciled in New York; and federal employees and their beneficiaries can live anywhere in the United States. Identifying which state has jurisdiction over a MetLife RAA contract, and which state guaranty fund applies, could be difficult, especially since the state regulators themselves might not agree on the proper jurisdiction. As noted earlier, state guaranty funds provide varying levels of protection. According to OPM officials, determining the appropriate state regulator for RAAs is technically beyond their purview because their involvement ends once the RAA is funded with the FEGLI claim payment. However, OPM does work with MetLife to create the disclosures that provide beneficiaries with information that helps them determine whether they wish to select an RAA as their settlement option.
Information concerning the relevant state regulator and guaranty fund would be important to have in deciding whether to choose an RAA because it could determine the amount of protection available to the beneficiary. It could also alert the beneficiary to potential challenges in seeking regulatory assistance if, for example, the beneficiary is located in one state but the relevant regulator is located in a different state. In contrast to some private insurers with whom we spoke, OPM does not consider any of the income MetLife earns on FEGLI RAAs when determining premium rates for FEGLI coverage. Some insurance company representatives we interviewed said that they considered all investment income, including income earned on RAAs, when determining the premium rates for their life insurance policies, and that this income typically had the effect of reducing the premiums insurers charge or defraying other related costs. While officials from two companies with whom we spoke said that they considered RAA earnings when pricing their overall group life insurance plans, other insurers suggested that investment income from their RAA accounts was too small to affect their rate-setting calculations. Because OPM contracts with MetLife for settlement services, RAAs funded with FEGLI claims payments are established and operated by MetLife. As a result, MetLife retains investment gains and losses earned on these accounts, as do most private insurers. According to OPM officials, because OPM considers a FEGLI claim to be fully paid when a MetLife RAA is established, OPM has no connection to the RAA accounts or any of their funds. In addition, OPM does not track any data related to MetLife’s FEGLI-based RAAs. However, these RAAs are established with FEGLI claims payments, and by not considering the income MetLife earns on the accounts, OPM may be missing an opportunity to offset program expenses and potentially reduce premium rates.
While MetLife officials said that they could not specifically determine the amount of investment gains and losses on FEGLI-funded RAAs, they did say that as of December 31, 2010, RAAs maintained for FEGLI beneficiaries totaled approximately $3.5 billion. According to MetLife’s 2010 annual financial statements, the company had a total of approximately $12 billion in FEGLI and non-FEGLI RAA accounts at year end and had earned approximately $267 million in net investment income on those accounts. The same officials also said that the company must meet costs and expenses associated with these RAAs, including the payment of guaranteed interest rates to FEGLI beneficiaries, and that by guaranteeing the minimum rates previously noted, MetLife has assumed financial risk. The same officials noted that these guaranteed rates are higher than most rates of return beneficiaries could currently receive through a bank or other liquid investment vehicle. In addition, MetLife would pay and has paid interest at a higher rate than the guaranteed minimum rates in more favorable interest rate environments, and according to officials, approximately 40 percent of FEGLI RAAs have been open for 5 or more years. This higher retention percentage, they said, may be partially due to advantageous rates MetLife is paying those beneficiaries. In contrast, several other life insurers with whom we spoke said that RAAs are often a short-term option for beneficiaries, and that beneficiaries typically close their RAAs within 1 to 2 years. Because life insurance is an important purchase for those seeking to protect their dependents, prospective buyers need to fully understand the details of the policy they are considering. Although OPM provides significant information on its life insurance program, some information that could influence federal employees’ decision to buy FEGLI coverage is lacking. 
First, although FEGLI offers federal employees postretirement coverage, a benefit not commonly found in private sector group plans, FEGLI disclosures do not explain the effect of this benefit on premium levels, particularly the fact that federal employees begin prepaying for this coverage as part of their Basic insurance when they begin their employment. As a result, employees may be unaware that their premiums may be higher than those of group plans that do not offer such coverage. Second, the disclosures do not discuss FEGLI’s level-premium and composite rate structure for Basic coverage. Because these features can make FEGLI premiums look more expensive than private individual coverage without them, especially to younger and healthier individuals, some employees might conclude that FEGLI coverage is not a beneficial choice and pass up a potentially valuable benefit. Conversely, someone planning to work for the federal government for a short period of time might purchase FEGLI coverage without realizing that the coverage includes a retirement benefit they may not receive and will likely cost more than a group policy without such a benefit. Since FEGLI’s inception, OPM has sought to provide life insurance benefits that meet federal employees’ needs at reasonable costs. While OPM has conducted some periodic comparisons of FEGLI benefits and premiums with those found in other group life plans, without formal, documented processes for these comparisons, OPM risks that FEGLI may not meet employees’ needs, that its premiums may exceed prices charged for similar benefits in the private sector, or even that it may be offering features that it does not need to offer to be competitive with private sector group plans. For example, many private sector employers no longer offer postretirement benefits in their group life plans because of the cost. 
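As a rough illustration of how a level-premium structure can make coverage look expensive to younger enrollees, consider the following simplified Python sketch. The age-band costs are invented for illustration only; actual FEGLI rates are set actuarially with interest and mortality assumptions that this sketch ignores.

```python
# Invented annual age-band costs for illustration only (not FEGLI rates).
annual_cost_by_age_band = {30: 100, 40: 200, 50: 400, 60: 800}

# Under a level-premium structure, every enrollee pays the same rate;
# here, a simple unweighted, interest-free average of the band costs.
level_premium = sum(annual_cost_by_age_band.values()) / len(annual_cost_by_age_band)
print(level_premium)  # 375.0

for age, cost in annual_cost_by_age_band.items():
    # Positive difference: the enrollee prepays above the age-based cost;
    # negative difference: the enrollee pays below it later in life.
    print(age, level_premium - cost)
```

In this toy example, the 30-year-old pays $275 above the age-based cost while the 60-year-old pays $425 below it, which mirrors the report's point: a level premium front-loads costs for younger employees and can make FEGLI look more expensive than individual coverage priced strictly by age.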
To help ensure that FEGLI premium rates are appropriate, OPM officials said that OPM actuaries annually review and assess FEGLI’s claims experience across different plans and age groups, recommending rate changes when they believe such changes are necessary. However, OPM lacks documented processes for making such recommendations and documenting management’s disposition of any rate change recommendations. Without a clear and consistent process for making, reviewing, and implementing rate change recommendations, OPM risks that needed changes may not be made and that the premiums charged to federal employees may not reflect the coverage they are receiving. FEGLI now offers two settlement options—a lump-sum check payment or an RAA—and it is important for beneficiaries to be able to choose the option that best meets their needs and to know where to turn to resolve any issues they might have. While RAAs may offer benefits that some beneficiaries appreciate, such as certain flexibilities and a guaranteed interest rate, they also have certain characteristics that need to be fully disclosed. OPM has recently revised its disclosures to beneficiaries to provide more information on RAAs, but the disclosures still do not contain some important information. For instance, they do not explicitly state that RAAs involve a new contract between beneficiaries and MetLife that is regulated by states rather than the federal government and that involves state-based protections with certain limitations. As a result, FEGLI beneficiaries may be unaware that new contractual terms and conditions govern their RAAs. They also may not fully understand how their RAAs are protected and what the limitations of that protection might be. 
Finally, the disclosures do not provide the information that beneficiaries need to find the proper regulator should they have questions about their accounts—a problem that is complicated by the fact that the regulators themselves may disagree over which one has jurisdiction. To better ensure that federal employees have all the information they need when deciding whether to purchase life insurance through FEGLI, we recommend that the Director of the Office of Personnel Management take steps to ensure that FEGLI disclosures include complete and accurate information on key benefits and features, including the program’s postretirement coverage, composite rates, and level-premium structure. To help ensure that FEGLI provides relevant benefits that meet the needs of federal employees at a reasonable and appropriate cost, we recommend that the Director of the Office of Personnel Management develop and implement a more structured process for comparing FEGLI with private sector group life insurance plans and for documenting OPM actuaries’ rate recommendations and any management decisions concerning those recommendations. To help ensure that FEGLI beneficiaries are provided with information on all relevant aspects of selecting an RAA as a FEGLI settlement option, we recommend that the Director of the Office of Personnel Management include more complete information on financial protections and regulatory oversight in program disclosures, working as necessary with MetLife and NAIC to determine the appropriate state regulator for beneficiaries and their RAAs. On October 7, 2011, we provided a draft of the report to OPM for comment. On October 28, 2011, OPM provided written comments, which are reproduced in full in appendix III. OPM concurred with the recommendations in the report and also provided technical comments, which we incorporated as appropriate.
OPM concurred with our first recommendation that it take steps to ensure FEGLI disclosures include complete and accurate information on FEGLI’s key benefits and features, including postretirement coverage, composite rates, and level-premium structure. Specifically, OPM stated that it strives for FEGLI transparency and will take steps to provide more information on key FEGLI features to ensure federal employees have the information they need to make an informed benefit decision. OPM also concurred with our second recommendation that OPM develop and implement a more structured process for comparing FEGLI with private sector group life insurance plans and for documenting OPM actuaries’ rate recommendations and any management decisions concerning those recommendations. Specifically, OPM stated that it believes that benchmarking federal benefits programs, including FEGLI, with other employer-provided benefits is essential to ensuring that the federal government can recruit, retain, and honor a world-class workforce. Finally, OPM concurred with our third recommendation that OPM include more complete information on financial protections and regulatory oversight, working as necessary with MetLife and NAIC to determine the appropriate state regulator for beneficiaries and their RAAs. Specifically, OPM stated that it has updated the FEGLI claims form and website to provide more information about the choice for FEGLI beneficiaries to receive a lump-sum check or RAA and will ensure that the best information is available to assist beneficiaries in their decision-making process. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Director of the U.S. Office of Personnel Management, and the Chief Executive Officer of the National Association of Insurance Commissioners. 
In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7022 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV. To describe and evaluate the Federal Employees’ Group Life Insurance (FEGLI) program’s key operational and financial components, we examined FEGLI’s authorizing statute and associated regulations, including the Life Insurance Federal Acquisition Regulation (LIFAR). In addition, we reviewed the program’s key policy documents, including the FEGLI Handbook, FEGLI Program Booklet for Federal Employees, FEGLI website, and the contract between the Office of Personnel Management (OPM) and the Metropolitan Life Insurance Company (MetLife). We focused on how FEGLI provides life insurance coverage to federal employees and their families and the cost of that insurance to federal employees and their respective agencies. Interviews with OPM and MetLife officials provided additional information on FEGLI operations, including the program’s coverage options; how the government, MetLife, and reinsurers bear insurance risk; and how the Employees’ Life Insurance Fund—FEGLI’s main financial fund—is used for paying life insurance claims and other program costs. We reviewed data from OPM annual financial reports and performance and accountability reports from 2000 to 2010 to analyze FEGLI’s assets and liabilities. In addition, we reviewed information in the U.S. Budget on FEGLI from fiscal years 2002 to 2012 to analyze FEGLI premiums, claims payments, and investment income. We also reviewed MetLife financial statements to determine the total dollar amount of MetLife’s retained asset accounts (RAA) and the total investment income MetLife derives from its RAA investments. 
Because these are audited documents and financial statements, with unqualified audit opinions, we found data from these documents and summary statistics from OPM and MetLife to be reliable for the purposes of this report. In addition, to determine how FEGLI’s structure and operations compare to large private sector group life insurance plans, we compared FEGLI to plans offered by six large private sector group life insurers. Our comparison focused on insurance coverage options, processes for determining premiums, available settlement options, and methods for establishing capital or surplus levels. We selected these insurers based on various insurer characteristics including their group life insurance market share, number of group life policies and certificates issued, and whether or not they provided group life insurance to federal employees. We also interviewed officials from the National Association of Insurance Commissioners (NAIC) and the American Council of Life Insurers (ACLI) to gain their perspective on group life insurance plans, finances, and operations. For additional information on how private sector group life plans are structured and the insurance they offer, we met with insurance regulators and benefits administrators from the states of California, Florida, New York, North Carolina, and Maryland. We selected this sample of states because it is geographically diverse, includes states of domicile for several large insurance companies that sell a significant number of the industry’s group life insurance policies, has a large number of federal employees, and contains some states that have RAA regulations and others that do not. In addition, we met with representatives from two private companies with experience in insurance brokerage, and human capital and benefits consulting. 
To describe and evaluate OPM’s oversight of the FEGLI program, we (1) reviewed FEGLI’s authorizing statute and regulations, including the LIFAR, (2) reviewed OPM’s program monitoring, reporting, and other oversight activities, (3) interviewed OPM and MetLife officials, and (4) met with industry association representatives. We focused on the steps OPM takes to periodically monitor and review FEGLI’s financial condition, and on OPM processes for overseeing MetLife functions for receiving, adjudicating, and paying claims to FEGLI beneficiaries. In addition, to identify possible regulatory and consumer protection issues with group life insurance plans and settlement vehicles, we met with representatives from NAIC, ACLI, and a consumer advocate from the Center for Economic Justice. To determine how states generally regulate group life insurance plans, we met with insurance regulators from the five states described earlier and compared FEGLI oversight with state regulation of private group life insurers and identified similarities and differences. To describe and evaluate the role of RAAs in FEGLI’s settlement process, we examined key OPM disclosures, including the FEGLI Handbook, FEGLI Program Booklet for Federal Employees, FEGLI website, and Strategic Plan, 2010-2015, and we interviewed OPM officials. To understand the kinds of information beneficiaries receive on life insurance settlement processes, we also reviewed MetLife’s Welcome Kit for RAAs and interviewed MetLife officials. We focused on (1) what RAAs are, how they function, and how they are funded, (2) the kinds of RAA disclosures OPM and MetLife provide and how clearly they help beneficiaries understand their use, and (3) what RAA protections apply to FEGLI beneficiaries. In addition, we examined how RAAs are regulated by focusing on the activities and processes OPM and state regulators use to oversee these accounts. 
With respect to state RAA oversight, and to determine what kinds of regulatory and consumer protection requirements states have for insurance companies that offer RAAs, we chose a small number of states as described earlier, some of which have RAA-specific regulations and others of which do not. In addition, we compared FEGLI’s use of RAAs to their use in the private sector and looked for any similarities, differences, and emerging issues. We also looked to the insurance industry for any applicable best practices with respect to RAAs that might be used to improve the FEGLI program. To better understand the protections associated with RAAs, we contacted officials from the Federal Deposit Insurance Corporation, state regulators from our sample, and officials from the National Organization of Life and Health Insurance Guaranty Associations and the Center for Economic Justice. We also reviewed information on RAA guidance from the National Conference of Insurance Legislators. We conducted this performance audit from September 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information on the dollar amount of premiums the FEGLI program has collected from enrolled federal employees and their respective agencies. It also shows the dollar amount of claims the program has paid to beneficiaries of federal employees. In addition, the figure shows the dollar amount of interest income derived from investing FEGLI Fund assets in U.S. Treasury securities. From 2000 to 2010, the dollar amounts of premiums collected and claims paid have grown, while the dollar amount of interest income has declined slightly.
In addition to the contact named above, Patrick Ward (Assistant Director), Joe Applebaum, Jan Bauer, Emily Chalmers, Marc Molino, Alan Rozzi, Steve Ruszczyk, Mel Thomas, and Frank Todisco made key contributions to this report.

The Federal Employees’ Group Life Insurance program (FEGLI), administered by the Office of Personnel Management (OPM), insures over 4 million federal employees and annuitants in the event of an enrollee’s death. As a result, it is important that the program is clearly explained and properly overseen. However, some aspects of FEGLI, such as program disclosures and the use of retained asset accounts (RAA)--financial accounts used to settle life insurance claims--have raised questions about the program’s operations. GAO was asked to describe and evaluate (1) the FEGLI program’s structure and operations, (2) OPM’s administration and oversight of the program, and (3) the use of RAAs in FEGLI claims payments. To address these objectives, GAO reviewed FEGLI law and regulations, interviewed OPM, Metropolitan Life Insurance Company (MetLife), and state insurance officials, and met with insurance industry experts. OPM, by directing the funding of the Employees’ Life Insurance Fund, has effectively allowed the FEGLI program to assume the risk of loss, while MetLife provides administrative services for the program. FEGLI has some insurance coverage features that most private sector group life plans do not, but a lack of disclosure in certain areas may make it difficult for employees to make fully informed decisions about buying coverage. Generally, with private group plans, the employer pays the full premium for a set amount of basic coverage, but the statute that created FEGLI requires that enrolled employees contribute two-thirds of the premium for Basic coverage.
In addition, FEGLI premiums include the cost of a portion of retirement coverage, a feature generally not found in private sector alternatives, and which can make FEGLI coverage more costly than those alternatives. Further, for Basic coverage, FEGLI premiums are level over employees' working lives, so that early on premiums may be higher than the actual cost of coverage, while later they may be lower. This feature can make FEGLI coverage appear to be more costly than private individual plans for certain employees. However, the materials that FEGLI provides to employees do not disclose either the retirement coverage costs or the level premiums. Employees, particularly those who might leave government service or stop participating in FEGLI before realizing the benefits of these features, may find such disclosures important when deciding whether to purchase the insurance. OPM oversees FEGLI's provision of life insurance, but certain processes for reviewing program benefits and premiums could be improved. OPM administers basic FEGLI functions such as determining and collecting premiums, publishing program regulations, and overseeing the claims payment processes of MetLife, the insurer contracted to provide claims services. Because the program was intended to provide a low-cost benefit to federal employees, OPM has periodically conducted informal comparisons of FEGLI costs and benefits to those of private group life plans. In addition, to better ensure that the program charges appropriate premium rates, OPM actuaries conduct annual reviews and may recommend rate changes. However, OPM does not have documented processes for conducting its comparisons or for documenting any recommended rate changes. The lack of documented processes in both areas creates a risk that FEGLI benefits may not be meeting the needs of federal employees and could be priced at inappropriate rates. 
From the mid-1990s until early 2011, RAAs were the default settlement option for many FEGLI beneficiaries. While RAAs offer some benefits to FEGLI beneficiaries, OPM does not provide beneficiaries with some important information on RAA operations and protections. According to OPM and some industry officials, RAAs can reduce administrative costs, provide guaranteed interest rates, and allow beneficiaries time to decide how to use settlement funds. But other industry participants and a federal regulator said that beneficiaries might not be fully aware of their settlement options or that RAAs are not insured by the Federal Deposit Insurance Corporation. OPM has recently improved FEGLI disclosures for RAAs, and RAAs are no longer the default settlement option. However, the disclosures still lack information on how the accounts are established and regulated, and how certain protections differ across states. Without this information, beneficiaries may not be able to make fully informed decisions when choosing a settlement option for their FEGLI claims payment. GAO recommends that OPM (1) improve disclosures on important FEGLI features, (2) develop and implement a more structured process for reviewing the FEGLI program and premium rates, and document review outcomes, and (3) improve disclosures on RAA protections and regulation. OPM concurred with these recommendations.
The IPO process generally consists of three phases: (1) developing the information and documents for submission to SEC, (2) processing these documents through SEC, and (3) marketing and selling the newly public shares. Before the initial sale of their stock is permitted, companies are required to register the IPO with SEC. Companies that want the SEC to declare the IPO effective must first complete a registration statement. The registration statement is to contain basic required information about the offering, such as the name of the company, the number of shares to be publicly traded, and the offer price. The company then submits the registration statement and a preliminary prospectus to SEC. The primary purpose of the prospectus is to inform the investing public of all material information about the company and the security being offered for sale. For this reason, SEC rules require companies to disclose in the prospectus detailed information about the company. Specifically, this detailed information is to include a description of the company’s business and the identity and experience of its management, the risk factors in the company’s operating history and the nature of its business, the names of its current major stockholders, and the company’s financial statements. In addition, the company is required to disclose in the prospectus information on its underwriting firm, including the members of the underwriting syndicate, any relationships between the underwriting firm and the company, and whether the underwriting firm has had less than 3 years of broker-dealer experience. In reviewing the preliminary prospectus, SEC staff are to assess whether the prospectus provides all material information about the issuer, underwriter, and security being offered for sale. 
SEC has used a working definition, founded on court decisions, that considers information material when there is a substantial likelihood that a reasonable investor would consider it important in determining whether to purchase a security. In their assessment, SEC staff are to use public and nonpublic sources of information to identify areas in the preliminary prospectus that they believe to be incomplete or inaccurate. SEC staff are also to determine whether the financial statements in the preliminary prospectus conform to generally accepted accounting principles. On the basis of its assessment, SEC staff may request that the company revise the preliminary prospectus. When SEC requires no further revisions, the registration process is complete, and IPO securities can be sold to investors. The IPO process is described more fully in appendix I. Underwriters play an important role throughout the IPO process. Companies typically use underwriters, along with lawyers and accountants, to assist them in registering the IPO with SEC. Companies also rely on underwriters to help market and sell the IPO to the investment community. Underwriters can sell IPO shares by either serving as an agent for the company or as owner of the shares. As an agent, the underwriter assumes no financial risk for the sale of the IPO shares since the company retains ownership of the shares. Alternatively, the underwriter can purchase some or all of the newly issued shares to resell to other investors at a maximum price known as the “offer price.” In this case, the underwriter, as the new owner of the IPO shares, assumes financial risk for the issuance of the IPO shares. Underwriters also can be involved in market stabilization of the price of the security during the sales period and the period following the cessation of sales efforts in the offering. 
While underwriters play an important role in a smoothly functioning IPO process, they can adversely affect an investor's investment risk through certain activities, such as fraud and market manipulation, which are illegal, and favoritism, which is not. For example, an underwriter can profit from engaging in prohibited practices, such as "free-riding" or "withholding." An underwriter engaging in free-riding purchases securities with the intent of not paying for them or with the intent of paying for them only if the price goes up by the settlement date. The underwriter can then sell the securities at a price higher than the purchase price, and the sales proceeds can be used to cover the purchase obligation. An underwriter engaging in withholding can profit directly or indirectly from a price rise on the sale of IPO shares by withholding a certain number of shares from the market until the market price rises above the offer price. Withholding a sufficiently large number of shares could cause the price to rise quickly in the period following the cessation of the sales offering. Furthermore, underwriters could give certain investors an economic advantage by favoring those investors over others in sales of IPO shares at the initial offer price. Many academic studies have found that investors can profit from buying IPO shares at the offer price. According to a March 17, 1994, study by Prudential Securities, a significant difference exists between the performance achieved by investors able to buy at the offer price and the performance achieved by investors who buy after the first day of trading. Both SEC and the National Association of Securities Dealers (NASD), the self-regulatory body for broker-dealers, conduct periodic examinations of broker-dealers' underwriting firms to detect rule violations. When violations are found, the regulators can impose a variety of disciplinary actions.
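The economics behind the withholding prohibition can be made concrete with a brief numeric sketch. All figures below (the offer price, share count, and aftermarket price) are hypothetical values invented solely for illustration; the report does not quantify any actual case.

```python
# Hypothetical illustration of why NASD prohibits "withholding": an
# underwriter holds back IPO shares and sells them only after the market
# price rises above the offer price. All figures here are invented.

offer_price = 10.00       # price at which IPO shares should be sold to the public
withheld_shares = 50_000  # shares improperly held back from distribution
market_price = 13.50      # aftermarket price after demand pushes shares up

# Profit captured by selling the withheld shares into the aftermarket
# instead of distributing them at the offer price.
improper_profit = withheld_shares * (market_price - offer_price)
print(f"Improper profit from withholding: ${improper_profit:,.2f}")
# prints: Improper profit from withholding: $175,000.00
```

The same arithmetic underlies free-riding: if the price rises by the settlement date, the sales proceeds more than cover the original purchase obligation.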
For example, minor violations may result in a letter of caution to the violator; more serious violations can warrant a formal disciplinary action, such as a suspension and/or a monetary fine. Individual brokers and their firms are required to report any formal disciplinary actions taken against them to the Central Registration Depository (CRD). These disciplinary actions for violations related to the securities business may be imposed by SEC, state regulators, self-regulatory organizations (SRO), the courts, or employing firms. To obtain information on the factors that influence underwriting firms’ allocation of IPO shares between institutional and individual investors, we interviewed SEC officials responsible for market regulation to determine what rules, if any, govern the allocation of IPO shares. We also reviewed NASD rules to identify those affecting the IPO allocations process. We also randomly sampled 50 of the 952 IPOs that SEC processed from January 1, 1993, to June 30, 1994. For each of these offerings, we identified the underwriting firm from an SEC listing of IPOs. Thirty-four underwriting firms were associated with these 50 IPOs. Of these 34 firms, we judgmentally selected 10 firms to obtain a better understanding of the IPO process and the factors that influence the distribution of IPO shares. Of the 10 firms selected, 6 were large underwriting firms that were each responsible for over $1 billion in IPOs, and 4 were smaller underwriting firms that were each responsible for under $1 billion in IPOs between January 1993 and June 1994. To obtain a perspective on underwriting practices from other than New York firms, 2 of the 10 firms were located outside of New York. One of the two underwriting firms was located in Atlanta, and the other was located in Baltimore. For each of the underwriting firms, we interviewed senior officials responsible for selling IPO shares to investors. 
We visited officials at the New York and Baltimore underwriting firms; however, we interviewed officials at the Atlanta firm over the telephone. During these interviews, we discussed the IPO allocation process, the pricing of IPOs, and the disclosure of information about underwriters’ disciplinary histories. To determine disclosure requirements for underwriting firms’ disciplinary histories, we first identified existing disclosure requirements that could pertain to underwriting firms. We discussed these requirements with SEC officials responsible for processing IPO registrations in Washington, D.C., and New York. We also discussed these requirements with officials from the underwriting firms during our conversations on the IPO allocation process. We obtained the disciplinary histories of the 34 underwriting firms in our sample from the CRD. Specifically, we obtained information on (1) formal disciplinary actions taken against the 34 underwriting firms for the 5-year period preceding the issuance of the IPO and (2) specific securities violation(s) that gave rise to such actions. We obtained and reviewed the prospectus for each of the 50 IPOs in our sample to determine what types of information were disclosed. In reviewing the prospectus, we determined that information concerning the underwriter’s disciplinary history was not disclosed. We discussed the disciplinary actions and related violations with SEC officials and the 10 underwriting firms’ officials to obtain their views about the benefits from and any concerns about disclosure of such information in the prospectus. Our work was performed in New York, Baltimore, and Washington, D.C., between May 1994 and August 1995 in accordance with generally accepted government auditing standards. We obtained written comments from SEC on a draft of this report, and we have reprinted their letter in appendix II. SEC’s comments are summarized and evaluated at the end of this report. 
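The sampling step described above can be sketched in a few lines. The IPO identifiers and the fixed seed below are placeholders of ours; the report states only that 50 of the 952 IPOs were randomly sampled and does not describe the mechanics of the draw.

```python
import random

# Sketch of a simple random sample of 50 IPOs drawn from the 952 that
# SEC processed between January 1, 1993, and June 30, 1994. The
# identifiers and seed are stand-ins, not GAO's actual procedure.
population = [f"IPO-{i:03d}" for i in range(1, 953)]  # 952 offerings
rng = random.Random(1994)              # fixed seed so the draw is reproducible
sample = rng.sample(population, k=50)  # sampling without replacement

print(len(population), len(sample), len(set(sample)))  # prints: 952 50 50
```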
In marketing and selling IPO shares, underwriting firms primarily target institutional investors rather than individual investors. According to officials we interviewed at 10 underwriting firms, 9 of the firms sold the largest portion of IPO shares to institutional investors, such as pension funds, mutual funds, and money managers. Officials of the nine underwriting firms estimated that sales to institutional investors ranged from 60 percent, at a large underwriting firm with both institutional and individual clients, to 90 percent, at a small underwriting firm with few individual clients. An official at the 10th underwriting firm said his firm sold primarily to individual investors. This underwriting firm was unlike the other nine in that it had few institutional clients. Underwriters said they allocated their IPO shares predominantly to institutional investors because of economic factors and their business judgment that institutional investors are better suited for IPOs than individual investors. According to the underwriters we interviewed, they preferred to allocate IPO shares to institutional investors because they believed that these investors are better able than individual investors to buy large blocks of IPO shares, assume financial risk, and hold the investment for the long term. Officials at nine underwriting firms said they sold largely to institutional investors because these investors had the financial resources to purchase large blocks of stock. This practice was important to these officials because they were concerned that unsold shares could be a source of financial losses should the market price fall below the offer price.
Furthermore, these officials believed that IPOs are more suitable for large institutional investors than for individual investors because institutional investors are more able to assume the risk of declining share values than individual investors who may not have the financial resources to hold shares in an IPO when share values decline. Some underwriting firm officials expressed concern that individual investors may be more likely to take a quick profit by selling their stock within the first days or weeks of the offering when the price of the IPO shares may be at its highest. Underwriting firm officials cited the following factors as affecting their decision to allocate IPO shares to individual investors:

- The underwriter has a high percentage of individual clients. Underwriting firms with a high percentage of individual investor clients were more likely to allocate a portion of the IPO shares to these investors. An official at one underwriting firm we interviewed estimated that his firm sold 80 percent of its IPO shares primarily to individual investors because its client base primarily included individual investors. Officials of another underwriting firm told us that even if there were sufficient demand to sell an entire IPO to institutional investors, it was company practice to allocate a portion to its individual investor clients. The officials explained that they had adopted this practice to satisfy the demands of their individual investors.

- Investor recognition of the company and industry may stimulate interest in an IPO. Pressure from individual investors can cause underwriters to allocate IPO shares to these investors. Officials at all of the underwriting firms with whom we spoke told us that they expect greater individual investor interest in an IPO when the company or the company's product or industry is widely recognized and little individual investor interest when the company is not well known. For example, at one underwriting firm an official told us that the IPO of a popular retail gourmet coffee establishment generated significant individual investor demand. Although the entire IPO could have been sold to institutional investors, the underwriting firm designated a portion of the offering to individual investors to maintain their goodwill and ensure they remain as clients. The same official told us of another IPO issued by a company with limited individual investor recognition. This IPO involved a freight consolidator company that was known only to a small number of institutional investors. Rather than educate individual investors and improve their knowledge of the company, the underwriting firm sold the entire IPO to the few institutional investors.

- There is insufficient demand for the IPO from institutional investors. Underwriters may target individual investors when there is insufficient institutional demand for the IPO. An official of an underwriting firm, who usually sold IPOs exclusively to institutional investors, told us of a situation that forced the firm to sell shares to individual investors. In marketing the IPO, officials at the underwriting firm determined that institutional investor interest was insufficient to sell the entire issue. (According to the officials, the issue's profit potential was too low.) To locate purchasers for the remaining shares, the underwriting firm extended its marketing efforts to individual investors. These efforts were successful and enabled the underwriting firm to sell the entire IPO to a combination of institutional and individual investors.

Under existing SEC and NASD rules, underwriters generally have wide latitude in deciding how best to market and allocate IPO shares. Except for rules governing fraud and manipulation of securities offerings, SEC rules do not address the allocation of IPO shares.
NASD has an interpretive rule prohibiting free-riding, withholding, and sales to certain insiders under certain market conditions. Officials from underwriting firms told us that they discouraged these practices, but enforcing such policies, especially among syndication partners, can be difficult. According to an SEC official, SEC has received a number of complaints from individual investors about their lack of access to the IPO market. In response to these complaints and press articles about sales of IPO shares to insiders by underwriters, SEC conducted a limited study of the IPO allocation process in 1994. The purpose of the study, according to SEC officials, was to determine whether firms had a reasonable basis for allocating shares. SEC officials told us they interviewed underwriters as part of their study and discussed the allocation process. On the basis of these interviews, SEC officials observed that underwriters' allocation practices generally reflected the companies' clientele; therefore, if the company dealt primarily with institutional investors, most of its IPOs were generally made available to institutional investors. However, if underwriters had a substantial retail client base, they were more likely to make IPOs available to individual investors. Thus, in the SEC officials' views, these practices appeared to be based on reasonable business judgment. SEC officials also observed that institutional investors have the financial resources to buy more shares and handle more risk, which is important because IPOs involve companies with no previous history as publicly traded firms. In addition, institutional investors are often better able to hold investments for the long term. Furthermore, the underwriters SEC interviewed pointed out that traditional distinctions between individual investors and institutional investors have become blurred in today's market environment.
Individual investors have invested substantial amounts in institutional investment entities, such as pension and mutual funds, and can gain access indirectly to the IPO market by investing money in these entities. According to an SEC official, SEC has chosen, thus far, not to address the IPO allocation process in rulemaking. Among the reasons SEC cited for not addressing this issue were the complexity of the issues involved and the difficulty of crafting rules that would be reasonable and enforceable. Securities regulation is based on the concept of full and fair disclosure. The assumption is that investors will be able to make a more rational and informed evaluation of the relative risk and reward of a particular investment if they have free and equal access to information about that investment. Rules under the Securities Act of 1933 and the Securities Exchange Act of 1934 require the issuer and the underwriter to take reasonable steps to make a preliminary prospectus available to investors who have expressed an interest in purchasing the security. The prospectus is to contain material information about the company issuing the security. The prospectus is also to contain material information concerning the offering and firms that participate in the offering. SEC rules specifically require a company registering an IPO to report, for the 5 years preceding the issuance of the IPO, information on the criminal and disciplinary histories of its officers and directors that is material to an evaluation of the individuals’ ability and integrity. However, SEC rules currently do not specifically require that companies report similar information about firms underwriting the offering, even though underwriters have important roles throughout the IPO process and could affect an investor’s investment risk by engaging in prohibited activities, such as manipulating the price of IPO shares. 
SEC has used its more general authority under the Securities Act of 1933 to require additional disclosure from certain underwriters who had Commission enforcement proceedings against them. These proceedings generally involved cases in which the companies were either in financial trouble or had been involved in pervasive fraud. Item 401 of SEC Regulation S-K provides companies specific guidance on what information the prospectus must contain about the criminal and disciplinary histories of the companies’ officers and directors. Information that companies are to report includes bankruptcy filings, criminal convictions, pending criminal actions, civil judgments, and SEC disciplinary actions. While federal securities laws generally require companies issuing securities to disclose all material information about the underwriter participating in the issuance, Regulation S-K does not provide any specific guidance on what aspects of an underwriter’s disciplinary history are material. The disclosure in the prospectus of information about an underwriter’s disciplinary history could help investors more fully assess the underwriter’s ability and integrity as well as the riskiness of investing in the IPO. Our search of the CRD found that 13 of the 34 underwriting firms we sampled had 25 formal disciplinary actions, collectively, that related to past underwriting activities for the 5-year period before the issuance of the IPO. None of these actions was disclosed in the prospectuses we reviewed. Of the 13 underwriting firms, 1 had 5 violations, 2 had 4 violations, 2 had 2 violations, and 8 had 1 violation. A frequent violation was of the NASD rule that prohibits underwriting firms from withholding IPO shares from public distribution if the market price rises above the offer price. This rule was designed to prevent underwriting firms, and others associated with the offering, from directly or indirectly profiting from the price rise. 
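The tally reported above can be cross-checked directly from the per-firm counts; this is simply a verification of the report's own figures, not new data.

```python
# Verifying the counts above: 13 of the 34 sampled underwriting firms
# had 25 formal disciplinary actions in total during the 5-year period
# before issuance of the IPO.
firms_by_action_count = {5: 1, 4: 2, 2: 2, 1: 8}  # actions per firm -> firms

total_firms = sum(firms_by_action_count.values())
total_actions = sum(a * n for a, n in firms_by_action_count.items())
print(total_firms, total_actions)  # prints: 13 25
```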
Four of the 13 underwriting firms were cited for violating this rule. Another frequent violation concerned underwriting firms that improperly overstated the orders for and the sales of new debt issues of government-sponsored enterprises. By manipulating these statistics, the underwriting firms attempted to maintain or increase their share of future offerings. Nine of the 13 underwriting firms were among the 98 brokers that SEC, jointly with other federal regulatory organizations, fined in 1992 for violating these rules. Other violations in our sample for which formal actions were taken included the following:

- Three underwriting firms were cited for failing to finalize trades with other syndicate members within specified time periods.
- Two underwriting firms were cited for performing an inadequate search of the company's finances and business activities.
- One underwriting firm attempted to improperly influence the pricing of an impending public offering.
- One underwriting firm was cited for selling unregistered securities.

SEC officials and officials at all of the firms with whom we met agreed that the prospectus should disclose all of the information investors need to assess the risk of the offering. However, the officials did not believe that all information about an underwriter's disciplinary history should be disclosed in the prospectus. Instead, they believed that if there were to be requirements on disclosing information about the underwriter, that information should be material to an investor's decision on investing in the IPO and to an assessment of risk. While agreeing that investors have a right to know about an underwriter's disciplinary history, officials associated with two underwriting firms expressed reservations about disclosing such information in the prospectus.
Officials at the two firms said that SEC's Broker-Dealer Form already requires the reporting of extensive information about an underwriter's criminal and disciplinary history and that this information is available to the public. An official at another underwriting firm suggested that the prospectus, instead of disclosing information on an underwriter's disciplinary history, should inform investors that they could obtain this information by contacting NASD or state securities regulators. To help investors more fully assess their IPO investment risk, we believe it is important for companies to disclose in the prospectus material information on the criminal and disciplinary histories of their underwriter. Such a requirement would be similar to the existing requirement to report this information on officers and directors of the issuing companies, and compliance should not be a difficult or costly task. Underwriting firms should have ready access to detailed knowledge of all formal disciplinary actions that regulatory organizations have imposed against them. In addition, they could access the CRD through on-site computer terminals or telephone NASD to ensure the completeness of their information. We believe the suggestion that SEC's Broker-Dealer Form could serve as the disclosure vehicle to investors is not the preferred option, because the information reported on the Broker-Dealer Form is not as readily accessible to investors as the prospectus. The other suggestion was to use the prospectus as a vehicle to inform investors about the availability of information from NASD or state regulators. While this could serve as an additional source of information to investors, adding this information to the prospectus would mean that investors would have to contact NASD, request information about the underwriter, and scan the information to determine which violations and disciplinary actions are material to their investment decision.
In some cases, the information about the underwriter could be quite lengthy and difficult to interpret. Differences of opinion exist as to the types of disciplinary actions and violations that are material to an investor’s decisionmaking on the IPO. While the violations we identified were serious enough to warrant reporting to the CRD, some may not have been relevant and others may not have been serious enough to be considered material to an investor’s decisionmaking about an IPO. In the absence of specific guidance, many underwriting firms may conclude that their disciplinary history does not warrant disclosure to investors. For example, officials at the 10 underwriting firms we interviewed believed that some of the actions we identified were not serious enough to be considered material and ought not be reported. SEC could provide guidance clarifying what information relating to an underwriting firm’s disciplinary history is material and, therefore, required to be reported. Investors require material information on an IPO to make an informed investment decision. SEC rules require companies to disclose in the prospectus material information on their businesses, finances, operations, and officers and directors. SEC provides specific guidance on what information about the criminal and disciplinary histories of a company’s officers and directors must be disclosed. In contrast, SEC does not specifically require disclosure of material information about the underwriter’s disciplinary history in the prospectus. Because of the important role underwriters play in the IPO process, material information about an underwriter’s disciplinary history would be useful to investors. Having certain information, including formal disciplinary actions taken by SEC, state regulators, and SROs for securities violations arising from past underwriting activities, would enable investors to use these factors in their investment decisions and allow them to better assess the risks of the IPO. 
In the absence of a specific disclosure requirement, investors may not receive information that may be critical to their investment decisions. To improve disclosure to investors who purchase IPOs, we recommend that the SEC Chairman amend SEC Regulation S-K and IPO registration forms to require that companies disclose in the prospectus information about the underwriter’s disciplinary history that is material to assessing the risk of an IPO investment, and provide guidance on the type of information that is material. SEC should also incorporate a statement in the prospectus that tells investors how to obtain additional information from NASD on the underwriter’s disciplinary history. SEC staff provided written comments on a draft of this report, and these comments are included in appendix II. SEC agreed that investors need adequate information to make an informed investment decision and that among the necessary items of disclosure would be information relating to any material disciplinary actions taken against the principal underwriters. However, SEC does not believe there is a need for a specific requirement for material disclosures, similar to that involving directors and officers of the offering company. SEC believes more detailed information is necessary for company officers and directors because, unlike the underwriter, they have an ongoing role to play with the offering company. SEC also believes it provides for sufficient disclosure requirements for those underwriters with a history of disciplinary problems through its more general authority under the Securities Act of 1933. SEC officials showed us recent examples of prospectuses with extensive disclosures SEC had required from underwriters who had Commission enforcement proceedings against them. The staff did agree that, for those investors who desired more information, it may be appropriate to recommend a rule requiring prominent disclosure in prospectuses on how to obtain information from the CRD. 
The prospectuses with extensive disclosure SEC staff provided us generally involved cases in which the companies were either in deep financial trouble or had been involved in pervasive fraud. In those cases, SEC’s actions to require additional disclosure were probably appropriate. However, our concern is that this disclosure threshold may be too high. Many of the cases we cite in our report did not involve allegations of fraud, but we believe there are investors who would find the disciplinary information pertinent in making an informed investment decision. Thus, we still believe there should be an affirmative disclosure requirement for underwriters with disciplinary histories and that SEC should provide guidance on this disclosure. For the reasons cited by SEC, the disclosure requirements probably do not have to be as elaborate as those for officers and directors, so we modified our recommendation accordingly. As you know, the head of a federal agency is required by 31 U.S.C. 720 to submit a written statement of actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. We will provide copies of this report to the Senate Committee on Banking, Housing and Urban Affairs and its Subcommittee on Securities; the House Committee on Commerce and its Subcommittee on Telecommunications and Finance; and other interested parties and Members of Congress. Copies will be available to others upon request. This report was prepared under the direction of Helen H. Hsing, formerly Associate Director, Financial Institutions and Markets Issues. Other major contributors are listed in appendix III. If you have any questions about this report, please contact me on (202) 512-8678. 
The initial public offering (IPO) process consists of three phases: (1) developing the information and documents for submission to the Securities and Exchange Commission (SEC), (2) processing these documents through SEC, and (3) marketing and selling the newly public shares. Various alternative requirements apply under certain conditions; for example, different requirements apply for companies that meet SEC small business criteria. Corporations may have several motivations for offering their securities to the general public. For example, they may view the IPO process as a means to raise capital for expansion or special projects or to replace debt with equity. Another motivation may stem from the desire of existing shareholders to sell their holdings to the general investment community. Corporations usually use an underwriting firm to assist in preparing the documents and getting the registration statement declared effective. Underwriting firms may also assume some of the financial risks involved in selling the IPO. For example, in what are known as "firm commitment offerings," the underwriting firm agrees to purchase all of the IPO shares from the company. Purchasing all of the IPO shares subjects the underwriting firm to the risk that it may be unable to sell some or all of these shares. If the underwriting firm believes that it will be difficult to sell the new issue, it can reduce its risks by agreeing to a "best efforts offering." Under the best efforts offering, the underwriting firm does not commit itself to the purchase of the entire offering. In addition to the underwriting firm, lawyers and certified public accountants assist in preparing sections of the registration statement and prospectus. Each of these parties has special responsibilities for ensuring the accuracy and completeness of these documents.
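The financial risk that separates a firm commitment offering from a best efforts offering can be sketched numerically. The share count, prices, and per-share underwriting spread below are hypothetical values chosen only to make the mechanics concrete; the report does not cite actual figures.

```python
# Hypothetical firm commitment offering: the underwriter buys the entire
# issue from the company at a discount to the offer price and bears the
# loss on any shares it cannot resell. All figures are invented.
SHARES_ISSUED = 1_000_000
OFFER_PRICE = 12.00
SPREAD = 0.75                         # assumed per-share underwriting discount
PURCHASE_PRICE = OFFER_PRICE - SPREAD

def underwriter_result(shares_sold: int, unsold_resale_price: float) -> float:
    """Net gain or loss: sold shares go out at the offer price; unsold
    shares are later liquidated at whatever the market will bear."""
    unsold = SHARES_ISSUED - shares_sold
    revenue = shares_sold * OFFER_PRICE + unsold * unsold_resale_price
    return revenue - SHARES_ISSUED * PURCHASE_PRICE

print(underwriter_result(1_000_000, 0.0))  # fully sold: the spread is the gain
print(underwriter_result(700_000, 9.00))   # weak demand: unsold shares at a loss
```

Under a best efforts offering, by contrast, unsold shares are returned to the issuer, so the downside in the second scenario would fall on the company rather than the underwriting firm.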
The registration statement contains basic information about the offering, such as the name of the company, the number of shares to be publicly offered, and the offering price. The prospectus contains detailed information about the company, including a description of its business, the identity and experience of its management, the risk factors in the company's operating history and the nature of its business, and the current major stockholders. The prospectus also contains the company's financial statements. After their completion, the prospectus and registration statement are submitted to SEC for review. SEC neither approves nor disapproves the securities, nor does it verify the accuracy or adequacy of the information in these documents. However, SEC does identify areas for amplification, clarification, or supplementation on the basis of information from the prospectus, newspapers, and periodicals and on the basis of SEC staff knowledge of accounting rules and practices, industry trends, and regulatory requirements. SEC asks the company to respond to each of its areas of concern. After addressing these comments and making any appropriate revisions, the company resubmits the prospectus and registration statement to SEC. At this point, SEC may have a second set of comments that may require a second revision of the documents. This submission, review, and revision process is repeated until SEC has no further comments. The first version of the prospectus contains the approximate number of shares to be publicly offered. This version also contains the range of possible offering prices. The final prospectus, with the final offering price, is completed either the day before or the day of the start of public trading. When SEC no longer has any comments on the registration statement and prospectus, it notifies the company of the effective date of the offering.
On the effective date, the underwriting firm purchases the shares from the company and resells the shares to institutional and individual investors at the offering price. After the effective date, the investors are free to sell the shares at the market-determined price.

Underwriting firms use a variety of techniques to help them set the offer price. For example, they compare new companies' financial history and prospects to those of similar companies whose stock is already publicly traded. Underwriting firms also meet with investors to assess the extent of their interest in the offering. These meetings, often called "road shows," also give investors the opportunity to question management about the company's finances, products, and operations. The underwriting firms frequently set the offer price at a level somewhat below their estimate of the market price. The variance is intended to provide investors with an incentive for purchasing the IPO shares. Officials at underwriting firms told us that this variance may range from near 0 percent, when they expect the IPO to have high investor interest, to 25 percent, when less interest is expected.

Although federal law prohibits any sales of securities before the effective date, investors may furnish underwriting firms with "expressions of interest" in the offering. On the effective date, investors are asked to convert their expressions of interest into commitments to purchase. Investors who purchase securities may resell their securities after the registration is effective.

Bernard D. Rashes, Assistant Director; Gary Roemer, Evaluator-in-Charge; Philip F. Merryman, Evaluator
GAO addressed concerns about the initial public offerings (IPO) allocation process, focusing on: (1) the factors that influence underwriters to sell IPO shares to institutional investors and individual investors; (2) disclosure requirements concerning the history of disciplinary actions taken against an underwriter; and (3) Securities and Exchange Commission (SEC) rules governing the IPO market.
GAO found that: (1) most underwriters primarily sell IPO to institutional investors because of economic factors and their belief that these investors can buy larger blocks of IPO shares, hold their investments longer, and assume greater financial risk; (2) underwriters market IPO to individual investors when their firms have a high percentage of individual investor clients, the company is well known to individual investors, or institutional investors have little interest in the IPO; (3) SEC and the National Association of Securities Dealers (NASD) give underwriters wide latitude in marketing and allocating IPO shares among institutional and individual investors; (4) SEC does not require companies to disclose in their prospectus certain information about their underwriters' criminal or disciplinary histories except when market manipulation and fraud are involved; (5) SEC and self-regulatory organizations imposed 25 formal disciplinary actions for securities violations on 13 of 34 underwriting firms for the 5-year period prior to issuance of IPO; and (6) SEC could provide investors a better means of assessing risks associated with IPO if it required companies to disclose material information on underwriters' disciplinary histories in their prospectus.
To determine the extent to which the structure of the Promise Neighborhoods program aligns with program goals and how Education selected grantees, we reviewed relevant Federal Register notices, application guidance, and agency information on applicants for fiscal year 2011 and 2012 implementation grants. To determine how Education aligns Promise grant activities with other federal programs, we reviewed documentation on Education’s alignment efforts. To assess Education’s approach to evaluating the program, we reviewed its grant monitoring reports, performance measures, and guidance for data collection. To determine the extent to which Promise grants enabled collaboration at the local level, we used GAO’s prior work on enhancing collaboration in interagency groups as criteria. We compared the Promise grants’ collaboration approaches to certain successful approaches used by select interagency groups and reviewed implementation grantees’ application materials. To learn about grantees’ experiences with the program, we conducted a web-based survey of all planning and implementation grantees nationwide from late August to early November 2013. We received responses from all 48 grantees. We asked grantees to provide information on the application and peer review process, coordination of federal resources, collaboration with local organizations, and results of the planning grants. Because not all respondents answered every question, the number of grantees responding to any particular question will be noted throughout the report. In addition, we conducted site visits to 11 planning and implementation grantees. During these visits, we interviewed five planning grantees and six implementation grantees. Sites were selected based on several factors, such as the type of grant awarded, the location of grantees, and whether they were urban or rural. 
For all four objectives, we interviewed Education officials, technical assistance providers, and subject matter specialists from the Promise Neighborhoods Institute. (See appendix I for more detail on the scope and methodology.) We conducted this performance audit from February 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Promise Neighborhoods program is a place-based program that attempts to address the problems of children and youth in a designated geographic footprint. The program is designed to identify and address the needs of children in low-performing schools in areas of concentrated poverty by aligning a cradle-to-career continuum of services. The program moves beyond a focus on low-performing schools by recognizing the role an entire community plays in a child's education (see fig. 1). Place-based initiatives provide communities the flexibility to address their unique needs and interrelated problems by taking into account the unique circumstances, challenges, and resources in that particular geographic area.

The Promise program is one of several place-based initiatives at the federal level, but it is the only one focused on educational issues. In addition to Education, the Departments of Justice (Justice), Housing and Urban Development (HUD), and Health and Human Services (HHS) also have grant programs aimed at impoverished neighborhoods. Together, these four agencies and their grant programs form the core of the White House Neighborhood Revitalization Initiative.
This initiative coordinates neighborhood grant programs at the federal level across agencies, and identifies and shares best practices. Each agency's grant program focuses on its respective agency's core mission, but together, they focus on key components of neighborhood revitalization: education, housing, crime prevention, and healthcare.

Generally, the purpose of the Promise grants is to fund individual grantees' efforts to plan for and create a cradle-to-career pipeline of services based on the specific needs of their communities. The grants are focused on improving student outcomes on 15 performance indicators, chosen by Education. Along with the grantee, partner organizations, funded by federal, state, local, private, or nonprofit organizations, are expected to collaborate to provide matching funds and services.

A number of nonprofits and foundations have worked on initiatives to address complex problems in a similarly comprehensive way. Their approach brings together a group of stakeholders from different sectors to collaborate on a common agenda, align their efforts, and use common measures of success. This approach has been described as the collective impact model. The premise of the model is that better cross-sector alignment and collaboration creates change more effectively than isolated interventions by individual organizations. A number of organizations have used this approach to address issues such as childhood obesity and water pollution. Several other cradle-to-career place-based collective impact programs share key characteristics with the Promise program, including Cincinnati's Strive program and the Harlem Children's Zone. These collective impact initiatives use a centralized infrastructure and a structured process, including training, tools, and resources, intended to result in a common agenda, shared measurement, and mutually-reinforcing activities among all participants.
This centralized infrastructure requires staff to manage technology, communications support, data collection, reporting, and administrative details. The Promise grantees’ role is to create and provide this centralized infrastructure for their communities. The Promise program relies on a two-phase strategy for awarding grants, which includes both one-year planning grants and three- to five-year implementation grants. (See table 1.) Among other things, planning grantees are required to conduct a comprehensive needs assessment of children and youth in the neighborhood and develop a plan to deliver a continuum of solutions with the potential to achieve results. This effort involves building community support for and involvement in developing the plan. Planning grantees are also expected to establish effective partnerships with organizations for purposes such as providing solutions along the continuum and obtaining resources to sustain and scale up the activities that work. Finally, planning grantees are required to plan to build, adapt, or expand a longitudinal data system to provide information and use data for learning, continuous improvement, and accountability. The implementation grant provides funds to develop the administrative capacity to implement the planned continuum of services. Education expects implementation grantees to build and strengthen the partnerships they developed to provide and sustain services and to continue to build their longitudinal data systems. Education awarded most of the 2010-2012 grants to non-profit organizations (38 of 48), eight to institutions of higher education, and two to tribal organizations. Almost all (10 of 12) implementation grantees received planning grants, while two did not. (See fig. 2 for locations of grantees.) (See appendix II for a list of grantees and year of grant award.) 
The planning and implementation grant activities that Education developed for the Promise program generally align with Education’s goal of significantly improving the educational and developmental outcomes of children and youth in the nation’s most distressed communities. According to Education officials, the planning grant award process enabled them to identify community-based organizations in distressed neighborhoods with the potential to effectively coordinate the continuum of services for students living in the neighborhood. The eligibility requirements, which included matching funds or in-kind donations and an established relationship with the community to be served, helped to ensure that grantees had financial and organizational capacity and were representative of the area to be served. Education developed criteria to evaluate applications and select grantees based on the grantees’ ability to describe the need for the project; the quality of the project design, including the ability to leverage existing resources; the quality of the project services; and the quality of the management plan. Education’s Promise planning grants were intended to enhance the capacity of identified organizations to create the cradle-to-career continuum. The activities required of planning grantees enable grantees and their partners to gain a depth of knowledge about their communities and the communities’ needs, which can increase their capacity to focus on improving educational and developmental outcomes for children and youth throughout their neighborhood. Through a separate competition, Education identified organizations that application reviewers determined were most ready to implement their plans. 
While acknowledging that the implementation grantees are best positioned to determine the allocation of grant funds, Education expects that grant funds will be used to develop the administrative capacity to implement the planned continuum and that the majority of resources to provide services to students and families will come from other public and private funding sources rather than from the grant itself. This expectation gives the Promise strategies a chance to extend beyond the 5-year life of the grant. Further, the requirement that grantees build a longitudinal data set allows Promise grantees and their partners to review and analyze robust data in real time to make informed decisions about whether to adjust their strategies. The data can also help the grantees and Education learn about the impact of the program. Education identified 10 desired results from implementation of the program, which cover the cradle-to-career age span that Promise Neighborhoods are expected to address. A technical assistance provider stated that the list of desired results helps grantees focus on improving educational and developmental outcomes across the entire continuum. (See table 2.) (The indicators that measure progress toward achieving results are listed in appendix III.)

Education's grantee selection process was generally clear and transparent. However, Education did not communicate clearly to planning grantees about the probability of receiving an implementation grant and its expectations for grantees to continue their efforts without implementation funding. This lack of clarity created challenges for some grantees. Education outlined its selection criteria and how grant applications would be scored in its grant announcements and selected peer reviewers from outside the organization. According to Education officials, the peer reviewers had expertise in various related fields, including community development and all levels of education.
Education provided additional training on the application review process. For the planning grant selection, Education divided about 100 peer reviewers into panels of three to review packages of about 10 applications. Afterward, peer reviewers conferred about scores in a conference call. For the first implementation grant selection, Education had a two-tiered peer review process. During the first tier, peer reviewers were divided into panels of three to review approximately seven applications. During the second tier review of the 16 highest scoring applications, panels of reviewers were adjusted so that different reviewers read and scored different applications. For the second implementation grant selection, there was only one round of reviews. Reviewers were asked to review the applications and submit comments before meeting on-site to discuss applications. Education posted the results online, including peer reviewer comments for grantees and a list of applicants with scores above 80 out of 100 points.

In our web-based survey, grantees had mixed views on the clarity of application requirements and the helpfulness of peer reviewer comments. Specifically, 13 of 18 planning grantees who applied unsuccessfully for implementation grants and responded to the relevant survey question said the application requirements were very clear or extremely clear, while 8 of 19 grantees that responded said the same about peer reviewer scores and comments (see fig. 3). The unsuccessful applicants gave somewhat lower marks to the helpfulness of peer reviewer comments in improving their future applications and strengthening their current strategies (see fig. 4). Some of the 11 planning and implementation grantees that we interviewed raised concerns about specific application guidelines, such as how the term "neighborhood" is defined and the length of the application.
Specifically, two rural grantees said that the grant application and materials had a few areas that seemed to be more geared to urban or suburban grantees. For example, the term "neighborhood" was somewhat difficult for them to interpret in a rural context. In fact, two rural grantees included multiple towns or counties in their neighborhood footprints. Additionally, two grantees we spoke with had concerns about the implementation grant application's 50-page recommended maximum for the project narrative. Both organizations limited their narratives to 50 pages, but said they later learned that most of the successful grant recipients had exceeded this limit, often by a large amount.

The timing of the grant cycles created either an overlap or a long gap between the two grants. Grantees who applied for the implementation grant in the first cycle after receiving a planning grant had an overlap between executing the first grant and applying for the second grant. According to Education officials, these grantees were unable to fully apply the knowledge gained in the planning year to develop their implementation applications. For example, one grantee said having to apply for the implementation grant during the planning year made it difficult to create opportunities for community input into the planning process. On the other hand, one of the four grantees that received an implementation grant 2 years after receiving a planning grant faced challenges sustaining the momentum of its efforts without additional funding. Another grantee in the same situation was able to sustain momentum with a separate grant from a private foundation. Education officials said they became aware of the problems with the timing of the implementation applications a few months into the first planning grant year. However, they said they did not have much flexibility in timing the grant cycles.
For example, they said that they needed to allow time for public comment on the grant notification in the Federal Register. In addition, they said that agency budget decisions were delayed that year because the Department was operating under a continuing resolution for over 6 months in fiscal year 2011, the first year implementation grants were awarded.

Some grantees also said there was a disconnect between the planning and the implementation grant application processes. Specifically, two officials from the six implementation grantees we visited told us that a high-quality planning year was not nearly as important for obtaining an implementation grant as having someone who could write a high-quality federal grant application. For example, one grantee noted that writing a good implementation grant application was not heavily dependent on information gleaned from the planning process. Another grantee said that the implementation grant application was written by a completely different person who was not involved in planning grant activities.

Some grantees who received only planning grants reported in our survey and in interviews that they experienced challenges continuing their work without implementation funds. In addition, two of the five planning grantees we interviewed had concerns with Education's strategy of awarding few implementation grants compared with the number of planning grants. Education informed grantees there was a possibility they would not receive an implementation grant following the planning grant, but it provided no information about the likelihood that this would occur. We found indications that grantees did not fully appreciate that receiving a planning grant would not necessarily result in receiving an implementation grant. Three of the five planning grantees we interviewed stated that they did not have contingency plans for continuing their Promise Neighborhood efforts in the event that they did not receive implementation funding.
The lack of contingency planning raises questions about the grantees' understanding of the probability of receiving an implementation grant. Internal control standards state that management should ensure that effective external communications occur with groups that can have a serious impact on programs, projects, operations, and other activities, including budgeting and financing. To date, Education has awarded 46 planning grants (21, 15, and 10 in 2010, 2011, and 2012, respectively) and 12 implementation grants. Even though all but two implementation grants were awarded to planning grantees, fewer than one-quarter of planning grantees received implementation funding. (See table 3.)

Education officials provided several reasons for separating the planning and implementation grants and for not awarding implementation grants to all planning grantees who applied. Officials said that when they awarded the first planning grants, they were not sure which neighborhoods had potential grantees with the capacity to implement a Promise plan. In their view, the planning grants allowed them to invest in the capacity of communities to take on this work, while the implementation grants were awarded only to those that demonstrated they were ready for implementation. Education officials said it was important that grantees demonstrate they have an implementation plan in place before receiving such a large sum of money. In addition, after the first round of implementation grants was awarded, they noted that some applicants did not receive implementation grants because they were not yet competitive, in part because they had applied for the implementation grants before their planning efforts were complete. Finally, in commenting on a draft of this report, Education officials said that in several years, Congress appropriated less funding than was requested, which, they said, affected the number of implementation grants Education awarded.
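The award rates cited above follow directly from the counts reported in this section; a minimal sketch of the arithmetic (using only figures stated in the report):

```python
# Counts reported above: 46 planning grants (21 + 15 + 10 across
# 2010-2012) and 12 implementation grants, all but 2 of which went
# to organizations that already held planning grants.
planning_grants = 21 + 15 + 10
implementation_grants = 12
implementation_to_planning_grantees = implementation_grants - 2

# Share of planning grantees that went on to receive implementation funding.
share = implementation_to_planning_grantees / planning_grants

print(planning_grants)        # total planning grants awarded
print(f"{share:.1%}")         # share of planning grantees funded
assert share < 0.25           # consistent with "fewer than one-quarter"
```

At roughly 22 percent (10 of 46), the funded share is indeed below the one-quarter threshold the report describes.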
In 2010, both Education's Federal Register Notice Inviting Applications for planning grants and a related frequently asked questions document informed organizations receiving planning grants that they should not necessarily plan on automatically receiving implementation grants. The frequently asked questions guidance noted that the two types of grants could stand alone. For example, an applicant could receive just a planning grant, consecutive planning and implementation grants, or, if the applicant was further along in the planning process, just an implementation grant. Education officials told us that they viewed the planning grant activities as useful in themselves. For example, they told us that the planning process offers rich data and begins the process of bringing together partners and breaking down silos. They expected that planning grantees that applied for but did not receive implementation funding could continue their efforts without implementation grant funding, using their partners' pledged matching funds to implement their plans on a smaller scale. They noted that the requirement to develop memoranda of understanding with partners should have signaled that the obligations of the partner organizations were not to be contingent upon receipt of an implementation grant. However, Education did not require grantees to have matching funds in hand before submitting their applications.

Especially in light of the difficult fiscal climate that federal agencies will likely continue to face, we believe that it is important for Education to clearly communicate its expectations for planning and implementation grants to grantees. Clear communication and expectations can also help promote more realistic expectations among grantees about future funding opportunities given the fiscal realities of the Promise program over the past 5 years.
Grantees who had not received implementation grants were trying to continue their efforts and most reported significant challenges in sustaining momentum. According to our survey, since the end of the planning grant, most planning grantees who did not receive an implementation grant (17 out of 29 that answered the related question) found it very or extremely challenging to maintain funding, 12 out of 29 planning grantees felt that maintaining key leadership positions was very or extremely challenging, and 13 out of 29 planning grantees found that hiring staff was very or extremely challenging. Four of the five planning grantees we interviewed who had not received implementation grants told us that they need to determine how to implement scaled-down versions of programs and services identified in their implementation grant applications. They described challenges continuing their work without implementation funding. For example, three grantees noted that partners had pledged funding as a match for federal dollars in their implementation grant proposal. Without the leverage of implementation grant funds, it was difficult to maintain the proposed funding streams. All of the five grantees we interviewed that had received only planning grants said the planning process was very helpful in building connections and trust and deepening communication among partners, and between partners and the community. Four grantees were concerned, however, that the trust and momentum they had built might dissipate if they were not able to carry out their plans without an implementation grant. 
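Because not every grantee answered every survey question, each count above carries its own denominator. A small sketch converting the reported counts into shares (counts taken from the survey results above; the output formatting is illustrative):

```python
# Survey counts reported above: of the 29 planning grantees answering
# each question, the number finding the task "very" or "extremely"
# challenging after their planning grant ended.
reported = {
    "maintaining funding": (17, 29),
    "maintaining key leadership positions": (12, 29),
    "hiring staff": (13, 29),
}

for task, (count, respondents) in reported.items():
    share = count / respondents
    majority = "a majority" if share > 0.5 else "a minority"
    print(f"{task}: {count} of {respondents} ({share:.0%}, {majority})")
```

Only the funding question crosses the 50 percent line (17 of 29, about 59 percent), which is why the report characterizes maintaining funding, but not the staffing items, as a challenge for most grantees.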
In an effort to target its resources and align the Promise program goals with those of other place-based initiatives, the Promise program coordinates closely with a limited number of federal programs within Education and with other federal programs as part of the White House Neighborhood Revitalization Initiative (NRI). The NRI is an interagency coordinating body that aligns place-based programs run by HUD, HHS, Justice, and the Department of the Treasury (Treasury) (see fig. 5). Coordination through NRI is more structured than internal coordination within Education, which, according to Promise program officials, occurs as needed. Liaisons from each grant program meet at biweekly and monthly NRI meetings. They have formed a program integration workgroup to coordinate program development, monitoring, and technical assistance for the grant programs included. For example, they conducted a joint monitoring trip to a neighborhood in San Antonio, Texas, that has Promise, HUD's Choice Neighborhood, and Justice's Byrne Criminal Justice Innovation grants.

In coordinating within Education and with NRI, Education's efforts are focused on ensuring that grants are mutually reinforcing. These coordination activities include aligning goals, developing common performance measures where there are common purposes, and sharing technical assistance resources in areas where programs address similar issues or fund similar activities. (See table 4.)

The Promise program has also participated in another place-based program led out of the White House Domestic Policy Council: the Strong Cities, Strong Communities initiative. This program sends teams of federal officials to work with distressed cities, providing them expertise to more efficiently and effectively use the federal funds they already receive. Education's Promise program participates in initial on-site assessments of communities.
Education staff assisted two of the participating communities by providing education expertise at their request.

Promise Zones had to meet a number of requirements, including meeting certain poverty thresholds and having certain population levels. The Promise Zones initiative is led by the NRI agencies and five other agencies in partnership with state and local governments, businesses, and non-profit organizations. Only areas that already had certain NRI grants or a similar rural or tribal grant were eligible to apply in the first round. As of January 2014, three Promise Neighborhoods implementation sites in San Antonio, Los Angeles, and Southeastern Kentucky were located in designated Promise Zones, which provide additional opportunity for coordination at the federal and local level.

The Promise Neighborhoods program does, on occasion, coordinate with other individual federal agencies and programs outside of the NRI, but officials stated that the program is focused on deepening and broadening the communication it has with the five named NRI programs and Promise Zones. Promise Neighborhoods officials explained that they had concerns about spreading their coordination efforts too thinly given the large number of programs grantees may include in their strategies. In addition to Promise grants from Education, individual Promise Neighborhoods have access to a broad range of federal programs from other agencies, including many programs that are not part of NRI. However, Education has not developed an inventory of federal programs that could contribute to Promise program goals that it could share with planning and implementation grantees and use to make its own decisions about coordination across agencies. In recent work examining approaches used by interagency groups that successfully collaborated, we found that an inventory of resources related to reaching interagency goals can be used to promote an understanding of related governmentwide programs.
Such inventories are useful in making decisions about coordinating related programs across agency lines and between levels of government, according to officials. We have also found that creating a comprehensive list of programs is a first step in identifying potential fragmentation, overlap, or duplication among federal programs or activities.

As shown in table 5, the 12 implementation grantees we surveyed stated that they included a variety of federal resources in their Promise Neighborhoods strategies. AmeriCorps was included in 9 of 11 implementation grantees' strategies, followed by Head Start (8 of 12) and Education's School Improvement Grants (6 of 11). None of these are part of NRI. Few grantees said that NRI programs were part of their Promise strategies. For example, four grantees said that a Choice Neighborhood grant was part of their Promise strategy, and three grantees stated that Justice's Byrne program was part of their strategy. Education officials attributed the small number of grantees that use HUD's Choice program to the fact that few grantees have distressed public housing within their footprint that is eligible for this funding.

Although Promise grantees conduct their own inventories of the existing federal and other resources in their neighborhoods in order to develop their strategies, two grantees we spoke with were unaware of some of the other federal programs that could contribute toward their strategies. For example, one implementation grantee that had concerns about school safety was unaware of Justice's Byrne Criminal Justice Innovation grant program. Another planning grantee who completed our survey commented that a list of related federal programs like the one in our survey would be especially useful to grantees who did not receive implementation grants.
Education officials with the Promise program told us that sometimes grantees are unaware that the community is benefiting from certain federal programs because programs are renamed as they filter down through the state or local levels. Education officials said they emphasize to grantees the importance of reaching out to key partners to ensure they are aware of other federally funded programs in the neighborhood because their partners may be more knowledgeable about other sources of federal funding. While encouraging grantees to reach out to key partners is helpful, Education, through its coordination with other federal agencies, would likely have more knowledge about existing federal resources. Without a federal-level inventory, Education is not well-positioned to support grantee efforts to identify other federal programs that could contribute to Promise program goals. Further, Education lacks complete information to inform decisions about future federal coordination efforts and identify potential fragmentation, overlap, and duplication. While Education is collecting a large amount of data from Promise grantees that was intended, in part, to be used to evaluate the program, the Education offices responsible for program evaluation—the Institute of Education Sciences (IES) and the Office of Planning, Evaluation, and Policy Development (OPEPD)—have not yet determined whether or how they will evaluate the program. One of Education's primary goals for the Promise program, as described in the Federal Register, is to learn about the overall impact of the program through a rigorous program evaluation. Applicants are required to describe their commitment to work with a national evaluator for Promise Neighborhoods to ensure that data collection and program design are consistent with plans to conduct a rigorous national evaluation of the program and the specific solutions and strategies pursued by individual grantees. 
We have found that federal program evaluation studies provide external accountability for the use of public resources. Evaluation can help to determine the "value added" of the expenditure of federal resources or to learn how to improve performance—or both. Evaluation can play a key role in strategic planning and in program management, informing both program design and execution. Education requires implementation grantees to report annually on their performance using 15 indicators. The indicators include graduation rates, attendance, academic proficiency, student mobility, physical activity, and perceptions of safety. (See table 11 in appendix III.) Education contracted with the Urban Institute to provide guidance on how to collect data on the indicators, including data sources and survey techniques. According to Urban Institute officials, they used existing, validated measures whenever possible to ensure comparability across programs. Seven of 12 implementation grantees we surveyed said the guidance documents were extremely or very helpful, while four found them moderately helpful and one somewhat helpful. The Urban Institute has analyzed the data on the indicators for the first implementation year (the baseline), but Education has not decided whether it will make the first year's data public because it was not collected in a consistent manner and not all grantees were able to collect all of the necessary data. According to Promise program officials, there were inconsistencies in data collection because guidance was not available until February 2013, 13 months after 2011 implementation grants were awarded and over 1 month after 2012 implementation grants were awarded. Promise officials stated that they will use the performance data to target their technical assistance. They are still working with grantees to develop meaningful targets for the second implementation year. 
Urban Institute officials noted that these 15 indicators help grantees focus their efforts on the outcomes they are trying to achieve. In addition, Promise grantees are required to develop a longitudinal data system to collect information on the individuals served, services provided in the cradle-to-career continuum, and the related outcomes. Grantees are expected to use the longitudinal data to evaluate their programs on an ongoing basis and make adjustments to their strategies and services, as discussed later in this report. Grantees are also required to provide the longitudinal data to Education, which Education officials said they may use to create a restricted-use data set. However, Education currently does not have a plan for analyzing the data. In commenting on a draft of this report, Education stated it must first conduct a systematic examination of the reliability and validity of the data to determine whether it can be used for a descriptive study and a restricted-use data set. Education further stated that the restricted-use data set would only be made available to external researchers after Education determines that the data quality is adequate and appropriate for research; analyzes the data, taking into account privacy concerns; and determines whether to release its own report. In addition, officials from IES and OPEPD cited limitations and challenges to using the longitudinal data for program evaluation. An official from IES, the entity responsible for all impact evaluations conducted by Education, told us that it is not feasible to conduct an impact evaluation of individual program pieces or an overall evaluation of the Promise approach. The official offered three options for evaluation. IES' preferred option is to conduct a rigorous impact evaluation with a control group obtained through randomized assignment to the program. However, Promise Neighborhoods are not designed to create such a control group. 
Another option would be for IES to use students or families who were not chosen to participate in an oversubscribed program as a control group, but an informal poll that IES took at a Promise Neighborhoods conference suggested that there was not a sufficient number of oversubscribed programs. A third option was to develop a comparison group of neighborhoods that did not receive a Promise Neighborhood grant. However, IES officials question whether such an approach would enable them to match neighborhoods that were comparable to Promise neighborhoods at the beginning of the grant period. Finally, IES noted that collecting additional data for a control group could be expensive. Education's OPEPD is responsible for conducting other types of program evaluations. According to Education officials, it could conduct a more limited evaluation focused on outcomes without demonstrating that they are a direct result of the Promise program, but they have no specific plans to do so. An OPEPD official stated OPEPD is reluctant to commit to a plan because it has not yet seen the data and does not know how reliable or complete it will be. In addition, the official said that OPEPD is unsure about funding and that any comprehensive evaluations are expensive to carry out. By creating a restricted-use data set, OPEPD hopes that other researchers may have the funding to use the data to reach some conclusions about the program. The OPEPD official further explained that no one has ever evaluated a community-based approach like this one and that they hope researchers may have some ideas about how to do so. Researchers at the Urban Institute and within the Promise grantee community have proposed other options for evaluating the program. A researcher at the Urban Institute noted that random assignment is not the right approach for evaluating place-based programs. 
Instead, the researcher recommends a variety of other options for evaluating such programs, including approaches that estimate a single site's effect on outcomes and then aggregate those outcomes across sites. This differs from the traditional program evaluation approach, which IES has considered, of isolating the effects of an intervention so that its effects can be measured separately from other interventions. While Education recognizes the importance of evaluating the Promise program, it lacks a plan to do so. If an evaluation is not conducted, Education will have limited information about the Promise program's success or the viability of the program's collaborative approach. The Promise program generally requires grantees to use collaborative approaches. We found that grantees are following approaches consistent with those we have recognized as enhancing and sustaining collaboration with partners. The approaches we have previously identified include:

Establishing common outcomes: Establishing common outcomes helps collaborating agencies develop a clear and compelling rationale to work together.

Addressing needs by leveraging resources: Leveraging the various human, information technology, physical, and financial resources available from agencies in a collaborative group allows the group to obtain benefits that would not be available if they worked separately.

Tracking performance and maintaining accountability: Tracking performance and other mechanisms for maintaining accountability are consistent with our prior work, which has shown that performance information can be used to improve results by setting priorities and allocating resources to take corrective actions to solve program problems.

The approaches are discussed below and in tables 6 through 8. Grantees and partners provided examples of how they have collaborated through the Promise grant to deliver services and supports that are intended to improve educational and developmental outcomes. 
Grantees and their partners focused on delivering services at various steps along the cradle-to-career pipeline, including:

Early learning supports: programs or services designed to improve outcomes and ensure that young children enter kindergarten and progress through early elementary school grades demonstrating age-appropriate functioning.

K-12 supports: programs, including policies and personnel, linked to improving educational outcomes for children in pre-school through 12th grade. These include developing effective teachers and principals, facilitating the use of data on student achievement and student growth to inform decision-making, supporting a well-rounded curriculum, and creating multiple pathways for students to earn high school diplomas.

College and career supports: programs preparing students for college and career success. These include partnering with higher education institutions to help instill a college-going culture in the neighborhood, providing dual-enrollment opportunities for students to gain college credit while in high school, and providing access to career and technical education programs.

Family and community supports: these include child and youth physical, mental, behavioral, and emotional health programs; safety programs, such as those to prevent or reduce gang activity; and programs that expand access to quality affordable housing.

For examples of the services delivered and outcomes reported by grantees for each part of the cradle-to-career pipeline, see table 9 below. The Promise program has energized the 48 planning and implementation grantees and their partners to tackle the complex challenges facing impoverished neighborhoods together. While grantees said they will continue their efforts to build their Promise Neighborhoods, planning grantees faced challenges in sustaining their work over the long term without implementation grants. 
Planning grantees, especially those concerned about building trust with their communities and partners, may have been better served if Education had provided a more transparent, realistic picture of the fiscal reality of the Promise program and its potential impact on implementation grant funding. Lack of clear communication about the expectations Education had for planning grantees who did not receive implementation funding made it difficult for these grantees to develop specific plans to continue their efforts without future Promise funds. However, the reported small, yet tangible benefits that some communities pursued during the planning year—such as a safe place for children to play—increased momentum and built trust with community members. Encouraging such "early wins" could help all grantees and their partners build upon and improve their efforts, especially since implementation funding has proven scarce. Additionally, much of the information grantees use about which existing federal, state, and local programs and resources to incorporate into their strategies is gleaned through their needs assessment at the local level. Education has not provided grantees with comprehensive information about other federal resources that may be available to use in their Promise strategies. Education is best positioned to develop and share such an inventory of federal programs that relate to the goals of the Promise program. Without such an inventory, Education may be missing opportunities to better support grantees, find other federal programs for future coordination efforts, and identify potential fragmentation, overlap, and duplication at the federal level. One of the Promise program's primary goals is to identify the overall impact of its approach and the relationship between particular strategies and student outcomes. Grantees are investing significant time and resources to collect data to assess the program, but Education lacks a clear plan for using it. 
Without evaluating the program, it will be difficult for Education to determine whether it is successfully addressing the complex problem of poor student outcomes in impoverished neighborhoods. Finally, the Promise program is one of several place-based and collective impact programs being implemented across many federal agencies. Given the number of these initiatives, not evaluating the program limits Education and other agencies from learning about the extent to which the model is effective and should be replicated. In order to improve grantees' planning and implementation efforts, increase the effectiveness of grantee efforts to integrate and manage resources, and learn more about the program's impact, we recommend that the Secretary of Education take the following three actions:

1. Clarify program guidance about planning and implementation grants to provide reasonable assurance that planning grantees are better prepared to continue their efforts in the absence of implementation funding. Additional guidance could include encouraging grantees to set aside a small amount of the grant to identify and deliver early, tangible benefits to their neighborhoods.

2. Develop and disseminate to grantees on an ongoing basis an inventory of federal programs and resources that can contribute to the Promise Neighborhoods program's goals, to better support coordination across agency lines.

3. Develop a plan to use the data collected from grantees to conduct a national evaluation of the program.

We provided a draft of this report to the Department of Education for review and comment. Education's comments are reproduced in appendix IV and are summarized below. Education also provided technical comments, which we incorporated into the final report as appropriate. Education outlined the steps it would take to implement our three recommendations, and provided its perspective on communicating expectations to grantees regarding future funding. 
Education did not explicitly agree or disagree with our findings. Regarding our finding that Education did not communicate clearly to planning grantees about its expectations for the grants, Education stated that in any given year it does not know and therefore cannot communicate the amount of funding available or the number of grant awards anticipated in the following year. We agree, and have clarified our finding in the report accordingly. Education stated that an early assessment of planning grantees’ likelihood of receiving implementation funding would have been premature. Education noted that although Congress has funded the Promise program for the past 5 years, in 4 of those 5 years it appropriated far less than the President requested, and for the last 3 years the program has essentially been level funded. Education further stated that this underscores the limited control that the program had over the number of implementation grants made. We recognize that federal agencies have faced a difficult fiscal climate over the past few years, particularly for discretionary programs. For that reason—and especially given the level at which the Promise program has been funded for the past 3 years—we believe it is even more important that Education be clear and transparent with planning grantees about historical fiscal realities of the Promise program and the implications this may have on future implementation grants. We also believe this situation highlights the need for planning grantees to have contingency plans, especially given Education’s expectations that grantees continue their efforts even in the absence of implementation funding. We further believe that this also underscores the importance of “early wins” to demonstrate what can be achieved when grantees and their partners work collaboratively, as such demonstrations can encourage them to continue their efforts even without implementation funding. 
In discussing its perspective on communicating expectations to grantees regarding future funding, Education stated that its Notifications Inviting Applications indicated that future funding was contingent on the availability of funds and that the program’s frequently asked questions document noted that implementation funding was not guaranteed and that planning grantees would have to compete for implementation grants. We believe that our report adequately reflects these communication efforts. However, as we reported, Education did not communicate to planning grantees that it expected them to continue their efforts even in the absence of implementation funding. Nor did Education communicate to implementation grant applicants that it expected them to be able to use their partners’ pledged matching funds even if they did not receive implementation grants. This lack of communication was evidenced by planning grantees’ lack of contingency plans and challenges they faced accessing the pledged matching funds, according to the grantees we interviewed. In response to our first recommendation, Education stated that it would continue to communicate to planning grant applicants that implementation funding is contingent on the availability of funds, and that it would provide more targeted technical assistance to planning grant recipients regarding strategies for continuing grantees’ efforts absent implementation funding. Education also stated that it would clarify to grantees that planning grant funds could be used to achieve early, tangible benefits. Regarding our second recommendation, Education stated that it would work with its technical assistance providers to create a mechanism to distribute a comprehensive list of external funding opportunities, programs and resources on a regular basis to better support the grantees’ implementation efforts. 
With regard to our final recommendation, Education stated that it will consider options for how and whether it can use the data collected from grantees to conduct a national evaluation. Education stated that as a first step it will conduct a systematic evaluation of the reliability and validity of the data, given issues that we and Education noted about inconsistencies in data collection and privacy concerns. In addition, Education stated that to date, it has not received sufficient funding to support a national evaluation. We agree that conducting evaluations can be costly. However, given that one of Education’s primary goals is to learn about the overall impact of the program through a rigorous program evaluation, we continue to believe that absent an evaluation, it will be difficult for Education to determine whether it is successfully addressing the complex problem of poor student outcomes in impoverished neighborhoods—one of its stated goals. Further, developing an evaluation plan would provide critical information about the resources required to conduct an evaluation, and could better inform future funding requests for such an evaluation. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Education and other interested congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at 617-788-0580 or NowickiJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are in appendix V. 
To better understand grantees' experiences with the Promise Neighborhoods program, we conducted a web-based survey of all 48 planning and implementation grantees. The survey was conducted from August 23, 2013 through November 7, 2013. We received completed surveys from all 48 grantees for a 100 percent response rate. The survey included questions about the clarity and helpfulness of the application and peer review process; challenges sustaining efforts after the end of the planning grant; coordination of federal resources; collaboration with local organizations and associated challenges; the extent to which local coordination reduced duplication, overlap, and fragmentation, if at all; the mechanisms organizations use to track the results of their efforts; the results of the grants; and the helpfulness of Education's guidance and resources for the program. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and using a web-based administration system. Specifically, during survey development, we pretested draft instruments with five grantees that received planning and/or implementation grants. In the pretests, we were generally interested in the clarity, precision, and objectivity of the questions, as well as the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. We revised the final survey based on pretest results. We took another step to minimize nonsampling errors by using a web-based survey. 
This allowed respondents to enter their responses directly into an electronic instrument and created a record for each respondent in a data file—eliminating the need for manual data entry and its associated errors. To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work. Because not all respondents answered every question, we reported the number of grantees responding to particular questions throughout the report. In addition, we conducted site visits to 11 Promise grantees. We selected sites based on several factors, such as the type of grant awarded, the location of the grantees, and whether the Promise Neighborhood was urban or rural. The site visits provided opportunities to collect more in-depth information on the program and highlighted different types of grantees and approaches. We visited six implementation grantees in Boston, Massachusetts; Berea, Kentucky; Chula Vista, California; Indianola, Mississippi; Los Angeles, California; and Washington, DC. We visited five planning grantees in Campo, California; Lawrence, Massachusetts; Los Angeles, California; Nashville, Tennessee; and Worcester, Massachusetts. These include one tribal and two rural grantees. We also interviewed Education officials and technical assistance providers, as well as other experts who have worked with Promise grant applicants, such as the Promise Neighborhoods Institute. To determine how well the structure of Education's Promise Neighborhoods grant program aligns with program goals and how Education selected grantees, using Education's goals for the Promise program as criteria, we reviewed Education reports on place-based strategies; relevant Federal Register notices; and application guidance and training materials, including both the guidance available to applicants and to the peer reviewers regarding the technical evaluation/grant selection process. 
We reviewed agency information on applicants for implementation grants in the fiscal year 2011 and 2012 cycles, the only years in which Education awarded implementation grants. For both cycles, we analyzed application materials and technical evaluation documentation for a subset of implementation grant applicants—those that received planning grants in prior years. We compared the scores in each component of the application for both successful and unsuccessful applicants to identify criteria or factors that accounted for significant variation in total scores. We conducted a limited review of selected peer reviewer comments to gain more insight into the reasons for any differences. We interviewed Education officials about the process that the department used for the selection of both planning and implementation grantees. To determine how the Promise Neighborhoods program coordinated with other Education programs and with other federal agencies, including those involved in the White House Neighborhood Revitalization Initiative (NRI), we reviewed documentation of the NRI's efforts and interviewed agency officials participating in the NRI. We also interviewed cognizant officials at other agencies participating in the NRI. To assess Education's approach to evaluating the success of the grants, we reviewed grant monitoring reports, Education's performance measures, and related guidance for data collection for this program and interviewed agency officials responsible for evaluation, including technical assistance providers. To determine the extent to which Promise grants enabled collaboration at the local level, we used GAO's prior work on implementing interagency collaborative mechanisms as criteria. We compared the Promise grants' collaboration mechanisms to certain successful approaches used by select interagency groups and reviewed implementation grantees' application materials. 
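The component-score comparison described above can be illustrated with a small sketch. The applicant names, component names, and scores below are hypothetical, invented for illustration; they are not actual peer review data or the specific components Education used.

```python
# Hypothetical peer review scores by application component.
# Compare mean component scores of successful vs. unsuccessful applicants
# to see which components account for the gap in total scores.
applicants = [
    {"name": "A", "funded": True,  "scores": {"need": 18, "plan": 27, "capacity": 24}},
    {"name": "B", "funded": True,  "scores": {"need": 17, "plan": 28, "capacity": 22}},
    {"name": "C", "funded": False, "scores": {"need": 17, "plan": 20, "capacity": 23}},
    {"name": "D", "funded": False, "scores": {"need": 16, "plan": 19, "capacity": 21}},
]

def mean_component_scores(group):
    """Average score per application component for a group of applicants."""
    components = group[0]["scores"].keys()
    return {c: sum(a["scores"][c] for a in group) / len(group) for c in components}

funded = mean_component_scores([a for a in applicants if a["funded"]])
unfunded = mean_component_scores([a for a in applicants if not a["funded"]])

# Gap per component: a large gap suggests that component drove the
# difference in total scores between successful and unsuccessful applicants.
gaps = {c: funded[c] - unfunded[c] for c in funded}
ranked = sorted(gaps, key=gaps.get, reverse=True)
print(ranked)  # components ordered by funded/unfunded gap, largest first
```

In this invented example, the "plan" component shows the largest gap between funded and unfunded applicants, so a reviewer-comment follow-up would start there.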
Our 11 site visits provided additional insight into how selected grantees align services supported by multiple funding streams and delivered by multiple providers. Using survey responses from all planning grantees, we determined whether they have continued their efforts, whether they have implemented any of their strategies, and what, if any, interim results they have identified, regardless of whether they received implementation grants. Site visits provided illustrative examples of interim benefits and challenges. We conducted this performance audit from February 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

[Table: selected Promise Neighborhoods grantees and locations, including New York, NY (Harlem, Brooklyn, and Queens); Athens-Clarke County Family Connection Inc.; Clay, Jackson, and Owsley Counties, KY; Boys & Girls Club of the Northern Cheyenne Nation, Northern Cheyenne Reservation, MT; Community Day Care Center of Lawrence, Inc.; and United Way of Central Massachusetts, Inc.]

In addition to the contact named above, Elizabeth Sirois, Assistant Director; Jacques Arsenault; Aimee Elivert; and Lara Laufer made key contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Mallory Barg Bulman, Holly Dye, Alex Galuten, Jean McSween, Matthew Saradjian, and Sarah Veale.

Education's Promise Neighborhoods program is a competitive grant program with goals to improve educational and developmental outcomes for children in distressed neighborhoods. 
The grants fund community-based organizations' efforts to work with local partners to develop and evaluate a cradle-to-career continuum of services in a designated geographic footprint. As it is one of several federal programs using this model, GAO was asked to review the program. This report examines: (1) the extent to which Education's strategy for awarding grants aligns with program goals; (2) how Education aligns Promise Neighborhoods efforts with other related programs; (3) how Education evaluates grantees' efforts; and (4) the extent to which grants have enabled collaboration at the local level, and the results of such collaboration. GAO reviewed Federal Register notices, applications, and guidance; surveyed all 48 grantees on the application process, coordination of resources, collaboration, and early results; visited 11 grantees selected based on geography and grant type; and interviewed Education officials and technical assistance providers. The Department of Education (Education) used a two-phase strategy for awarding Promise Neighborhoods (Promise) grants, and aligned grant activities with program goals. Education awarded 1-year planning grants to organizations with the potential to effectively align services for students in their respective neighborhoods. Planning grants were generally intended to enhance the grantees' capacity to plan a continuum of services. Through a separate competition, Education awarded 5-year implementation grants to organizations that demonstrated they were most ready to implement their plans. However, Education did not communicate clearly to grantees about its expectations for the planning grants and the likelihood of receiving implementation grants. As a result, some grantees experienced challenges sustaining momentum in the absence or delay of implementation grant funding. 
The Promise program coordinates with related federal efforts primarily through a White House initiative that brings together neighborhood grant programs at five federal agencies. The Promise program's efforts are focused on ensuring that grants are mutually reinforcing by aligning goals, developing common performance measures, and sharing technical assistance resources. While Promise grantees incorporate a wide range of federal programs in their local strategies, Education coordinates with a more limited number of federal programs. Officials told us that they do this to avoid spreading program resources too thin. Further, Education did not develop an inventory of the federal programs that share Promise goals, a practice that could assist grantees; help officials make decisions about interagency coordination; and identify potential fragmentation, overlap, and duplication. Education requires Promise grantees to develop information systems and collect extensive data, but it has not developed plans to evaluate the program. Specifically, implementation grantees must collect data on individuals they serve, services they provide, and related outcomes and report annually on multiple indicators. However, Education stated it must conduct a systematic examination of the reliability and validity of the data to determine whether it will be able to use the data for an evaluation. Absent an evaluation, Education cannot determine the viability and effectiveness of the Promise program's approach. The Promise grant enabled grantees and their partners to collaborate in ways that align with leading practices GAO previously identified for enhancing collaboration among interagency groups, including establishing common outcomes, leveraging resources, and tracking performance. For example, Education required grantees to work with partners to develop common goals and a plan to use existing and new resources to meet identified needs in target areas. 
Grantees were also required to leverage resources by committing funding from multiple sources. Implementation grantees were required to collect and use data to track performance. Some planning grantees used a leading collaborative strategy not required by Education that produced early benefits. For example, several grantees and partners told us they completed easily achievable projects during the planning year to help build momentum and trust. Grantees told us that collaboration yielded benefits, including deeper relationships with partners, such as schools, as well as the ability to attract additional funding. However, grantees also said they faced some challenges collaborating with partners, particularly in overcoming privacy concerns related to data collection. GAO recommends that Education communicate grant expectations more clearly, identify federal resources that can contribute to the program's goals, and develop a strategy for evaluation. In commenting on a draft of this report, Education outlined the steps it will take to respond to recommendations. |
Decision support systems provide managers with information on business operations to assist decision-making. In the health care industry, these systems can provide managers and clinicians with data on patterns of patient care and patient health outcomes, which can then be used to analyze resource utilization and the cost of providing health care services. A number of vendors offer various types of decision support systems for the health care industry. Decision support systems can compute the costs of services provided to each patient by combining patient-based information on services provided during episodes of care with financial information on the costs and revenue associated with those services. For example, a private sector hospital performing cataract surgery collects information on the services provided to each patient, including the laboratory tests performed and the medications supplied, through its billing system. The hospital then collects revenue and cost information through its accounting systems, incorporating the collections from the insurance companies and applicable parties, such as Medicare, and expenditures for utilities and equipment. Using a decision support system to combine the clinical and financial information from the billing and accounting systems, the hospital can, for example, (1) calculate the specific cost of providing cataract surgery to a patient, (2) compare revenue received to costs incurred to determine profitability for this type of service, (3) compare costs incurred for different physicians and for surgery performed at different locations, (4) evaluate patient outcomes, and (5) perform analyses on ways to increase the quality of service, reduce costs, or increase profitability. Decision support systems can also support the comparison of patient care to predefined health care standards. 
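As a rough sketch of the calculation described above, a decision support system joins clinical service records (from the billing side) with unit costs and collections (from the accounting side) to derive per-patient cost and profitability. All record layouts and dollar figures below are hypothetical, invented for illustration; they are not drawn from any actual system:

```python
from collections import defaultdict

# Clinical events, as a billing/workload system might capture them
# (hypothetical records).
services = [
    {"patient": "P1", "service": "cataract surgery"},
    {"patient": "P1", "service": "lab test"},
    {"patient": "P2", "service": "lab test"},
]
# Unit costs and per-patient collections from the accounting systems
# (invented figures).
unit_costs = {"cataract surgery": 1800.0, "lab test": 45.0}
revenue = {"P1": 2100.0, "P2": 40.0}

# Combine clinical and financial data: cost of care per patient.
cost_per_patient = defaultdict(float)
for event in services:
    cost_per_patient[event["patient"]] += unit_costs[event["service"]]

# Compare revenue received to costs incurred to gauge profitability.
margin = {p: revenue[p] - cost for p, cost in cost_per_patient.items()}
print(margin)  # {'P1': 255.0, 'P2': -5.0}
```

The same joined data would support the other analyses the report lists, such as comparing costs across physicians or locations.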
In light of VHA’s lack of cost information on its hospitals and at the urging of your Committee, VHA conducted a study resulting in the acquisition of a decision support system. In September 1993, it awarded a contract to a commercial vendor to implement this system at 10 VA hospitals. VHA has since increased the total number of hospitals/sites currently implementing DSS to 38. As shown in figure 1, VA’s interest in acquiring DSS dates back to 1983. VA believes that DSS can help it effectively manage the cost and quality of health care provided to an estimated 2.5 million veterans annually. It also expects that DSS can help it remain a viable option in national health care delivery as the country moves towards a managed care environment focusing on cost-effectiveness. In implementing DSS, VA plans to use its existing information systems as the primary source of clinical and financial information. Although VA does not have a billing system analogous to those in the private sector, VA’s Decentralized Hospital Computer Program (DHCP) captures clinical workload information. VA also has accounting systems, the Personnel and Accounting Integrated Data, and Centralized Accounting for Local Management systems, which capture financial information on labor and supplies, respectively. The systems providing information to DSS, as shown in figure 2, are sometimes referred to as feeder systems. VA has also developed software to extract information from the feeder systems for input to DSS. Standard cost accounting information, such as allocations of indirect material and labor, is entered directly into DSS by hospital personnel. VA plans to implement DSS at 161 of its hospitals. This is a major undertaking for the vendor—the DSS project is the largest implementation of the vendor’s product to date. The vendor’s next largest implementation involved 20 private sector hospitals. 
As shown in figure 3, the implementation was initially planned over a 3-year period from January 1994 through December 1996. The implementation was recently slowed to allow VA to address critical implementation issues. As of June 1, 1995, VA had started implementing DSS at the 32 sites shown in figure 4. VA began implementing DSS at another 6 hospitals in July. VA estimates that the total cost of implementing DSS will be about $132 million. Also, according to VA officials, as of July 20, 1995, they had spent about $30 million on the DSS project. Operational responsibility for the DSS project lies with the DSS Program Office in Kansas City, Missouri, which reports to the VHA Chief Financial Officer in Washington, D.C. The program office is responsible for coordinating and directing the implementation of DSS at the hospitals. In June 1995, the program office was headed by an acting project director, who was assisted by an acting deputy director for operations and a deputy director for information resource management. Assisting the program office on technical and quality issues are the deputy directors for technical implementation, data systems development, administration and resource management, and quality management. Under VHA’s March 1995 restructuring plan, which is expected to begin implementation on October 1, the program office will report to the new position of VHA Chief Information Officer, instead of the Chief Financial Officer. The Chief Information Officer will report to the Under Secretary for Health. While it is unclear at this time what role the Chief Financial Officer will have over DSS in the future, we believe that the DSS project will benefit from having this individual serve in an advisory capacity to the Under Secretary regarding DSS. 
To determine the potential benefits to be gained from VA implementing DSS and whether VA was pursuing a coordinated business strategy, we discussed these issues with the Under Secretary for Health, the VHA Chief Financial Officer, the DSS Project Director, the Director of Medical Information Resources Management Office (MIRMO), and representatives of private sector hospitals who use the vendor’s software. We also reviewed relevant VHA organizational plans and related management documents. To determine whether VA was establishing an adequate information infrastructure for DSS, we interviewed key DSS program officials in Washington, D.C.; Kansas City, Missouri; the National DSS Training and Education Office in Cleveland, Ohio; and the Technical Office located in Bedford, Massachusetts. We reviewed DHCP documentation, DSS processing information, and extract software design information. We had extensive discussions with MIRMO staff at the Information System Center in Birmingham, Alabama, involved in developing DSS extract software. Additionally, we met with staff at the Austin Automation Center in Austin, Texas, involved in processing DSS and DHCP information. To determine whether VHA was implementing DSS in a manner likely to maximize success, we visited VHA Medical Centers implementing DSS in Brockton, Massachusetts; New York, New York; Oklahoma City, Oklahoma; and Temple, Texas. We met with members of the DSS implementation team at each location as well as with top management personnel. We also compared VA’s effort to implement DSS against the best practices of leading private and public organizations for strategic information management identified in our publication entitled, Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology, (GAO/AIMD-94-115, May 1994). We met with the vendor providing the DSS software and had discussions with other vendors who market similar software. 
We also had discussions with private sector health care providers who are using the vendor’s DSS software regarding their successes and problems in using DSS. We reviewed VA’s DSS implementation plans, the contract between VA and the vendor, and other DSS implementation project documents. In addition, we obtained oral comments on a draft of this report from the VHA Chief Financial Officer. His comments are summarized in the “Agency Comments and Our Evaluation” section of this report. We conducted our work between June 1994 and June 1995 in accordance with generally accepted government auditing standards. VA believes that DSS can provide it with an opportunity to gain control of its health care costs and increase the efficiency of health care delivery. With DSS, VA can calculate the cost of its health care services and use this information to assess its financial competitiveness in changing health care markets and improve its operations. For example, DSS can provide VA with a basis for maximizing third-party reimbursements through the Medical Care Cost Recovery (MCCR) program, improving the quality of health care delivery and allocating VHA resources on the basis of workload and local efficiencies. As we reported in December 1992, VA lacks information on the costs of providing health care services at each of its 172 hospitals. The availability of this information would be a major step toward financial accountability at VA. DSS is expected to provide hospital managers and health care providers with variance reports identifying areas for reducing costs and improving patient outcomes and clinical processes. Private sector hospitals already use decision support systems to achieve these objectives. For example, a private sector health care organization used information from its decision support system to reduce the costs associated with surgical supply packs. 
Staff there determined that the supply packs for a gall bladder procedure varied greatly in price, yet the higher cost packs did not improve patient outcomes. The organization was able to work with a vendor to reduce the price of the packs, saving $600,000 annually. According to representatives of another private sector health care organization, the vendor’s software enabled them to competitively price medical services and win contracts for these medical services. VA officials have also stated that DSS can help them collect more MCCR revenue by providing them with itemized cost information on which to base bills to third-party payers. An itemized bill would identify the costs of all medical services and supplies provided to the patient. Because VA currently lacks a cost accounting system, it is unable to prepare itemized bills. VA currently bills third-party payers on a flat-rate basis, regardless of the level of services provided or the cost of these services. For example, these payers are billed a flat rate of $1,350 per day for inpatient surgery, regardless of the type of surgery performed. As such, VA may not be billing third-party payers for all applicable costs associated with the patient. Aside from enhancing financial management, VA can use DSS to improve the quality of its health care services. For example, a private sector hospital used the vendor’s software to conduct a pilot study, comparing the treatment of heart failure patients with medical treatment standards defined by hospital experts and identified some treatment practices requiring modification by physicians. By adopting these treatment modifications, the hospital reduced its patient length of stay by an average of half a day and treatment costs by $250,000. According to a hospital official, mortality rates for these patients decreased by 2.6 percent, and readmissions decreased by 3.3 percent. 
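The gap between VA's flat-rate billing and the itemized billing that DSS cost data would enable can be shown with a small calculation. The $1,350-per-day flat rate for inpatient surgery is the figure cited above; the itemized line items are invented for the sketch:

```python
# Flat rate for inpatient surgery cited in the report.
FLAT_RATE_PER_DAY = 1350.0

stay_days = 3
# Hypothetical itemized charges for one surgical stay; without a cost
# accounting system, VA cannot produce line items like these.
itemized_charges = {
    "operating room": 2500.0,
    "anesthesia": 900.0,
    "ward care (3 days)": 1800.0,
    "pharmacy": 350.0,
}

flat_rate_bill = FLAT_RATE_PER_DAY * stay_days  # what VA bills today
itemized_bill = sum(itemized_charges.values())  # what itemized costs total
unbilled = itemized_bill - flat_rate_bill       # cost the flat rate misses
print(flat_rate_bill, itemized_bill, unbilled)  # 4050.0 5550.0 1500.0
```

Under these invented numbers the flat rate recovers $4,050 of a $5,550 stay, illustrating how VA "may not be billing third-party payers for all applicable costs."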
When fully implemented, DSS should be able to provide valuable information on the costs of medical services and patterns of patient care and patient outcomes at the regional and national levels of VHA. DSS also has the capability to “roll-up” information to the corporate level. For example, a private sector organization with multiple hospitals used the vendor’s software to analyze the cost and profitability of its cardiology services at different locations. The decision support software enabled the manager to determine that one of its hospitals was purchasing expensive catheterization lab services, which reduced the profitability at that hospital. Similarly, VHA can use DSS to assess the relative performance of specific hospitals, both within and across its networks, and make necessary adjustments, such as reallocation of personnel resources, based on workload and local efficiencies. VHA can also use DSS, which allows it to model the patient case mix, volume, resource cost, and reimbursement changes, to assist in preparing its budget request. VHA has not developed a business strategy for effectively utilizing DSS as a management tool. Top managers have not defined the business goals to be achieved and measured using DSS, nor have they historically assumed the leadership necessary to ensure that DSS is successfully implemented. Lack of goals and leadership has put the DSS project at risk. Correcting these problems will not be easy because VA’s culture has not traditionally focused on the cost-effectiveness of hospital operations. The Under Secretary for Health, however, has recently demonstrated a strong commitment to DSS, and has taken initial steps to develop business goals and address cultural issues. Business goals are the foundation from which organizations develop strategic plans and strategic information management plans. 
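The corporate "roll-up" described above amounts to aggregating cost and revenue by hospital and service line, then flagging lines that operate at a loss, as in the cardiology example. A minimal sketch with invented hospitals and figures:

```python
from collections import defaultdict

# Hypothetical per-hospital results: (hospital, service line, cost, revenue).
records = [
    ("Hospital A", "cardiology", 410_000.0, 500_000.0),
    ("Hospital B", "cardiology", 520_000.0, 505_000.0),  # costly cath-lab purchases
    ("Hospital A", "radiology",  120_000.0, 150_000.0),
]

# Roll up cost and revenue to the (hospital, service line) level.
rollup = defaultdict(lambda: {"cost": 0.0, "revenue": 0.0})
for hospital, line, cost, revenue in records:
    rollup[(hospital, line)]["cost"] += cost
    rollup[(hospital, line)]["revenue"] += revenue

# Flag service lines losing money at any hospital.
losses = {key for key, v in rollup.items() if v["revenue"] < v["cost"]}
print(losses)  # {('Hospital B', 'cardiology')}
```

A further roll-up over hospitals (summing by service line alone) would give the corporate-level view for budget and resource-allocation decisions.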
These goals and associated plans guide the organization, determine how and where resources will be used, and provide a framework for using management tools such as DSS. Additionally, performance measures based on clearly defined goals provide a mechanism for identifying problems and assessing progress. The Under Secretary for Health told us that VHA does not have business goals. While he was unable to explain why VHA had not established business goals earlier in the project, the Under Secretary acknowledged the importance of business goals and said that they were a necessary prerequisite for developing performance measures. The lack of business goals for VHA has contributed to a lack of clear goals for the DSS project. Without clear business goals for DSS, the individuals involved with the project set their own personal objectives for DSS. These varied and sometimes conflicted. For example, the Project Director’s goal was simply to implement DSS at the 161 VA hospitals—how each hospital used DSS was up to each hospital. The objective of the Deputy Director for Technical Implementation was for DSS to accurately capture all clinical episodes of care. The Deputy Director for Quality Management’s goal was to achieve health care delivery improvements. Clear business goals could incorporate these objectives into a common framework to enhance VHA health care delivery. The senior information resource management (IRM) executive in an organization should play a critical role in seeing that business and information strategies are carefully coordinated to achieve organizational goals. The VHA organizational structure currently does not have an executive in a position to coordinate competing priorities between DHCP and DSS and effectively allocate limited IRM resources. For example, no one at VHA is setting priorities on the critical data elements needed in DHCP to support the DSS information infrastructure. 
As we discuss later, DSS requires some key data not currently captured in DHCP. To obtain the data from DHCP would require VHA top management to direct MIRMO, responsible for managing DHCP and related projects, to work on DSS priorities. However, the DSS Project Office and MIRMO report to different individuals. While both offices are organizationally under the Deputy Under Secretary for Health for Administration and Operations, this position has been vacant since January 31, 1995. As we have previously stated, VHA does not operate as a centrally managed health care system but as individual medical centers competing with each other to provide as wide a range of services as possible. Medical center directors’ performances are generally judged by what new facilities, services, and equipment they bring to the medical centers. During the initial DSS test period, several directors at one VA hospital did not see DSS as needed, were not interested in using DSS, and did not attempt to understand it. VHA is in the process of replacing its current regional system, which is comprised of four regions, with 22 Veterans Integrated Service Networks. VHA’s vision, according to its March 1995 restructuring plan, is to improve customer satisfaction, quality of care, access, and cost-effectiveness. The plan also states that “VHA has instilled certain behaviors and attitudes in its employees that are not compatible with this new direction.” The Under Secretary for Health recognizes that this transformation will take time, and that it will not be easy to change VHA’s decades-old culture. He further stated that if the veterans health care system is to remain viable it must fundamentally change its approach to providing care. We met with the Under Secretary for Health on March 10, 1995, and expressed our concerns about the lack of a comprehensive business plan for DSS, including a lack of leadership, goals, and performance measures. 
In response to our concerns, the Under Secretary for Health recently initiated steps to address the need for a coordinated business strategy for DSS. In a May 18, 1995, memorandum, he stated that DSS is one of VHA’s top information systems priorities. In addition, VHA plans to reorganize its IRM organizational structure. Specifically, it plans to place DSS and clinical feeder systems such as DHCP under the newly created position of VHA Chief Information Officer, which reports to the Under Secretary. These actions should help address the lack of leadership and competing IRM priorities. Finally, to help address some of the cultural issues, the Under Secretary for Health plans to implement a performance-based pay system. According to VHA’s restructuring plan, managers have historically been evaluated on a variety of inconsistent, often changing performance indicators that were frequently subjective. In contrast, the performance-based system is expected to hold field units and senior managers accountable for objective, measurable achievements. However, VHA has not yet articulated clear business goals or formulated a comprehensive business plan for DSS. Accurate and complete data from VA’s feeder systems are also critical to the success of DSS. Anything less invites the classic problem of “garbage in, garbage out.” If inaccurate and incomplete data are input to DSS, DSS either will not be used because its data will not be credible, or managers and health care providers relying on DSS will make poor decisions based on incorrect data. We found that some of the key clinical data in DHCP and other clinical feeder systems are being collected completely and provided to DSS. For example, general laboratory test information is collected by DHCP’s laboratory software and provided to DSS. The lab software collects all needed pieces of information to define a billable event. Radiology is another clinical area in which DHCP collects all needed information for input to DSS. 
However, as shown in figure 5, we also found that some clinical data are incomplete, inaccurate, or inconsistent. For analysis and decision-making purposes, DSS must have information on all relevant clinical events or clinical workload. This information is equivalent to data describing the clinical services billed to the payor in the private sector. For VA, the following information is needed from DHCP and other clinical feeder systems to define a clinical billable event: patient identification; provider identification—who ordered or provided the treatment; time and date of treatment; description of service provided, for example, type of x-ray or lab test; and location where the service was provided. These data must be captured as needed to support the specific management decisions to be made using DSS. Our review showed that some clinical data provided to DSS from DHCP and other clinical feeder systems are incomplete or inaccurate. These problems stem from the fact that DHCP was not designed to capture itemized clinical billing information and feed this information to a billing or decision support system. Moreover, as we discussed earlier, VHA management has not identified specific decisions that DSS is to support, which is a critical factor in determining the data needed for DSS. Incomplete clinical data make it difficult to perform detailed analysis of clinical costs and activities and make appropriate improvements regarding cost-effectiveness and quality of care. Inaccurate clinical data could cause decisions to be made on the basis of erroneous information. Inconsistent clinical data make efforts to consolidate data across VA medical centers for corporate roll-up difficult. In addition, VHA needs to properly record clinical events in the correct time period and reconcile these events to ensure accuracy and completeness of data—a process called close out. The use of DSS is based on data flowing from the feeder systems to DSS on a monthly basis. 
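The five elements that define a clinical billable event lend themselves to a simple completeness check of the kind a feeder-system extract might apply before passing records to DSS. The field names below are our own shorthand for the elements listed above, not DHCP's actual schema:

```python
# Required elements of a clinical billable event, per the report:
# patient ID, provider ID, time/date, service description, and location.
REQUIRED_FIELDS = ("patient_id", "provider_id", "datetime",
                   "service_description", "location")

def missing_fields(event: dict) -> list:
    """Return the required fields that are absent or empty in an event."""
    return [f for f in REQUIRED_FIELDS if not event.get(f)]

# Hypothetical extract records: one complete, one incomplete.
events = [
    {"patient_id": "P1", "provider_id": "D9", "datetime": "1995-06-01T08:30",
     "service_description": "chest x-ray", "location": "radiology"},
    {"patient_id": "P2", "provider_id": "", "datetime": "1995-06-01T09:00",
     "service_description": "lab test", "location": None},
]

incomplete = {i: missing_fields(e)
              for i, e in enumerate(events) if missing_fields(e)}
print(incomplete)  # {1: ['provider_id', 'location']}
```

Flagging such records at extract time, rather than after they reach DSS, is one way incomplete data could be caught before it undermines analysis.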
Implicit in this transfer is the availability of accurate and complete information at the end of each month. To accomplish this, private sector facilities reconcile, or close out, their clinical workload records monthly. In contrast, VA closes out its records on an annual basis only, at the end of each fiscal year. Timely monthly close out would allow VA to know the cost of medical care provided within discrete time frames. This would facilitate periodic cost analyses, faster identification of trends and patterns, and more timely adjustment of health care practices—key DSS benefits. Failure to close out in a timely manner can adversely affect the usefulness of the data for decision-making and result in an administrative burden in making necessary adjustments to clinical workload records. For example, at VA’s fiscal year 1994 annual close out, it had to correct 8 million outpatient visits, out of a total of 23 million visits documented in its computerized outpatient clinic file. These records would need to be accurate and complete at the end of each month to support DSS. Adopting monthly close out will require fundamental restructuring of administrative activities at VA facilities. Finally, VA recognizes deficiencies with its financial systems that feed DSS. For example, the audits of VA’s consolidated financial statements for fiscal years 1994 and 1993, which were conducted by the Office of the Inspector General, reported that real property, plant, and equipment, and related depreciation account balances captured in the Centralized Accounting for Local Management system were unreliable because some accounting personnel at the VHA hospitals lacked sufficient training and oversight. 
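A monthly close out of the sort described above can be sketched as comparing, month by month, the workload a feeder system recorded against what reached DSS, and flagging months that do not reconcile. All counts below are invented for illustration:

```python
# Hypothetical monthly outpatient-visit counts: what the feeder system
# recorded versus what was transferred to DSS.
feeder_visits = {"1995-04": 1200, "1995-05": 1315, "1995-06": 1280}
dss_visits    = {"1995-04": 1200, "1995-05": 1290, "1995-06": 1280}

def close_out(month: str) -> int:
    """Return the adjustment needed for a month (0 means reconciled)."""
    return feeder_visits[month] - dss_visits.get(month, 0)

# Months needing correction before the month's data can be relied on.
discrepancies = {m: close_out(m) for m in feeder_visits if close_out(m) != 0}
print(discrepancies)  # {'1995-05': 25}
```

Reconciling each month in this way keeps corrections small and current, in contrast to VA's annual close out, where 8 million of 23 million outpatient visit records needed correction at fiscal year-end.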
Additionally, according to VA’s 1994 and 1993 Federal Managers’ Financial Integrity Act reports, the Personnel Accounting Integrated Data System cannot support mission-critical resource accounting functions necessary to support initiatives such as the National Performance Review, MCCR, and DSS. Without accurate and complete financial information, VHA cannot determine the cost of clinical events. VA is currently in the process of replacing its Centralized Accounting for Local Management system with a new system, known as the Financial Management System, which is expected to be fully functional in October 1995. During our March 10, 1995, meeting with the Under Secretary for Health, we expressed concerns about the integrity of data being provided to DSS, and the fact that VHA was going ahead with the scheduled DSS implementations in light of these problems and others, such as the lack of business goals and performance measures. We also suggested that VHA consider selecting a small number of sites to pilot the use of DSS by management before the system is implemented throughout VA. By piloting DSS at selected sites, VHA can (1) document the kinds of benefits that have been gained from using the system and (2) identify the problems that have occurred at the pilot test sites requiring top management’s attention and resolution. To address our concerns, the Under Secretary for Health took several actions. Specifically, in his May 18, 1995, memorandum, the Under Secretary reduced the number of additional hospitals scheduled for July implementation from 30 to 6 and established a team to ensure that some data elements are consistent across VA medical centers. In addition, he told us VHA plans to have a system in place to collect all billable outpatient care information by October 1996. 
While these actions begin to address some of our concerns, VHA still does not have a comprehensive plan to (1) identify what data are needed to achieve its business goals, (2) correct known flaws in its data, or (3) ensure that its feeder system software will collect the data needed by DSS. In addition, VHA has not identified specific DSS sites to pilot the use of the system as a management tool, documenting the benefits gained and the problems encountered from using DSS. Top management leadership is crucial if VHA is to effectively use DSS as a management tool—and DSS is essential if health care costs, quality, and reimbursement are to be effectively managed by VHA. A comprehensive, proactive DSS strategy that establishes business goals, leadership, and accountability would provide a framework within which management could improve health care delivery and cost recovery. This will not be easy and will take time. If VA is to achieve the benefits associated with DSS, it must change a decades-old culture in which business is conducted without enough focus on delivering high quality health care at minimal cost. In addition, for DSS to be useful for decision-making, it will require a complete and accurate information infrastructure. We are encouraged by the recent steps taken by the Under Secretary for Health. He has demonstrated an understanding of the issues and a willingness to respond. However, unless the Under Secretary’s actions are sustained and expanded to fully address the organizational and information infrastructure issues identified, including piloting DSS at a small number of sites, the millions of dollars invested in DSS to date are at risk. 
To increase the likelihood of DSS’ success, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to develop a comprehensive business strategy to identify the specific business goals (for example, reduction of cost in a specific area by a specific percentage), performance measures, and key decisions that DSS will be required to support; give high priority, by allocating appropriate resources, to establishing a complete, consistent, and accurate DSS information infrastructure; and identify data that are needed to support decision-making and ensure that these data are complete, accurate, consistent, and reconciled monthly. We also recommend that VA not implement DSS at any site beyond the 38 already begun until (1) defined business goals and an information infrastructure supporting key decisions are in place and (2) VA’s capability to use DSS effectively as a management tool can be demonstrated. The VHA Chief Financial Officer provided oral comments to our draft report. He stated that the report was a fair, open, and honest assessment of VA’s efforts to implement DSS and that VA concurred with most of the recommendations in the report. VA concurred with our recommendation to establish a business strategy and specific business goals and has already taken several actions in this regard. The Under Secretary for Health recently established a work group on performance measures that will be a key component of this effort. In addition, VA recently appointed a new DSS Program Director, and his first priority is to draft and implement a detailed DSS business plan. The Under Secretary for Health also authorized establishing a DSS Corporate Advisory Board to oversee implementation of major systemwide policies and a Field Advisory Board to identify, prioritize, track, and resolve issues that arise from pilot site experience. 
VA also concurred with our recommendation to allocate appropriate resources to support the DSS information infrastructure. The new VHA Chief Information Officer will oversee both DHCP and DSS. This individual and the VHA Chief Financial Officer will address resource allocation needs relating to these systems. VA concurred with our recommendation to identify data needed to support decision-making and ensure that these data are complete, accurate, and consistent. However, VA did not agree that monthly reconciliations of clinical workload records were necessary in light of its future data improvement plans. Specifically, VA plans to establish a national patient care database, which is expected to be implemented in October 1996, that would provide the agency with patient-unique encounter data so that individual changes can be monitored and used in an automatic reconciliation process. The VHA Chief Financial Officer stated that VA’s efforts to establish the database would be hampered if scarce resources were diverted to performing monthly reconciliations. To ensure accuracy and completeness of data, we believe that VA should reconcile its clinical workload records on a monthly rather than annual basis because VA plans to use DSS on a monthly basis. As we pointed out in this report, timely monthly reconciliation or close out would allow VA to know the cost of medical care provided within discrete time frames. This would also facilitate periodic cost analyses, faster identification of trends and patterns, and more timely adjustment of health care practices. Failure to close out in a timely manner can adversely affect the usefulness of data in DSS for decision-making purposes and result in an administrative burden in making necessary adjustments to clinical workload records at fiscal year-end. Furthermore, the VHA Chief Financial Officer did not clearly explain how the national patient care database would eliminate VA’s need to perform monthly reconciliations. 
We believe that until this database is implemented and providing complete and accurate data to DSS and until the automated reconciliation process is defined and operating effectively, VA should perform monthly reconciliations. Also, it is crucial that as VA begins to develop this database, it ensures that adequate internal control policies and procedures are in place so that the database captures, maintains, and generates timely, accurate data. Lastly, the VHA Chief Financial Officer did not agree that DSS should not be implemented beyond the 38 sites already begun until (1) defined business goals and a supporting information infrastructure are in place and (2) VA has demonstrated its ability to use DSS effectively. He indicated that VA has made progress and is confident that it will be able to effectively use DSS as a management tool. He also indicated that private sector hospitals that use DSS did not always have good, reliable data after 1 year and that expectations for VA’s implementation should be realistic. He felt that slowing down the implementation of DSS could jeopardize its success. While we agreed with the VHA Chief Financial Officer that private sector hospitals implementing DSS may not necessarily have complete and accurate data after 1 year, these hospitals generally have other controls in place, such as billing systems, which provide them some degree of financial accountability. VA, in contrast to the private sector, does not have a billing system. Also, no private sector hospital has implemented DSS at as many sites or as rapidly as VA plans to do. For example, one private sector health care organization told us that it implemented DSS at four sites over a period of 18 months. In addition, the likelihood of DSS’s success will be jeopardized by deploying it to 161 sites before a complete and accurate information infrastructure and effective procedures for its use are in place. 
We believe that a more appropriate course of action is to pilot DSS at a small number of sites capable of such an undertaking, ensuring that it is free from significant data integrity problems, that supporting procedures and controls are in place, and that the system is useful to management before it is deployed across 161 sites. We are sending copies of this report to the Chairman, Subcommittee on Veterans Affairs, Housing and Urban Development, and Independent Agencies, Senate Committee on Appropriations; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6252 if you or your staffs have any questions concerning this report. Major contributors to this report are listed in appendix I. Janet M. Chapman, Senior Evaluator The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
| Pursuant to a congressional request, GAO reviewed the Veterans Health Administration's (VHA) efforts to implement a medical decision support system, focusing on: (1) the kinds of benefits that such a system can provide the Department of Veterans Affairs (VA); (2) whether VA is pursuing the comprehensive business strategy needed to achieve these benefits; and (3) whether VA is establishing an adequate information infrastructure for the Decision Support System (DSS). GAO found that: (1) DSS has the potential to be an effective tool for improving the quality and cost-effectiveness of VHA health care operations; (2) VA has not formulated business goals or a comprehensive implementation strategy to clearly define how it will use DSS-generated information or prioritize its limited resources to implement DSS; (3) VA also has not established the information infrastructure needed to support DSS; (4) some of the data provided to DSS from other VA information systems are incomplete and inaccurate, limiting VA's ability to make sound business decisions; and (5) sustaining top management leadership and commitment within VHA is critical to the successful implementation and use of DSS. |
Mr. Chairman and Members of the Caucus: I am pleased to be here today to discuss the serious and continuing threat of corruption to Immigration and Naturalization Service (INS) and U.S. Customs Service employees along the Southwest Border by persons involved in the illegal drug trade. The enormous sums of money being generated by drug trafficking have increased the threat of bribery. It is a challenge that INS, Customs, and other law enforcement agencies must overcome at the border. My testimony focuses on (1) the extent to which INS and Customs have and comply with policies and procedures for ensuring employee integrity; (2) an identification and comparison of the Departments of Justice’s and the Treasury’s organizational structures, policies, and procedures for handling allegations of drug-related employee misconduct and whether the policies and procedures are followed; (3) an identification of the types of illegal drug-related activities in which INS and Customs employees on the Southwest Border have been convicted; and (4) the extent to which lessons learned from corruption cases closed in fiscal years 1992 through 1997 have led to changes in policies and procedures for preventing the drug-related corruption of INS and Customs employees. This statement is based on our March 30, 1999, report on drug-related employee corruption. Our statement makes the following points: INS’ and Customs’ compliance with their integrity procedures varied. Justice’s Office of the Inspector General (OIG) and INS generally complied with investigative procedures, but Customs’ compliance was uncertain. Opportunities to learn lessons from closed corruption cases have been missed. Thousands of INS and Customs employees are stationed across the Southwest Border. At the ports of entry, about 1,300 INS and 2,000 Customs inspectors are to check incoming traffic to identify both persons and contraband that are not allowed to enter the country. 
Between the ports of entry and along thoroughfares in border areas, about 6,300 INS Border Patrol agents are to detect and prevent the illegal entry of persons and contraband. The corruption of INS or Customs employees is not a new phenomenon, and the 1990s have seen congressional emphasis on ensuring employee integrity and preventing corruption. A corrupt INS or Customs employee at or between the ports of entry can help facilitate the safe passage of illegal drug shipments. The integrity policies and procedures adopted by INS and Customs are designed to ensure that their employees, especially those in positions that could affect the smuggling of illegal drugs into the United States, are of acceptable integrity and, failing that, to detect any corruption as quickly as possible. INS and Customs follow Office of Personnel Management (OPM) regulations, which require background investigations to be completed for new hires by the end of their first year on the job. Generally, the background investigations included a credit check, criminal record check, contact with prior employers and personal references, and an interview with the employee. Our review found that background investigations for over 99 percent of the immigration inspectors, Border Patrol agents, and Customs inspectors hired during the first half of fiscal year 1997 were completed by the end of their first year on the job. However, both agencies had backlogs of the required 5-year reinvestigations of employees due during fiscal years 1995 through 1997. In some instances, reinvestigations were as many as 3 years overdue. To the extent that a reinvestigation constitutes an important periodic check on an employee’s continuing suitability for employment in a position where he or she may be exposed to bribery or other types of corruption, the continuing reinvestigation backlogs at both agencies leave them more vulnerable to potential employee corruption. As of March 1998, INS had not yet completed 513 overdue reinvestigations of immigration inspectors and Border Patrol agents. 
Customs had a backlog of 421 overdue reinvestigations. Newly hired immigration inspectors, Border Patrol agents, and Customs inspectors are required to attend basic training. As part of their basic training, new employees are to receive training courses on integrity concepts and expected behavior, including ethical concepts and values, ethical dilemmas and decisionmaking, and employee conduct expectations. This integrity training provides the only required integrity training for all immigration inspectors, Border Patrol agents, and Customs inspectors. For Border Patrol agents, 7 of 744 basic training hours are to be devoted to integrity training. For Customs inspectors, 8 of 440 basic training hours are to be devoted to integrity training. INS immigration inspectors are to receive integrity training as part of their basic training, but it is interspersed with other training rather than provided as a separate course. Therefore, we could not determine how many hours are to be devoted specifically to integrity training. We selected random samples of 100 immigration inspectors, 101 Border Patrol agents, and 100 Customs inspectors to determine whether they received integrity training as part of their basic training. Agency records we reviewed showed that 95 of 100 immigration inspectors, all 101 Border Patrol agents, and 88 of 100 Customs inspectors had received basic training. According to INS and Customs officials, the remaining employees likely received basic training, but it was not documented in their records. Justice OIG, INS, and Customs officials advocated advanced integrity training for their employees to reinforce the integrity concepts presented during basic training. The Justice OIG, INS’ Office of Internal Audit, and Customs provide advanced integrity training for INS and Customs employees. 
While this advanced training has been available to immigration inspectors, Border Patrol agents, and Customs inspectors, they were not required to take it or any additional integrity training beyond what they received in basic training. Consequently, some immigration inspectors, Border Patrol agents, and Customs inspectors assigned to the Southwest Border had not received any advanced integrity training in over 2 years. Based on a survey of random samples of immigration inspectors, Border Patrol agents, and Customs inspectors assigned to the Southwest Border, we found that during fiscal years 1995 through 1997, 60 of 100 immigration inspectors received no advanced integrity training. In addition, 60 of 76 Border Patrol agents received no advanced integrity training during the almost 2½-year period we examined. The Customs survey indicated that 24 of 100 Customs inspectors received no advanced integrity training during this period. The Departments of Justice and the Treasury have established procedures for handling allegations of employee misconduct. Misconduct allegations arise from numerous sources, including confidential informants, cooperating witnesses, anonymous tipsters, and whistle-blowers. For example, whistle-blowers can report alleged misconduct through the agencies’ procedures for reporting any suspected wrongdoing. INS and Customs have policies that require employees to report suspected wrongdoing. We selected five Justice OIG procedures to evaluate compliance with the processing of employee misconduct allegations. In a majority of the cases we reviewed, the Justice OIG complied with its procedures for receiving, investigating, and resolving drug-related employee misconduct allegations. For example, monthly interim reports were prepared as required in 28 of 39 opened cases we reviewed. In the remaining 11 cases, either some information was missing in interim reports or there were no interim reports in the case file. 
INS’ Office of Internal Audit complied with its procedures for receiving and resolving employee misconduct allegations in all of its cases. Because Customs’ Office of Internal Affairs’ automated case management system did not track adherence to Customs’ processing requirements, we could not readily determine if the Office of Internal Affairs staff complied with their investigative procedures. Customs’ automated system is the official investigative record. It tracks and categorizes misconduct allegations and resulting investigations and disciplinary action. The investigative case files are to support the automated system in tracking criminal investigative activity and contain such information as printed records from the automated system, copies of subpoenas and arrest warrants, and a chronology of investigative events. Based on these content criteria and our file reviews, the investigative case files are not intended to and generally do not document the adherence to processing procedures. Our analysis of the 28 closed cases revealed that drug-related corruption in these cases was not restricted to any one type, location, agency, or job. Corruption occurred in many locations and under various circumstances and times, underscoring the need for comprehensive integrity procedures that are effective. The cases also represented an opportunity to identify internal control weaknesses. The 28 INS and Customs employees engaged in one or more drug-related criminal activities, including waving drug-laden vehicles through ports of entry, coordinating the movement of drugs across the Southwest Border, transporting drugs past Border Patrol checkpoints, selling drugs, and disclosing drug intelligence information. The 28 convicted employees (19 INS employees and 9 Customs employees) were stationed at various locations on the Southwest Border. 
Six each were stationed in El Paso, TX, and Calexico, CA; four were stationed in Douglas, AZ; three were stationed in San Ysidro, CA; two each were stationed in Hidalgo, TX, and Los Fresnos, TX; and one each was stationed in Naco, AZ; Chula Vista, CA; Bayview, TX; Harlingen, TX; and Falfurrias, TX. The 28 INS and Customs employees who were convicted for drug-related crimes included 10 immigration inspectors, 7 Customs inspectors, 6 Border Patrol agents, 3 INS Detention Enforcement Officers (DEO), 1 Customs canine enforcement officer, and 1 Customs operational analysis specialist. All but the three DEOs had anti-drug smuggling responsibilities. Twenty-six of the convicted employees were men; two were women. The employment histories of the convicted employees varied substantially. In 19 cases, the employees acted alone; that is, no other INS or Customs employees were involved in the drug-related criminal activity. In the remaining nine cases, two or more INS and/or Customs employees acted together. Of the 28 cases, 23 originated from information provided by confidential informants or cooperating witnesses, and 5 cases originated from information provided by agency whistle-blowers. Prison sentences for the convicted employees ranged from 30 days, for disclosure of confidential information, to life imprisonment for drug conspiracy, money laundering, and bribery. The average sentence was about 10 years. Both the Justice OIG and Customs procedures require them to formally report internal control weaknesses identified during investigations, including drug-related corruption investigations involving INS and Customs employees. Generally, the Justice OIG and Customs’ Office of Internal Affairs, respectively, have lead responsibility for investigating criminal allegations involving INS and Customs employees. Reports of internal control weaknesses are to identify any lessons to be learned that can be used to prevent further employee corruption. 
The reports are to be forwarded to agency officials who are responsible for taking corrective action. Reports are not required if no internal control weaknesses are identified. In the 28 cases involving INS or Customs employees who were convicted for drug-related crimes in fiscal years 1992 through 1997, no reports were prepared. We concluded from this that either (1) there were no internal control weaknesses revealed by, or lessons to be learned from, these corruption cases or (2) opportunities to identify and correct internal control weaknesses have been missed, and thus INS’ and Customs’ vulnerability to employee corruption has not been reduced. Justice’s OIG investigated 13 of the 28 cases. The investigative files did not document whether procedures were reviewed to identify internal control weaknesses. Further, there were no reports identifying internal control weaknesses. According to a Justice OIG official, no reports are required if no weaknesses are identified, and he could not determine why reports were not prepared in these cases. Customs’ Office of Internal Affairs’ Internal Affairs Handbook provides for the preparation of a procedural deficiency report in those internal investigations where there was a significant failure that resulted from (1) failure to follow an established procedure, (2) lack of an established procedure, or (3) conflicting or obsolete procedures. The report is to detail the causal factors and scope of the deficiency. We identified eight cases involving Customs employees investigated by Customs’ Office of Internal Affairs. No procedural deficiency reports were prepared in these cases. Further, the investigative files did not document whether internal control weaknesses were identified. A Customs official said the reports are generally not prepared. 
Although the Justice OIG and Customs’ Office of Internal Affairs have lead responsibility for investigating allegations involving INS and Customs employees, the FBI is authorized to investigate INS or Customs employees. Of the 28 cases, the FBI investigated 7, involving 6 INS employees and 1 Customs employee. Under current procedures, the FBI is not required to provide the Justice OIG or Customs’ Office of Internal Affairs with case information that would allow them to identify internal control weaknesses, where the FBI investigation involves an INS or Customs employee. In addition, while Attorney General memorandums require the FBI to identify and report any internal control weaknesses identified during white-collar or health care fraud investigations, a Justice Department official told us that these reporting requirements do not apply to drug-related corruption cases. According to FBI officials, no reports were prepared in the seven cases because they were not required. The Justice OIG and Customs did not identify and report any internal control weaknesses involving the procedures that were followed at the ports of entry and at Border Patrol checkpoints along the Southwest Border. Our review of the same cases identified several weaknesses. At the ports of entry, INS and Customs have used internal controls designed to deter corruption. These have included the random assignment and shifting of inspectors from one lane to another and the unannounced inspection of a group of vehicles. However, in the cases we reviewed, these internal controls did not prevent corrupt INS and Customs personnel from allowing drug-laden vehicles to enter the United States. In some cases, the inspectors communicated their lane assignment and the time they would be on duty to the drug smuggler, and in other cases, they did not. In one case, for example, an inspector used a cellular telephone to send a prearranged code to a drug smuggler’s beeper to tell him which lane to use and what time to use it. 
In contrast, another inspector did not notify the drug smuggler concerning his lane assignment or the times he would be on duty. In that case, the drug smuggler used an individual, referred to as a spotter, to conduct surveillance of the port of entry. The spotter used a cellular telephone to contact the driver of the drug-laden vehicle to tell him which lane to drive through. The drug smugglers’ schemes succeeded in these cases because the drivers of the drug-laden vehicles could choose the lane they wanted to use for inspection purposes. These cases support the implementation of one or more methods to deprive drivers of their choice of inspection lanes at ports of entry. At the time of our review, Customs was testing a method to assign drivers to inspection lanes at ports of entry. In 10 of 28 cases, drug smugglers relied on friendships, personal relationships, or symbols of law enforcement authority to move drug loads through a port of entry or past a Border Patrol checkpoint. In these 10 cases, drug smugglers believed that coworkers, relatives, and friends of Customs or immigration inspectors, or law enforcement officials, would not be inspected or would be given preferential treatment in the inspection process. For example, a Border Patrol agent relied on his friendships with his coworkers to avoid inspection at a Border Patrol checkpoint where he was stationed. In another case, an inspector agreed to allow her boyfriend to smuggle drugs through a port of entry. The boyfriend used his personal and intimate relationship with the inspector to solicit drug shipments from drug dealers. Two DEOs working together used INS detention buses and vans to transport drugs past a Border Patrol checkpoint. In two separate cases, former INS employees relied on friendships they had developed during their tenure with the agency to smuggle drugs through ports of entry and past Border Patrol checkpoints. INS and Customs have not addressed situations in which the relationship between an inspector and the person being inspected is such that the inspector may not objectively perform the inspection. 
Nor do they have a written inspection policy for law enforcement officers or their vehicles. For example, our review of the cases determined that, on numerous occasions, INS DEOs drove INS vehicles with drug loads past Border Patrol checkpoints without being inspected. INS and Customs have not evaluated the effectiveness of their integrity assurance procedures to identify areas that could be improved. According to Justice OIG, INS, and Customs officials, agency integrity procedures have not been evaluated to determine if they are effective. The Acting Deputy Commissioner of Customs said that there were no evaluations of the effectiveness of Customs integrity procedures. Similarly, officials in INS’ Offices of Internal Audit and Personnel Security said that there were no evaluations of the effectiveness of INS’ integrity procedures. According to the Justice Inspector General, virtually no work had been done to review closed corruption cases or interview convicted employees to identify areas of vulnerability. Based on our review, one way to evaluate the effectiveness of agency integrity procedures would be to use drug-related investigative case information. For example, the objective of background investigations or reinvestigations is to determine an individual’s suitability for employment, including whether he or she has the required integrity. All 28 of the INS and Customs employees who were convicted for drug-related crimes received background investigations or reinvestigations that determined they were suitable. According to INS and Customs security officials, financial information, required to be provided by employees as part of their background investigations or reinvestigations, is to be used to determine whether they appear to be living beyond their means, or have unsatisfied debts. If either of these issues arises, it must be satisfactorily resolved before INS or Customs can determine that the employee is suitable. 
In addition, Justice policy provides for the temporary removal of immigration inspectors and Border Patrol agents if they are unable and/or unwilling to satisfy their debts. Immigration inspectors and Border Patrol agents were required to report only liabilities, such as debts that had not been paid. They were not required to provide information on their assets. In comparison, Customs inspectors and canine enforcement officers were required to provide information on both their assets and liabilities, including financial information for themselves and their immediate families on their bank accounts, automobiles, real estate, securities, safe deposit boxes, business investments, art, boats, antiques, inheritance, mortgage, and debts and obligations exceeding $200. Our review of the 28 cases involving convicted INS and Customs employees disclosed that 26 of 28 employees were offered or received financial remuneration for their illegal acts. At least two were substantially indebted, and at least four were shown to be living beyond their means. For example, one of the closed cases we reviewed involved an immigration inspector who said he became involved with a drug smuggler because he had substantial credit card debt and was on the verge of bankruptcy. Given the limited financial information immigration inspectors are required to provide, this inspector might not have been identified as a potential risk. In another case, a mid-level Border Patrol agent owned a house valued at approximately $200,000, an Olympic-sized swimming pool in its own separate building, a 5-car garage, 5 automobiles, 1 van, 2 boats, approximately 100 weapons, $45,000 in treasury bills, 40 acres of land, and had no debt. Given the current background investigation or reinvestigation financial reporting requirements for Border Patrol agents, this agent would not have had anything to report, since he was not required to report his assets, and he had no debts to report. 
Our review of Customs files for eight of the nine convicted Customs employees showed that the Customs inspectors and canine enforcement officers had completed financial disclosure statements that included their assets and liabilities as part of their employee background investigations and reinvestigations. However, based on our case file review, Customs does not fully use all of the financial information. For example, according to a Customs official, reported liabilities are to be compared with debts listed on a credit report to determine if all debts were reported. Thus, Customs’ current use of the reported financial information would not have helped to identify an employee who was living well beyond his means or whose debts were excessive. Another source of evaluative information for INS and Customs could be the experiences of other federal agencies with integrity prevention and detection policies and procedures. For example, while INS’ and Customs’ procedures were similar to those used by other federal law enforcement agencies, several differences existed. According to agency officials, INS and Customs did not require advanced integrity training, polygraph examinations, or panel interviews before hiring, while the FBI, DEA, and Secret Service did have these requirements. Among the five agencies, only DEA required new employees to be assigned to a mentor to reinforce agency values and procedures. Since these policies and procedures are used by other agencies, they may be applicable to INS and Customs. During our review, the Justice OIG, INS, the Treasury OIG, and Customs began to review their anticorruption efforts. These efforts have not been completed, and it is too early to determine what their outcomes will be. 
Among other things, our March 1999 report recommended that the Attorney General require the Justice OIG to document that policies and procedures were reviewed to identify internal control weaknesses in cases where an INS employee is determined to have engaged in drug-related criminal activities; and require the Director of the FBI to develop a procedure to provide information from closed FBI cases, involving INS or Customs employees, to the Justice OIG or Customs’ Office of Internal Affairs so they can identify and report internal control weaknesses to the responsible agency official. The procedure should apply in those cases where (1) the Justice OIG or Customs’ Office of Internal Affairs was not involved in the investigation, (2) the subject of the investigation was an INS or Customs employee, and (3) the employee was convicted of a drug-related crime. We also recommended that the Secretary of the Treasury require that Customs fully review financial disclosure statements, which employees are required to provide as part of the background investigation or reinvestigation process, to identify financial issues, such as employees who appear to be living beyond their means. The Department of Justice generally agreed with the substance of the report and recognized the importance of taking all possible actions to reduce the potential for corruption. However, Justice expressed reservations about implementing two of the six recommendations addressed to the Attorney General. First, Justice expressed reservations about implementing our recommendation that Border Patrol agents and immigration inspectors file financial disclosure statements as part of their background investigations or reinvestigations. Specifically, it noted that implementing financial disclosure “has obstacles to be met and at present the DOJ has limited data to suggest that they would provide better data or greater assurance of a person’s integrity.” We recognized that implementation of this recommendation will require some administrative actions by INS. 
However, these actions are consistent with the routine management practices associated with making policy changes within the agency. Therefore, the obstacles do not appear to be inordinate or insurmountable. Concerning the limited data about the benefits of financial reporting, according to OPM officials and the adjudication manual for background investigations and reinvestigations, financial information can have a direct bearing and impact on determining an individual’s integrity. The circumstances described in our case studies suggest that financial reporting could have raised issues for follow-up during a background investigation or reinvestigation. We recognize that there may be questions on the effectiveness of this procedure; therefore, this report contains a recommendation for an overall evaluation of INS’ integrity assurance efforts. Second, Justice expressed reservations about our recommendation that the FBI provide information from closed cases involving INS or Customs employees to the Justice OIG or Customs’ Office of Internal Affairs. However, if such information is not shared with those agencies, then the agencies are not in the best position to correct the abuses. The Department of the Treasury provided comments from Customs that generally concurred with our recommendations and indicated that it is taking steps to implement them. However, Customs requested that we reconsider our recommendation that Customs fully review financial disclosure statements that are provided as part of the background and reinvestigation process. Our recommendation expected Customs to make a more thorough examination of the financial information it collects to determine if employees appear to be living beyond their means. We leave it to Customs’ discretion to determine the type of examination to be performed. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Caucus may have. 
| GAO discussed the threat of corruption to Immigration and Naturalization Service (INS) and Customs Service employees along the Southwest Border, focusing on: (1) the extent to which INS and the Customs Service have and comply with policies and procedures for ensuring employee integrity; (2) an identification and comparison of the Departments of Justice's and the Treasury's organizational structures, policies, and procedures for handling allegations of drug-related employee misconduct and whether the policies and procedures are followed; (3) an identification of the types of illegal drug-related activities in which INS and Customs employees on the Southwest Border have been convicted; and (4) the extent to which lessons learned from corruption cases closed in fiscal years 1992 through 1997 have led to changes in policies and procedures for preventing the drug-related corruption of INS and Customs employees. GAO noted that: (1) some INS and U.S. 
Customs Service employees on the Southwest Border have engaged in a variety of illegal drug-related activities, including waving drug loads through ports of entry, coordinating the movement of drugs across the Southwest Border, transporting drugs past Border Patrol checkpoints, selling drugs, and disclosing drug intelligence information; (2) both INS and Customs have policies and procedures designed to help ensure the integrity of their employees; (3) however, neither agency is taking full advantage of its policies and procedures and the lessons to be learned from closed corruption cases; (4) the policies and procedures consist mainly of mandatory background investigations for new staff and 5-year reinvestigations of employees, as well as basic integrity training; (5) while the agencies generally completed required background investigations for new hires by the end of their first year on the job, reinvestigations were typically overdue, in some instances by as many as 3 years; (6) both INS and Customs provided integrity training to new employees during basic training, but advanced integrity training was not required; (7) Justice and Treasury have different organizational structures but similar policies and procedures for handling allegations of drug-related misconduct; (8) at Justice, the Office of the Inspector General is generally responsible for investigating criminal allegations against INS employees; (9) GAO found that the Justice OIG generally complied with its policies and procedures for handling allegations of drug-related misconduct; (10) at Treasury, Customs' Office of Internal Affairs (OIA) is generally responsible for investigating both criminal and noncriminal allegations against Customs employees; (11) Customs' automated case management system and its investigative case files did not provide the necessary information to assess compliance with investigative procedures; (12) INS and Customs have missed opportunities to learn lessons and change their 
policies and procedures for preventing drug-related corruption of their employees; (13) the Justice OIG and Customs' OIA are required to formally report internal control weaknesses identified from closed corruption cases, but have not done so; (14) GAO's review of 28 cases involving INS and Customs employees assigned to the Southwest Border, who were convicted of drug-related crimes in fiscal years 1992 through 1997, revealed internal control weaknesses that were not formally reported; and (15) INS and Customs had not formally evaluated their integrity procedures to determine their effectiveness. |
Acquisition planning activities should integrate the efforts of all personnel responsible for significant aspects of the acquisition. Generally, program and contracting officials share responsibility for the majority of acquisition planning activities. Although there is variation among agency processes, acquisition planning for individual contracts typically occurs in three phases (see figure 1):
1. Pre-solicitation: Acquisition planning activities generally begin when the program office identifies a need. The program office contacts its contracting office for guidance on how to develop and prepare key acquisition documents. The program office is primarily responsible for conducting market research, defining requirements in a document such as a statement of work, developing cost estimates, and developing a written acquisition plan, if required. The program office also obtains reviews and approvals as necessary from program leadership for the documents prepared.
2. Procurement request: The program office submits a formal request to acquire services, generally known as the request for contract package, which can include a requirements document, a cost estimate, and an acquisition plan, if required. At this point, contracting and program officials work together to revise and refine these key planning documents as necessary, until the request for contract package is complete. The contracting officer, using the information submitted by the program office, considers the appropriate contract type and determines how competition requirements will be met. For awards that are expected to have limited or no competition, depending on the proposed cost, the agency or component competition advocate reviews and approves the key documents.
3. Solicitation: The contracting officer develops the solicitation, a document to request bids or proposals from contractors. The agency’s legal team and other stakeholders identified in agency or component policies may review the solicitation.
Once appropriate reviews have been completed, the contracting officer publishes the solicitation, ending the acquisition planning process. Written acquisition plans, requirements development, cost estimating, incorporating lessons learned, and allowing sufficient time to conduct acquisition planning are several important elements of successful acquisition planning. The FAR directs agency heads to establish acquisition planning procedures, including those related to the selected elements described in table 1. We have previously reported that agencies have faced challenges with many of these elements of acquisition planning—requirements development, cost estimating, incorporating lessons learned, and allowing sufficient time to conduct acquisition planning. Table 2 describes illustrative findings from some of our prior work in these areas. In addition, we have sustained bid protests in part because agencies did not conduct adequate acquisition planning before awarding contracts on a sole-source basis, as the following examples illustrate: In 2005, we found that the Air Force initially attempted to place its requirement under an environmental services contract that, on its face, did not include within its scope the agency’s bilingual-bicultural advisor requirement. This obvious error constituted a lack of advance planning, which compromised the agency’s ability to obtain any meaningful competition and directly resulted in the sole-source award. In 2005, we also found that DHS’s Customs and Border Protection did not properly justify an $11.5 million sole-source bridge contract and failed to engage in reasonable advance acquisition planning by not taking any steps to seek out other available sources, in spite of knowing many months in advance about a likely need. HHS, DHS, NASA, and USAID established policies that set different requirements and levels of oversight for acquisition planning to balance oversight with time and administrative burden.
In particular, HHS, DHS, and NASA each require written acquisition plans that align closely with the elements defined in the FAR. USAID requires some documentation of acquisition planning, but, unlike the other agencies we reviewed, it does not require written acquisition plans for individual contracts. Guidance at all four agencies states that cost estimates and requirements documents should be prepared during acquisition planning, and DHS and NASA guidance includes the consideration of lessons learned from previous contracts as part of acquisition planning. In addition, agencies have set different requirements for oversight, including who must review and approve acquisition planning documents. HHS, DHS, and NASA have implemented—in different ways—FAR requirements related to written acquisition plans. Written acquisition plans, in general, discuss the acquisition process, identify the milestones at which decisions should be made, and serve as road maps for implementing these decisions. Plans must address all the technical, business, management, and other significant considerations that will control an acquisition. HHS, DHS, and NASA have set different dollar thresholds for when written acquisition plans are required and provided for exceptions to those requirements for certain contracts, such as utility services available from only one source. Despite the varying levels of contract award activity and the different dollar thresholds for written acquisition plans across the three agencies, more than 80 percent of the dollars awarded on services contracts in fiscal year 2010 were above the written acquisition plan thresholds. Procurement officials from these three agencies explained that they established these thresholds to balance oversight with time and administrative burden.
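The threshold logic described above can be sketched as a simple lookup. This is an illustrative sketch only, not agency policy text: the dollar figures come from this report's later discussion of table 3 (a $500,000 threshold at HHS, $10 million at DHS and NASA, and no threshold for individual contracts at USAID), the function and variable names are our own, and the regulatory exceptions (such as utility services available from only one source) are omitted.

```python
# Illustrative written-acquisition-plan thresholds drawn from this
# report; exceptions and component-level variations are omitted.
THRESHOLDS = {
    "HHS": 500_000,
    "DHS": 10_000_000,
    "NASA": 10_000_000,
    # USAID sets no threshold for individual contracts.
}


def requires_written_plan(agency: str, contract_value: float) -> bool:
    """Return True if a written acquisition plan would be required."""
    threshold = THRESHOLDS.get(agency)
    if threshold is None:
        return False
    return contract_value >= threshold
```

For example, under these assumed figures a $4.5 million DHS contract would fall below the threshold, consistent with the DHS anecdote described below, while a $600,000 HHS contract would require a written plan.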
USAID, on the other hand, has not established: a dollar threshold at which written acquisition plans for individual contracts are required; guidance on the contents and format of written acquisition plans; or procedures for review and approval of written acquisition plans. See table 3 for written acquisition plan requirements by agency. For the three agencies that require written acquisition plans, policies and guidance for the contents of those plans align closely with the elements described in the FAR. For instance, these agencies require written plans to include an acquisition background and objectives, which details a statement of need, cost goals, delivery requirements, trade-offs, and risks. Agencies also require acquisition plans to include a plan of action, which details prospective sources, source-selection procedures, and logistics and security considerations. Program and contracting offices generally share responsibility for preparation of written acquisition plans. Contracting officials told us they find written acquisition plans to be valuable roadmaps to help ensure thorough planning. In two of our selected contracts at DHS and NASA, written plans were prepared even though they were not required. A DHS program official told us that he completed a written acquisition plan because he was inexperienced in working with contracts. He reviewed the requirements in the FAR and developed a written plan on his own initiative to ensure he thoroughly completed planning, even though the contract was valued at $4.5 million—below the $10 million threshold. In addition, components at NASA and HHS told us they expanded the use of written acquisition plans beyond agency requirements. For example: NASA requires written acquisition plans for contracts valued at $10 million and above, and its component, Johnson Space Center, requires a short form acquisition plan for contracts valued between $5 million and $10 million.
Procurement officials explained that the short form improves documentation of acquisition planning and serves as a training aid for less experienced staff. HHS requires written acquisition plans for contracts valued at $500,000 and above, and almost half of Centers for Disease Control and Prevention’s contract awards over the current simplified acquisition threshold of $150,000 fell below the HHS threshold of $500,000 in fiscal year 2010. As of this year, Centers for Disease Control and Prevention requires written plans for all contracts over $150,000, unless covered by a regulatory exception. A procurement official explained that contract team leaders initiated lowering the threshold, noting that reviewing these actions allows the procurement office a more comprehensive look at overall acquisition planning, as well as the ability to better plan for more small business participation, use existing Centers for Disease Control and Prevention contracts, and engage departmental strategic sourcing acquisition vehicles. USAID has not established a dollar threshold or content and format guidance for written acquisition plans for individual contracts. USAID does require some documentation of acquisition planning, including documents used for annual acquisition forecasting and program planning. USAID requires implementation plans for its programs and foreign assistance objectives that should describe plans for competition or waivers of competition and expected completion dates. Implementation plans are unlike written acquisition plans at the other agencies we reviewed; they are not required to include statements of need, cost goals, or source-selection procedures, among other things. Implementation plans are not prepared for individual contracts; rather, they include multiple types of obligations, including contracts, grants, cooperative agreements, and government-to-government agreements.
USAID Office of Acquisition and Assistance officials told us these plans are required regardless of dollar value. However, the policies are not clear about what is required to be documented in implementation plans, the format of the documentation, or who is to perform these tasks. In addition, USAID programs develop activity approval documents that describe funded activities and may include multiple procurement instruments. USAID policy does not require contracting officers to use activity approval documents as part of planning for a specific contract. Lastly, USAID develops milestone plans for individual contracts using its procurement data system. None of the six contract files we reviewed at USAID—with award values between $1.5 million and $750 million—contained a written acquisition plan. USAID contracting and program officials told us that clearer guidance about requirements for written acquisition plans would be useful. For example, a contracting officer involved in a $3.2 million operational support contract told us it would be very helpful if USAID would implement more specific acquisition planning formats, similar to ones he provided to program officials when he worked at another federal agency. USAID officials said the agency plans to review the benefits of more consolidated guidance and documentation requirements for acquisition planning. The four agencies we reviewed have guidance stating that requirements documents and cost estimates should be prepared during acquisition planning, and DHS and NASA guidance includes incorporating lessons learned from previous contracts. Requirements documents are generally part of a request for contract at all four agencies or their components. Requirements documents should define requirements clearly and concisely, identifying specific work to be accomplished. They define the responsibilities of the government and the contractor and provide objective measures to monitor the work performed.
They can be, for example, a statement of work, statement of objectives, or performance work statement. HHS and DHS generally require cost estimates before solicitation. NASA requires written acquisition plans to include cost estimates, and one component we reviewed requires cost estimates whether or not a written acquisition plan is prepared. HHS, DHS, and NASA components we reviewed have guidance available to help program officials prepare cost estimates. USAID requires cost estimates for programs. Cost estimates record the government’s assessment of a contract’s most probable cost and can be used to make requirements trade-offs in the acquisition planning process. Following acquisition planning, the cost estimate can be used to check the reasonableness of potential contractors’ proposals and negotiate prices. DHS and NASA guidance includes the consideration of lessons learned in acquisition planning, but HHS and USAID have not established specific procedures or requirements that lessons learned be considered as part of acquisition planning. NASA encourages incorporation of lessons learned into directives, standards, and requirements, among other aspects of acquisitions. In particular, source evaluation boards are encouraged to document lessons learned, which may include aspects of acquisition planning, and provide them to procurement office leadership. NASA guidance further recommends discussions of these lessons learned at planning meetings for subsequent contracts. After we discussed our preliminary findings with DHS officials, DHS revised its acquisition planning guidance in June 2011 to require written acquisition plans to include a discussion of how lessons learned from previous acquisitions impact the current acquisition, or provide a rationale if historical information was not reviewed. HHS and the NASA components we reviewed require that written acquisition plans include an acquisition history.
These histories may simply describe specific characteristics of previous related contracts, including contract type, total cost, and contractor. None of the agencies require that an acquisition history include knowledge gained from previous contracts or potential issues that should be addressed in a new contract. In addition, none of the agencies have procedures in place to assure that the contracting officer reviews the acquisition history when written plans are not required to be prepared. Agencies’ requirements for who must review and approve acquisition planning documents vary, particularly for written acquisition plans. For instance, written plans for contracts above certain dollar thresholds at DHS and NASA require headquarters-level review, and plans for contracts below those thresholds are reviewed at the component level. At HHS, DHS, and NASA, information on estimated costs is reviewed as part of the review of written acquisition plans. Table 4 describes the written acquisition plan review requirements at HHS, DHS, and NASA. Because USAID does not require written acquisition plans for individual contracts, there are no review and approval requirements. Agencies have different policies for reviewing requirements documents, specifically which stakeholders should be involved and whether program leadership should approve requirements. The agencies’ processes for stakeholder involvement and reviewing requirements follow: HHS: The head of the sponsoring program office must conduct a thorough technical review of the requirements document that is attached to the written acquisition plan. DHS: In developing requirements, acquisition planners should consult with appropriate representatives from contracting, legal, fiscal, small business, environmental, logistics, privacy, security, and other functional subject matter experts, as needed. 
NASA: Program managers and technical authorities for Engineering, Safety, and Mission Assurance, and Health and Medical must review requirements documents. USAID: The program official responsible for a specific activity drafts the requirements document as part of the request for contract. USAID procurement officials explained that requirements documents usually undergo multiple rounds of editing as the contracting office prepares the solicitation. In addition to the review and approval processes of specific acquisition planning documents, two agencies have processes to review selected proposed contracts before solicitations are published. HHS: At National Institutes of Health, a Board of Contract Awards is to conduct pre-solicitation reviews of a sample of about 10 percent of each institute’s contracts. According to officials at Centers for Disease Control and Prevention, the Office of Policy, Oversight, and Evaluation is to conduct pre-solicitation reviews of all contracts valued at over $5 million. In addition, HHS procurement officials reported they are developing an acquisition oversight framework to conduct headquarters-level review of high-dollar and high-risk contracts at key decision points in the acquisition life cycle. USAID: Contracts with estimated values above $10 million are required to go to a Contract Review Board before solicitation. However, this requirement is sometimes waived, although there are no clear criteria for when waivers are granted. One of our selected contracts at USAID, a multiple award contract valued at up to $750 million, received a “gold star” pre-solicitation review waiver based on the contracting officer’s reputation and experience and, according to officials, because the template used for the solicitation had been used before. 
According to documentation in the contract file, this contract was subject to a bid protest when the published evaluation criteria were not applied because vendors were confused by the requirements in the solicitation. While multiple factors affect whether bid protests are filed and whether they are sustained, denied, or resolved through agency corrective action, several contracting officials told us they consider successful bid protests an indicator of inadequate acquisition planning. Agencies did not always use the acquisition planning process to the fullest extent possible to develop a strong foundation for the contracts we reviewed, but some have identified ways to encourage improved acquisition planning. We found that important planning steps were not performed at all, could have been used more fully to improve acquisition planning, or were not documented for future use. (See appendix V for detailed information on the 24 cases we reviewed.) In particular, we found that agencies faced challenges defining their needs, documented cost estimates to varying degrees, and documented lessons learned to a limited extent. We identified several practices agencies use to support program staff with acquisition planning activities, including hiring personnel who specialize in procurement business issues and cost and price analysis, and providing detailed templates to assist in preparing key documents. In five of our selected contracts at three agencies, programs faced challenges defining their needs in the acquisition planning phase, in some cases resulting in delays in awarding contracts. Four of these contracts were time-and-materials or cost-reimbursable, which are riskier contract types for the government. For the fifth contract, NASA incorporated into acquisition planning known challenges defining its needs, specifically the possibility of future requirements changes.
Well-defined requirements are critical to ensuring clear communication about what the government needs from the contractor providing services. Program and contracting officials at the four agencies we reviewed noted that this can be a challenging part of acquisition planning and is a shared responsibility between program and contracting officials. Program officials must ensure that they have determined exactly what they need to acquire, have incorporated input from stakeholders, and have made trade-offs to ensure affordability. Contracting officials must ensure that the stated requirements are clear and consistent with law and regulation. In four of our selected contracts, agency requirements were difficult to define and, in some cases, changed after acquisition planning ended. For a $13.6 million follow-on contract at DHS, the program manager responsible for developing requirements during acquisition planning overestimated the level of advertising services needed to support recruitment efforts without coordinating with program leadership. The assistant commissioner of human resources later determined that less advertising support was really needed and approved approximately half the requested funding. It took several months for the program to finalize the support required, resulting in amendments to the published solicitation after the acquisition planning phase ended, and delaying contract award by 3 months. For an $18.7 million contract at USAID, the program official said that it was challenging to incorporate the needs of multiple stakeholders in areas outside her area of responsibility, and to forecast their demand for the services over a 5-year period. Sixteen months after the contract was awarded, the agency had to increase the contract’s ceiling by $10 million—an increase of over 50 percent—due to greater-than-anticipated demand for services. 
For a set of follow-on contracts awarded in 2009 valued at $750 million, USAID had a 10-year history of difficulty predicting growth in demand for anticorruption program services. Beginning in 1999, three previous sets of contracts for these services reached their cost ceilings quickly and required new contracts before their planned expiration. For a $3.2 million contract at USAID, the contracting officer told us the program had a difficult time determining program and operational support requirements because program staff members were turning over during a change in presidential administration. He noted that there were a number of unknowns during acquisition planning and it was not possible to estimate the level of support required, so the agency awarded a time-and-materials contract. According to the contracting officer, the agency did not prepare a justification to use this contract type, as required by the FAR. In one case, NASA incorporated the possibility of future changes to requirements into its acquisition planning, although the program decisions driving these changes would not be made until after planning was completed. The written acquisition plan for the $180 million contract for selected contract services related to the International Space Station notes in its risk assessment that the retirement of the space shuttle created a challenge to defining requirements specifically enough to use an entirely firm-fixed-price contract. NASA modified the contract 1 year after award to incorporate tasks being transferred from other programs, including the ending space shuttle program, as anticipated. For our selected contracts, agencies frequently did not fully use the cost estimating process to inform acquisition planning. We have previously reported that a well-documented cost estimate is supported by detailed documentation that describes how it was derived, capturing the source of the data used and the assumptions underlying the estimate. 
The 24 contract files we reviewed had varying levels of documentation for cost estimates prepared during acquisition planning. Specifically, 8 of the contracts fully documented cost estimates and the rationale behind them, 14 of the remaining contracts only partially documented the rationale for the cost estimates, and 2 contracts did not document cost estimates prepared during acquisition planning. (See figure 2.) In acquisition planning, documentation of estimated costs, typically prepared by the program office, ensures that contracting officials can understand the basis for the estimate and how to use the estimate in later steps of the acquisition process. It is unclear what information was available to USAID contracting officers during acquisition planning in the two contracts without documented cost estimates. In many cases at all four agencies, the program office did not document the rationale for estimated costs—including sources of underlying data and assumptions—limiting the ability of the contracting office to evaluate the reliability of estimates and reducing opportunities to improve estimates for future contracts. In addition, not fully documenting cost estimates limits information sharing. While contracting officials told us they have informal conversations with program officials about the rationales for estimated costs, if these conversations are not documented, the information cannot be carried forward to provide insights for any subsequent contract. This is particularly important given the frequent staff turnover in the acquisition workforce: In 8 of the 16 cases we reviewed for which a cost estimate was either not documented at all or not fully documented, either the program official or contracting official involved in acquisition planning could not be reached because they had left that office. For instance, DHS did not document the sources or assumptions for an $11 million public service campaign follow-on contract. 
Because the contracting officer involved in acquisition planning left the agency, DHS could not identify a contracting official who was familiar with the planning for that contract. As a result, a future contracting officer or program staff developing the cost estimate for this recurring need will not have this information. Documenting the rationale for cost estimates is particularly important to help ensure the information is available when planning for follow-on contracts. Of the 16 selected contracts we reviewed for which cost estimates were not fully documented, 11 were follow-on contracts. Program officials’ knowledge of how to develop a cost estimate varied. USAID does not have guidance on when and how to use cost estimates for individual contracts. For three of the six cases we reviewed at USAID, the program officials we spoke to had limited knowledge about how and when to complete cost estimates. As a result, contracting officials were not in a strong position to use one of the tools available to help make a determination of fair and reasonable costs or prices, as the following examples describe. A USAID program official responsible for acquisition planning for one contract said that she did not feel knowledgeable enough to prepare a cost estimate on her own, she did not receive sufficient assistance from the contracting office, and she was not aware of any guides or resources to help her complete a cost estimate. Further, the program official said she communicated with and received inconsistent guidance from 12 to 15 different contracting personnel during the course of acquisition planning. This $18.7 million contract was later modified to increase its total value by more than 50 percent.
Another USAID program official, assigned to plan for a $1.5 million contract, described her efforts to prepare a cost estimate as “flying blind” because, at the time, she did not understand how the cost estimate related to the acquisition process, and she did not know that the cost estimate needed to be completed within certain time frames. Contracting officials’ views differed about the importance of developing accurate cost estimates during acquisition planning. Several contracting officials at HHS said they did not think cost estimates were as important during acquisition planning, noting that they rely heavily on market competition after solicitation to establish a fair and reasonable price. Although competition can aid in establishing fair and reasonable prices, the extent of competition varies in contracts. According to a procurement official at DHS, an accurate cost estimate developed during acquisition planning—before vendors propose prices—provides a more realistic basis for assessing whether any of the offers or bids received are within an acceptable range. Moreover, by delaying attention to cost estimating until after acquisition planning is completed, agencies may be limited in their use of estimates for planning purposes other than for establishing fair and reasonable costs or prices. By rigorously estimating costs during acquisition planning, agencies may be better positioned to assess whether they can afford the services proposed and make trade-offs to better distinguish between needs and wants. For instance, in one case we reviewed at HHS, a program official told us she used the cost estimating process to communicate what level of services the program could afford to purchase given its budget limitations. Because the program official responsible for planning had a clear understanding of estimated costs, she was able to work with her office to narrow requirements to only the highest priority elements. 
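One use of a rigorous estimate described above is as a sanity check on offered prices: a DHS procurement official noted that an estimate developed before vendors propose prices provides a realistic basis for assessing whether offers fall within an acceptable range. A minimal sketch of that check follows; it is illustrative only, not an agency procedure, and the 25 percent tolerance band is an assumed figure for demonstration, not a regulatory standard.

```python
# Illustrative sketch: flag which offers fall within an assumed
# tolerance band around the independent government cost estimate.
def offers_within_range(estimate: float, offers: list[float],
                        tolerance: float = 0.25) -> list[float]:
    """Return the offers within +/- tolerance of the estimate."""
    low = estimate * (1 - tolerance)
    high = estimate * (1 + tolerance)
    return [offer for offer in offers if low <= offer <= high]
```

For instance, against a $1,000,000 estimate with the assumed band, a $900,000 offer would be flagged as within range while $700,000 and $1,400,000 offers would merit further scrutiny before being judged fair and reasonable.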
Agencies documented lessons learned in a quarter of the contracts we reviewed. However, in other cases where agencies did not document lessons learned, they may have missed opportunities to improve acquisition planning based on previously acquired knowledge and experience. Contracting officials from several agencies told us that while they address known major issues encountered in previous contracts, lessons learned are considered to the extent that time allows. Acquisition planning is required for all acquisitions, including follow-on acquisitions. Of our 24 selected contracts, 17 were follow-on awards to previous contracts for similar services, which could have informed acquisition planning. Contracting officials explained that follow-on contracts are frequently “cookie cutters” of the previous contract with very few changes, as the following examples show. For a $29 million NASA follow-on contract for facility maintenance and operation, the program official involved did not consider the preparation for this contract to be acquisition planning because there were so few changes from the previous contract. For a follow-on contract for advertising services to promote emergency preparedness at DHS, program officials told us that they did not identify or incorporate lessons learned from the previous contract during acquisition planning for the current contract. A written acquisition plan is one opportunity when program and contracting officials could document important lessons and other considerations for future use, though this was not required at the agencies that use written acquisition plans. Of the 12 follow-on contracts for which written acquisition plans were prepared, 6 of the written plans from HHS, DHS, and NASA included information about lessons learned from the previous contracts. 
For example, the written acquisition plan for a $375 million disaster response contract at DHS discussed that a new strategy would be employed in the current contract because the ordering process used under the previous contract hindered rapid response in an emergency. For the other six follow-on contracts, HHS, DHS, and NASA did not document lessons learned in written acquisition plans. However, program and contracting officials told us that knowledge gained in the previous contract was incorporated in some of these cases. For instance: For a $2.5 million educational marketing follow-on contract at HHS, program officials explained that they had experienced issues with obtaining invoices in a timely manner during the predecessor contract, which led them to use an incentive fee arrangement in the current contract. This issue is not documented in the discussion of the acquisition history or the selection of cost type in the written acquisition plan for the contract. For a $125 million set of follow-on ordering agreements for background investigations at DHS, a program official said that key lessons learned included the type of contract vehicle used and consistency among multiple contractors’ requirements documents. These issues are not documented in the written acquisition plan. Due to staff turnover, this type of institutional knowledge is lost if not documented. In 11 of the 17 follow-on contracts we reviewed, either the program or contracting official involved in acquisition planning was no longer available to provide information about the process at the time of our review. In one case we reviewed at HHS, neither the contracting official nor the program official involved in acquisition planning for the $210 million contract for management and technical consulting services was still at the agency. An HHS contracting official told us they maintained a running list of issues to address in follow-on contracts to ensure lessons learned are not lost.
In addition, because the contract file contained significant documentation of the early planning process, we were able to readily understand the decisions they had made and the lessons they learned. We identified several practices agencies use to support program staff with acquisition planning activities, including hiring personnel who specialize in procurement business issues and cost and price analysis. For instance, both DHS components we reviewed have hired business specialists who focus on procurement issues to assist program offices with acquisition planning tasks, which alleviates the burden on contracting officials. For a $125 million contract at Customs and Border Protection, program officials obtained significant assistance from the business specialist group in developing the cost estimate and requirements document. In this instance, the cost estimate was well documented, the requirements document was clear, and the requirements have not changed since the contract was awarded. Procurement officials at Federal Emergency Management Agency said awareness of their acquisition business specialists has been raised by conducting “road shows” within the contracting organization and individual meetings with key decision makers on the program side. As a result, these specialists have been used increasingly in recent months, and some program offices have provided office space so they can work side-by-side with program staff. In addition to these business specialists, procurement officials said Federal Emergency Management Agency has also hired 10 full-time permanent employees to aid in planning, providing acquisition guidance and consulting support to program offices. A contracting official told us that some centers at HHS’s Centers for Disease Control and Prevention have similar support specialists in their business services units and that this support helps technically-minded scientists in the program offices with the procurement process. 
In addition, contracting and program officials at Customs and Border Protection, Johnson Space Center, and National Institutes of Health noted the value of having in-house cost and budget specialists to aid program officials in developing cost estimates. For example, a Customs and Border Protection contracting official noted that program staff for a $125 million background investigations contract had access to an in-house cost and price analysis group to obtain assistance with developing a cost estimate. Other advisory offices also assist program staff in developing acquisition planning documents. For example, NASA’s Johnson Space Center has a Source Evaluation Board Office, which officials said plays a support role for the program and contracting offices during acquisition planning in addition to the subsequent source evaluation process. Agencies have tools for program staff to use in developing cost estimates. The components we reviewed at HHS, DHS, and NASA have established guides for program staff that make clear when and how to complete a cost estimate. At HHS and USAID, contracting officials said they shared informal templates and sample cost estimates among themselves for use in assisting program officials. Several contracting officials we spoke to said they had either developed their own cost estimation templates or had templates they routinely provided to program officials as a clear model. USAID procurement executives noted that more training in developing cost estimates would be useful for their program officials. Contracting officials told us that they provide support when program officials do not have acquisition planning experience, but the contracting workforce has limited capacity to assist programs with planning activities given their workload demands and workforce constraints. At NASA’s Johnson Space Center, contracting officials are co-located with program offices to encourage frequent interactions throughout the acquisition lifecycle. 
However, contracting officials at all four agencies told us they have many competing demands, such as planning for higher priority contracts and awarding and administering contracts. In one case at HHS, program officials submitted their request for a follow-on contract for logistics and meeting support services to contracting officials nearly 3 months before the previous contract was to expire, but contracting officials did not respond due to workload. The contract was awarded under noncompetitive procedures for $4.1 million and 6 months later than planned, requiring an extension to the previous contract—which had been awarded under competitive procedures. In two other cases, one at USAID and one at DHS, program officials told us they had to substantially rework certain acquisition planning steps due to turnover in the contracting office. To incorporate lessons learned more broadly across organizations, procurement officials at HHS, DHS, and NASA components noted that they disseminate issues and best practices that arise across their organizations. For instance, at both HHS components we reviewed, procurement officials collect and post lists of substantive issues that arise from their contract review process and bid protest decisions via intranet or newsletter. According to one contracting official, this mechanism may help inform contracting officers of typical pitfalls in acquisition planning. Similarly, at NASA’s Goddard Space Flight Center and Johnson Space Center, procurement staff members document substantive issues that arise in the Source Evaluation Board process for use in future and related contracts. In addition, Federal Emergency Management Agency procurement officials told us they maintain guides for the most important hurricane-related contracts to ensure that lessons learned are tracked and continually applied to help ensure a quick response during a disaster. 
Some agency components have also taken steps to encourage early acquisition planning, including instituting annual consultations about anticipated contracts and reminder systems about expiring contracts. Officials at Centers for Disease Control and Prevention described processes linking the initiation of planning for individual contracts into annual processes for strategic organizational acquisition planning, including meetings between programs and contracting officials at the beginning of each fiscal year. Further, Federal Emergency Management Agency has recently implemented a policy that reserves the option for the Chief Financial Officer to set aside an acquisition’s approved funding for other requirements when programs do not meet deadlines intended to ensure timely contract award. Most agency components have established expected time frames for the last phase of acquisition planning—beginning when the program and contracting offices finalize a request for contract package. However, none of the agency components have measured and described in guidance the time needed for program offices to develop and obtain approvals of key acquisition planning documents—including statements of work, cost estimates, and written acquisition plans, if required—during the pre-solicitation phase, which serves as the foundation for the acquisition process. Agencies have also not measured the time needed during the procurement request phase to finalize these documents in collaboration with contracting offices. Most agency components in our review have measured and established guidance about expected time frames for the last phase of the acquisition planning process—the solicitation phase, which starts when the request for contract package is complete—and the contract award phase. 
These expected time frames, known as contracting lead times, consider variability such as level of competition, estimated contract value, and commercial availability; serve as typical internal deadlines for contracting offices; and provide program offices with information about contract processing times (see table 5). Contracting lead times established in guidance varied greatly among the agency components in our review. For instance, contracting lead times in Federal Emergency Management Agency’s guidance varied from 30 days for orders under existing agreements and contracts to 210 days for new non-commercial contracts that have an estimated value of $50 million or more. Similarly, contracting lead times in Goddard Space Flight Center’s guidance ranged from 17 days for certain contract actions under $25,000 using simplified procedures to nearly 300 days for competitive contracts over $50 million. Johnson Space Center has not established contracting lead times, but officials have established general time frames for steps in the contract award phase. At Customs and Border Protection, officials noted that they have measured the time frames needed to establish contracting lead times and are currently working to implement them in guidance. According to agency component officials, contracting lead times were developed in a variety of ways, including compiling historical data of procurements, experience gained from past procurements, information gathered through acquisition working groups, and benchmarking with other agencies. Agency components in our review have not measured or incorporated into their guidance the time needed for activities performed in the pre- solicitation phase of acquisition planning—when program officials develop key acquisition planning documents—or the procurement request phase when these documents are revised and completed in collaboration with contracting officials. 
The time needed for pre-solicitation activities varies depending on the complexity and dollar value of the contract. The pre-solicitation phase of acquisition planning serves as the foundation of the acquisition process (see figure 3): It is when program offices establish the need for a contract; develop key acquisition documents such as the requirements document, the cost estimate, and, if required, the acquisition plan; and obtain the necessary review and approvals, before submitting a request for contract to the contracting office. Based on discussions with program officials and the contract documents we reviewed, the average pre-solicitation phase accounted for roughly half of the total time estimated for acquisition planning activities in our selected cases. Unlike the other agency components we reviewed, Johnson Space Center has measured the time needed for pre-solicitation activities as part of an effort to streamline its acquisition processes, but has yet to establish these time frames in guidance. We found that the time needed to complete pre-solicitation activities for our selected contracts varied widely from less than 1 month to more than 2 years and depended on factors such as complexity of the requirement, political sensitivity, and funding. This variability is similar to the variability agencies have measured for the last phases in establishing contracting lead times, as illustrated in these examples. For an $18 million HHS contract to obtain information technology support, developing key acquisition documents and other pre-solicitation activities took about 27 months. Program officials noted that the pre-solicitation phase was lengthy because the requirements for this contract were complex and the requirements document had to be refined several times by agency stakeholders. For a $125 million DHS agreement to provide background investigation services, agency officials said that pre-solicitation activities took about 8 months to complete. 
According to agency officials and acquisition planning documents, the contract was politically sensitive because the contract supported increased hiring of personnel who require security clearances to meet congressional mandates. Additionally, given the cost, complexity, and sensitivity of the contract, program officials were required to obtain additional review and approvals from their agency’s chief counsel and head of the procurement activity. Pre-solicitation activities for a $421,435 HHS contract to provide biosafety laboratory support took about 1 month to complete. A program official explained that she received funding late in the fiscal year and had limited time to conduct pre-solicitation activities. The program official noted that the requirements for this contract were complex and she would have wanted at least twice as long to complete the process. We have previously reported that contracting officials stated that program officials were often insufficiently aware of the amount of time needed to complete acquisition planning, including properly defining requirements, which may have hindered opportunities to increase competition. For a DHS contract we reviewed valued at up to $375 million to provide disaster relief, the program manager noted that he had not known how long reviews of the written acquisition plan by DHS headquarters would take. Because the program office did not factor in enough time for this review process, among other steps, award of the contract was delayed by about 2 months. To avoid a gap in services, DHS awarded a bridge contract that extended the length of the original contract. Sound acquisition planning is important to establishing a strong foundation for successful outcomes for the billions of dollars civilian agencies spend annually acquiring services. 
Key acquisition planning elements—including written acquisition plans, requirements development, cost estimating, and incorporating lessons learned—are critical to the process, as is allowing sufficient time to conduct acquisition planning. Other than USAID, the agencies we reviewed currently require written acquisition plans that align closely with the elements described in the FAR, and agency policy and contracting officials acknowledge the benefits such plans provide, including helping to clearly define requirements, understand costs, and carry forward any lessons learned. Still, agencies did not always take full advantage of the acquisition planning process to develop a strong foundation for the acquisitions we reviewed. In particular, cost estimating and incorporating lessons learned from previous contracts are not always viewed as important elements of the acquisition planning process. Moreover, agencies varied in how they documented rationales for cost estimates prepared during acquisition planning and any lessons learned, which limits the availability of such information for future use. In addition, agencies have acknowledged the value of developing contracting lead times—how long it takes to move from a complete procurement request to contract award—that recognize the variability of time required for different types of contracts. However, how long the acquisition planning activities leading up to a complete procurement request take is not as well defined. Without a clear understanding of the time frames needed for the acquisition planning process, program officials may not know when to start planning or how long the planning will take, increasing the likelihood of poorly prepared documents and contract delays. 
Better insights into when acquisition planning should begin would help allow sufficient time to carry out the important acquisition planning activities that are designed to facilitate more successful outcomes. To promote improved acquisition planning, we recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to establish requirements specifying dollar thresholds for when written plans should be developed, documented, and approved; establish standard acquisition plan formats that align with the FAR; and develop templates and guidance to help program officials prepare reliable cost estimates. To take fuller advantage of important acquisition planning elements and to ensure that information is available for future use, we recommend that the Secretaries of HHS and DHS and the Administrators of NASA and USAID direct their procurement offices to ensure that agency and component guidance clearly define the role of cost estimating and incorporating lessons learned in acquisition planning, as well as specific requirements for what should be included in documenting these elements in the contract file. To allow sufficient time for acquisition planning, we recommend that the Secretaries of HHS and DHS and the Administrators of NASA and USAID direct their components’ procurement offices to collect information about the time frames needed for pre-solicitation acquisition planning activities and use that information to establish time frames for when program officials should begin acquisition planning. We provided a draft of this report to DHS, HHS, NASA, and USAID. DHS, NASA, and USAID provided written comments stating that they concurred with our recommendations to promote improved acquisition planning by taking fuller advantage of important acquisition planning elements, including clearly defining the role of cost estimating and incorporating lessons learned. 
The agencies’ views differed on our recommendation to collect information about time frames needed for the acquisition planning process. The agency comments are discussed below and included in appendixes VI, VII, and VIII, respectively. HHS had no comments on the draft of this report. DHS also provided technical comments, which we incorporated as appropriate. USAID concurred with our recommendation to promote improved acquisition planning. USAID noted in its comments that the agency needs to develop more formal, comprehensive policy and procedures for acquisition planning by specifying dollar thresholds for written acquisition plans and establishing standard acquisition plan formats to fully meet FAR requirements for acquisition planning. USAID also stated that it plans to develop templates and guidance to help program officials prepare reliable cost estimates. DHS, NASA, and USAID all concurred with our recommendation to ensure that agency and component guidance clearly define the role of cost estimating and incorporation of lessons learned in acquisition planning and the associated documentation requirements. USAID did not indicate specific actions the agency will take to implement this recommendation. NASA noted that it plans to require acquisition plans to fully document the rationale for cost estimates. In its comments, DHS described existing guidance and training related to independent government cost estimates and stated its intention to review its regulations and guidance in accordance with our recommendation. In doing so, it is important that guidance define the role of cost estimates specifically for acquisition planning purposes, which could include making affordability and requirements tradeoffs. This role may differ from the purpose of an independent government cost estimate developed later in the acquisition process. 
In addition, DHS, NASA, and USAID agreed that they should define the role of lessons learned in the acquisition planning process, as well as establish documentation requirements. In June 2011, DHS updated its acquisition planning guidance to specifically include the incorporation of lessons learned in acquisition planning discussions. In its comments, NASA stated that it intends to require acquisition plans to include lessons learned from earlier contract actions and steps to mitigate these issues. DHS, NASA, and USAID responses to our recommendation to collect information about the time frames needed for the acquisition planning process to establish time frames for program officials varied. USAID concurred with our recommendation but, in its comments, did not describe specific actions the agency plans to take in response. NASA partially concurred, but did not agree that the procurement offices should establish time frames for program officials’ planning, because the time frames will differ across programs. However, we found that, in 2009, NASA’s Johnson Space Center procurement office was able to analyze the time taken for steps of the pre-solicitation phase, recognizing that this phase historically had the greatest effect on acquisition schedules at the component. DHS did not concur, commenting that it did not believe it is necessary or an efficient use of resources to address the recommendation because existing regulations and policy already state that acquisition planning should begin as soon as the need is identified. DHS noted that it recently updated acquisition planning guidance to emphasize the need to begin acquisition planning early, including paying close attention to procurement administrative lead times and early formation of integrated product teams. We found that program officials need more guidance to have a better understanding of how much time to allow for completing fundamental acquisition planning steps in a high-quality manner. 
Agencies’ procurement administrative lead times begin when a procurement request package has been completed, but agencies have not measured and established in guidance the time frames for the acquisition planning activities that lead up to a complete procurement request package. These early activities include conducting market research; defining requirements; developing the statement of work, cost estimate, procurement request, and written acquisition plan, if required; and obtaining approvals for these documents as necessary. We believe that component procurement offices are best positioned to aggregate information about historical planning time frames, particularly given the variation across contract actions, and provide programs with guidance on how long aspects of the planning process may take. Agency components have been successful in capturing variation in contract characteristics such as type of contract action, level of competition, and estimated value in the contracting lead times they have set for the last phase of acquisition planning, and we believe they can accomplish similar analysis for the variation in the early phases. In response to DHS and NASA comments, we clarified this recommendation to emphasize the importance of establishing time frames for pre-solicitation activities. We are sending copies of this report to interested congressional committees and the Secretaries of HHS and DHS, and the Administrators of NASA and USAID. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions on the matters covered in this report, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. 
The objectives of this review were to assess (1) the extent to which agencies have developed policies and procedures related to acquisition planning; (2) how agencies have carried out acquisition planning in the areas of requirements definition, cost estimates, and lessons learned in selected cases; and (3) the extent to which agencies’ guidance provides time frames for when to begin and how long acquisition planning activities should take. We focused our work on: selected elements of acquisition planning (written acquisition plans, requirements development, cost estimating, incorporation of lessons learned, and guidance for acquisition planning time frames), which we chose because they are critical to the successful planning of a contract; acquisition planning for professional, administrative, and management support services contracts, because these types of services had the highest obligations by civilian agencies in fiscal year 2009 as reported in the government’s procurement database—the Federal Procurement Data System-Next Generation (FPDS-NG); and the four civilian agencies that obligated the most on these types of services in fiscal year 2009: the Department of Health and Human Services (HHS), the Department of Homeland Security (DHS), the National Aeronautics and Space Administration (NASA), and the U.S. Agency for International Development (USAID). We chose to focus on civilian agencies and exclude Department of Defense agencies because GAO had issued a number of reports in recent years that addressed elements of acquisition planning at the Department of Defense. To determine the extent to which agencies have developed policies related to selected elements of acquisition planning, we reviewed FAR provisions in effect in fiscal year 2009 pertaining to acquisition planning and agency regulations and guidance. 
In particular, we compared the agencies’ current policies to the FAR, which prescribes responsibilities of agency heads, or their designees, related to acquisition planning. We also interviewed agency procurement executives and policy officials, component-level procurement policy officials, and the competition advocate at the agency or component levels to determine the agency rationale for establishing agency policy and procedures and obtain agency opinion on whether current policy meets acquisition planning requirements in the FAR. To determine how agencies have carried out acquisition planning in the areas of requirements definition, cost estimating, and lessons learned in selected cases, we reviewed a selection of contracts at each agency and interviewed cognizant contracting officials and program officials, as mentioned above. We specifically inquired about effective practices with regard to the selected elements of acquisition planning. To identify contracts, we selected the two components with the highest obligations on professional, administrative, and management support services during fiscal year 2009 from each of the four selected agencies. Our selection of contracts included 24 contracts, 6 from each agency. The specific agency component locations in our review were as follows: HHS’s Centers for Disease Control and Prevention and National Institutes of Health; DHS’s Customs and Border Protection and Federal Emergency Management Agency; NASA’s Goddard Space Flight Center and Johnson Space Center; and USAID’s Washington D.C. Office of Acquisition and Assistance, including services provided for the Democracy, Conflict, and Humanitarian Assistance Division, as well as agencywide. 
Within each selected component we selected three contracts awarded by that component’s contracting office in fiscal year 2008 and fiscal year 2009. Our criteria for contract selection included three tiers based on dollar value and review thresholds set at the time of award by the agencies: (1) one contract with a written acquisition plan that required agency headquarters level review; (2) one contract that required a written acquisition plan but did not require agency headquarters review; and (3) one contract that did not meet the threshold for a written acquisition plan but was above the simplified acquisition threshold, which was $100,000 at the time of our review. We also included a mix of new and follow-on requirements as well as competed and noncompeted contracts. The reliability of the FPDS-NG data retrieved for our contract selection was assessed and validated using source information (contract identification numbers, contract value, the extent of competition, and the award date) from the selected contract documents. Our results from the analysis of these contracts are not generalizable because we did not use a representative, random sample, though they do illustrate examples of impediments to effective acquisition planning and factors contributing to successful acquisition planning. Because of the limited number of professional, administrative, and management support services contracts in each of our tiers of selection, we were not able to use random sampling for selection. To determine the extent to which agencies’ guidance provides time frames for when to begin and how long acquisition planning activities should take, we reviewed policies and procedures on timing at each agency and its components. 
To determine the time needed for acquisition planning for the selected contracts, we interviewed the cognizant contracting officials and program officials involved when planning began, when available; reviewed contract file documents for key acquisition planning milestones; and calculated the time between these dates. We conducted this performance audit from May 2010 to August 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix VIII: Comments from the U.S. Agency for International Development John P. Hutton Director, Acquisition and Sourcing Management U.S. Government Accountability Office Washington, DC 20548 I am pleased to provide the U.S. Agency for International Development’s formal response to the GAO draft report entitled: “Acquisition Planning: Opportunities to Build Strong Foundations for Better Services Contracts” (GAO-11-672). The enclosed USAID comments are provided for incorporation with this letter as an appendix to the final report. Thank you for the opportunity to respond to the GAO draft report and for the courtesies extended by your staff in the conduct of this audit review. 
“Acquisition Planning: Opportunities to Build Strong Foundations for Better Services Contracts” (GAO-11-672) Recommendation 1: We recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to establish requirements specifying dollar thresholds for when written plans should be developed, documented, and approved. Management Comments: USAID concurs with GAO’s recommendation to specify thresholds and clarify procedures for written acquisition plans. USAID has historically captured some elements of acquisition planning in the development of program/project planning and approval documents, but the different policy sections in the Automated Directive System (ADS) and the USAID Acquisition Regulation (AIDAR) are not fully adequate to meet Federal Acquisition Regulations (FAR) requirements for acquisition planning. We recognize that the Agency would benefit from more formal, comprehensive policy and procedures that address these FAR requirements. Recommendation 2: We recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to establish standard acquisition plan formats that align with the FAR. Management Comments: USAID concurs. USAID is currently in the process of developing a consistent format that will be utilized in FY2012. Recommendation 3: We recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to develop templates and guidance to help program officials prepare reliable cost estimates. Management Comments: USAID concurs. USAID includes as part of its training of Contracting Officer’s Technical Representatives (COTRs) the requirements of undertaking independent cost estimates. USAID has established a number of areas for enhancements under USAID Forward and Implementation and Procurement Reform, and will ensure that enhanced templates and guidance on cost estimates are undertaken to have a consistent source of information for program officials. 
Recommendation 4: We recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to take fuller advantage of important acquisition planning elements and to ensure that information is available for future use. Management Comments: USAID concurs. As a part of Implementation and Procurement Reform, as mentioned above, and as a result of an internal USAID Business Process Review in 2010, USAID has identified several important elements of acquisition planning on which we plan to place further emphasis in Agency guidance. USAID is pursuing implementation of recommendations in these areas for incorporation in acquisition planning for FY 2012. Recommendation 5: We recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to ensure that agency and component guidance clearly define the role of cost estimating and the incorporation of lessons learned in acquisition planning, as well as specific requirements for what should be included in documentation of these elements in the contract file. Management Comments: USAID concurs with GAO’s recommendation to clearly define roles, incorporate lessons learned, and document acquisition planning elements. USAID views effective acquisition planning as critical to timely and successful launching of development assistance programs throughout the world. Some elements of acquisition planning are part of USAID’s activity authorization policies, some other elements are contained within different Agency policy references, and there are remaining aspects that need to be clarified as part of our policies to reflect all required elements. USAID will also ensure that its guidance addresses requirements for file documentation. 
Recommendation 6: We recommend that the Administrator of USAID direct the Office of Acquisition and Assistance to allow sufficient time for acquisition planning and direct its components’ procurement offices to collect information about the timeframes needed for the acquisition planning process—including the pre-solicitation phase—and use the information to establish timeframes for when program officials should begin acquisition planning.

Management Comments: USAID concurs with GAO’s recommendation to direct timeframes needed for acquisition planning, include the pre-solicitation phase, and establish timeframes for program officials. As noted above, USAID undertook a Business Process Review in 2010 to gather specific timeframes experienced in USAID/Washington and field Missions for competitive acquisition and assistance. USAID recognizes that effective procurement planning also includes the pre-solicitation phase. A key element within Implementation and Procurement Reform is to enhance competitive processes which include the ability to undertake more robust competitions within reduced timelines while concurrently ensuring compliance with applicable regulations.

In addition to the contact named above, the following individuals made key contributions to this report: Penny Berrier (Assistant Director); Morgan Delaney Ramaker; Alexandra Dew Silva; Meghan Hardy; Julia Kennon; Anne McDonough-Hughes; Amy Moran Lowe; Ramzi Nemo; Kenneth Patton; Guisseli Reyes-Turnell; Roxanna Sun; Ann Marie Udale; and Alyssa Weir.

Civilian agencies obligated over $135 billion in fiscal year 2010 for services--80 percent of total civilian spending on contracts. Services acquisitions have suffered from inadequate planning, which can put budget, schedule, and quality at risk.
GAO was asked to examine how civilian agencies conduct acquisition planning for services contracts and assessed (1) the extent to which agencies have developed policies and procedures for acquisition planning, (2) how agencies have carried out acquisition planning, and (3) the extent to which agencies' guidance identifies when to begin and how long acquisition planning should take. GAO reviewed acquisition planning at the four civilian agencies with the most spending on professional, administrative, and management support services. GAO also reviewed Federal Acquisition Regulation (FAR) provisions, agency regulations and guidance, and 24 selected contracts, and interviewed agency officials.

The Departments of Health and Human Services (HHS) and Homeland Security (DHS), the National Aeronautics and Space Administration (NASA), and the U.S. Agency for International Development (USAID) have established policies that set different requirements and levels of oversight for acquisition planning. Acquisition planning elements--including written acquisition plans, requirements development, cost estimation, and incorporation of lessons learned--are critical to the process. HHS, DHS, and NASA require written acquisition plans that align closely with elements defined in the FAR--USAID does not. All four agencies' guidance includes preparing cost estimates and requirements documents during acquisition planning, and DHS and NASA guidance includes the consideration of lessons learned from previous contracts in acquisition planning. Agencies' requirements for oversight vary, including who reviews and approves acquisition planning documents. Agencies did not always take full advantage of acquisition planning to develop a strong foundation for the contracts GAO reviewed, but some have identified ways to encourage improved acquisition planning. Key planning steps were not performed, could have been better used to improve acquisition planning, or were not documented for future use.
In particular, GAO found that agencies faced challenges defining their needs, documented cost estimates to varying degrees, and documented lessons learned to a limited extent. GAO identified several practices agencies use to support program staff with acquisition planning, including hiring personnel who specialize in procurement business issues and cost and price analysis and providing templates to assist in preparing key documents. Most agency components have established time frames for the last phase of acquisition planning--beginning when the program and contracting offices finalize a request for contract package. None of the agency components, however, has measured or provided guidance on the time frames needed for program offices to develop and obtain approvals of key acquisition planning documents during the pre-solicitation phase--which serves as the foundation for the acquisition process--or to finalize these documents in collaboration with contracting offices during the procurement request phase.

GAO recommends that USAID establish requirements for written acquisition plans and that each agency enhance guidance for cost estimating and lessons learned; DHS, NASA, and USAID concurred. GAO also recommends that each agency establish time frames for pre-solicitation activities. NASA and USAID generally concurred but DHS did not, noting that existing policy states that planning should begin as soon as a need is identified. GAO clarified its recommendation to emphasize pre-solicitation planning activities. HHS had no comments.
In the summer of 2005, DNDO tested ASPs from 10 vendors to evaluate their performance capabilities and to select the ASPs that warranted further development and possible procurement. In July 2006, DNDO awarded contracts totaling $1.2 billion over five years to three vendors—Raytheon, Canberra, and Thermo. The Department of Homeland Security Appropriations Act for Fiscal Year 2007 states that “none of the funds appropriated … shall be obligated for full scale procurement of monitors until the Secretary of Homeland Security has certified … that a significant increase in operational effectiveness will be achieved.” Congress enacted a similar requirement in DHS’s fiscal year 2008 appropriation. In hopes of obtaining secretarial certification by June 2007, DNDO tested ASPs at several sites, including the Nevada Test Site (NTS), the New York Container Terminal, the Pacific Northwest National Laboratory, and five ports of entry. DNDO conducted the tests at NTS in two phases. DNDO stated that the Phase 1 tests, performed in February-March 2007, attempted to estimate the performance capabilities of the ASPs with a high degree of statistical confidence. DNDO intended the Phase 1 tests to support the Secretary’s decision on whether to certify the ASPs for full-scale production, while the Phase 3 tests were intended to help improve the computer algorithms that the ASPs use to identify the specific radiological or nuclear source inside a container. On September 18, 2007, we testified that DNDO’s Phase 1 tests did not constitute an objective and rigorous assessment of the ASPs’ capabilities because, among other things, DNDO conducted preliminary test runs on source materials to be used in the tests, and then allowed the vendors to adjust their ASPs to specifically identify the source materials to be tested. We testified that in our view, DNDO’s approach biased the tests in ways that enhanced the apparent performance of the ASPs.
We also noted that the tests did not attempt to estimate the limits of ASPs’ detection abilities—an important concern to those who will use them, such as CBP officers. During that hearing, DNDO’s Director stated that, contrary to statements DNDO made in its final Phase 3 test plan, DNDO would use the Phase 3 test results to help support the Secretary’s decision on whether to certify the ASPs for full-scale production. Subsequently, DNDO delayed its anticipated date for secretarial certification to the fall of 2008 in order to conduct additional performance tests and field tests during fiscal year 2008. Because the limitations of the Phase 3 test results are not properly discussed in the Phase 3 test report, the report does not accurately portray the results from the Phase 3 tests and could be misleading. The purpose of the Phase 3 tests was to identify areas in which the ASPs needed improvement. While some of the Phase 3 report addresses this purpose, much of the report compares the performance of the ASPs with each other or with PVTs and RIIDs during the tests. However, because DNDO performed each test a limited number of times, the data it used to make some of these comparisons provide little information about the actual capabilities of the ASPs. The narrative of the report often presents each test result as a single value, although, because of the limited number of test runs, the results would be more thoroughly and appropriately stated as a range of potential values. In addition, the report’s narrative sometimes omits key facts that conflict with DNDO’s analysis of the results. For example, the Phase 3 report states: “The primary goals of the testing were to provide information by giving the ASP systems an opportunity to perform against a wider range of radionuclides, shielding, and cargo configurations.
To allow for more substantial variation among test cases, the test plan called for a smaller number of trials over a larger range of objects and configurations rather than running sufficient number of repetitions for each individual test case to provide high statistical significance results.” (p. 2)

“In these comparisons … results are … small sample sizes induce large uncertainties in the estimates of the probabilities being compared [for example: n ≤ 5].” (p. 9)

“For … at 2 mph, the ASP system performances are statistically indistinguishable.” (p. 13)

“For shielded …, performance for all three systems is statistically indistinguishable with probabilities of correct alarm varying approximately between 0.84 and 0.92.” (p. 11)

The statements imply that the performances of the ASPs were similar because the results were “statistically indistinguishable.” However, given the small number of test runs, it is impossible to determine with a high degree of confidence whether or not the performances were actually similar. Yet the report’s text describing specific results rarely qualifies the results by stating that the test was run only a few times or that the results should not be considered conclusive of the ASPs’ capabilities.

“For the source configurations tested, the ASP systems have equal performance down to the lowest source activity tested.” (p. iii)

“The PVT systems display lower performance than the ASP systems for … and … sources.” (p. iv)

“When comparing the ASP systems … mph identification metric with the … RIID measurements…, it is observed that the RIID performance is poor compared to the ASP systems.” (p. iv)

“For bare … only, … the probability of correct identification varied between 0.34 and 0.5.” (p. 14)

However, because each test involved a small number of test runs, these percentages provide little information about the performance capabilities of the ASPs.
In fact, because of the small number of test runs, DNDO can only estimate that each ASP can correctly identify the type of source material within a range of values. The fewer the number of test runs, the larger the range. For example, for the ASP that correctly identified the source material 34 percent of the time during the tests, the report text omits the fact that, as shown on an accompanying graph, DNDO can only estimate that the ASP would be able to correctly identify the source between about 10 percent and 65 percent of the time. By stating that the ASP identified the source 34 percent of the time without clarifying that the results came from only a few test runs, the report’s text makes the test results seem more precise than they really are. Similarly, for the ASP that correctly identified the source material 50 percent of the time during the tests, the small number of test runs means that DNDO can only estimate that the ASP would be able to correctly identify the source material between about 15 percent and 85 percent of the time. This range is too wide to have much value in determining how well the ASP may perform in the real world. Although these ranges are clearly shown on the report’s graphs, they are omitted in the report’s descriptions and interpretations of the test results. Similarly, DNDO’s analysis comparing the performances of ASPs and RIIDs fails to consider the uncertainties created by the tests’ small sample sizes. The report states that the RIIDs “performance is poor compared to the ASP systems.” For example, during the tests, one vendor’s ASP correctly identified one type of source material about 50 percent of the time, while the RIIDs correctly identified the same type of source material about 10 percent of the time. However, given the small number of test runs, DNDO cannot be confident that these percentages precisely indicate the performance capabilities of the ASPs and RIIDs. 
On the basis of the tests, DNDO can only infer that the ASPs’ and RIIDs’ performance capabilities lie somewhere within a relatively large range of values. As these ranges are illustrated in the report’s graphs, it appears that the performance of the ASPs and RIIDs may not be statistically different for three of the five types of source materials DNDO tested. This does not necessarily mean that the ASPs and RIIDs performed equally well; rather, DNDO did not conduct each test enough times to determine that the superior performance of the ASPs over the RIIDs reflects the capabilities of the ASPs rather than mere chance. The report also states: “The ASP systems demonstrated detection limits equal to or better than those of any of the PVT systems as configured during testing.” (p. iii) However, the report’s executive summary fails to note that because DNDO used only one type of source material, the results are largely specific to that particular source material and would not necessarily apply to other specific source materials. In fact, for other types of source material, the report shows several instances in which the PVTs were apparently able to detect other types of source materials better than the ASPs. Moreover, other Phase 3 tests showed that simply moving the source material referred to in the above quote to another place in the container affected the relative performances of the ASPs and PVTs. The report further asserts that “the ASP systems have the ability to identify sources when placed inside almost all but the thickest shielding configuration tested.” (p. iv) Again, however, DNDO fails to note in its report that, as it explained in its Phase 3 test plan, all the shielding used in the Phase 3 tests represented “light shielding.” The report also fails to state how many specific sources the ASPs could correctly identify or how frequently the ASPs could identify them.
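The uncertainty ranges discussed above follow from estimating a probability out of a handful of yes/no trials. The sketch below (Python, standard library only) computes the exact Clopper-Pearson confidence interval for an observed success rate; the run counts used here are illustrative assumptions, since the report's actual numbers of runs per test are not reproduced in this discussion.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, conf=0.95):
    """Exact two-sided confidence interval for a binomial proportion,
    found by bisection on the binomial tail probabilities."""
    alpha = 1 - conf

    def bisect(too_small):
        lo, hi = 0.0, 1.0
        for _ in range(60):          # 2**-60 precision is more than ample
            mid = (lo + hi) / 2
            if too_small(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: largest p with P(X >= k | p) <= alpha/2.
    lower = 0.0 if k == 0 else bisect(
        lambda p: 1 - binom_cdf(k - 1, n, p) <= alpha / 2)
    # Upper bound: smallest p with P(X <= k | p) <= alpha/2.
    upper = 1.0 if k == n else bisect(
        lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

# Five successes in five runs: even a perfect score only bounds the
# true detection probability far below 100 percent.
lo, hi = clopper_pearson(5, 5)
print(f"5/5 runs:   95% CI = ({lo:.2f}, {hi:.2f})")   # lower bound ~0.48

# A hypothetical larger campaign shrinks the interval dramatically.
lo, hi = clopper_pearson(50, 100)
print(f"50/100 runs: 95% CI = ({lo:.2f}, {hi:.2f})")
```

The report's statement that a perfect five-run result bounds performance "no worse than about 60 percent" corresponds to a one-sided bound at a somewhat lower confidence level; the exact figure depends on the confidence level chosen, but the qualitative point is the same: a few runs leave a wide interval.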
In our view, it is not appropriate to use the Phase 3 test report in determining whether the ASPs represent a significant improvement over currently deployed radiation equipment because the limited number of test runs does not support many of the comparisons of ASP performance made in the Phase 3 report. As noted, DNDO’s use of a small number of runs for each test means that DNDO can only be certain that the ASP can correctly identify or detect a source material over a broad range of possible values rather than at a specific rate. This is true even if the ASP was successful every time a test was conducted. For example, as noted in the Phase 3 test report, if the ASP correctly identified a source material 100 percent of the time, but the test was run only five times, the most DNDO can estimate is that the ASP should be able to correctly identify the source no worse than about 60 percent of the time. The Phase 3 test results do not help to determine an ASP’s “true” level of performance because DNDO did not design the tests to assess ASP performance with a high degree of statistical confidence. In the Phase 3 test plan, DNDO was very clear that it had intended the tests to help develop a conduct of operations for secondary screenings and to cover a larger array of source materials and test scenarios than were conducted in the Phase 1 tests. “The Phase 3 test campaign was not originally intended to support the Secretarial Certification of the ASP systems. However, the test results provide relevant insights into important aspects of system performance and should be taken into consideration by the Secretary of Homeland Security in making his (ASP procurement) decision.” (p.iii) It is important to note that DNDO does not elaborate in the test report as to what the “relevant insights” are or how they relate to Secretarial certification. 
DNDO also does not explain why those insights would be relevant considering that, as stated in the Phase 3 test plan, the results from the tests lack a high degree of statistical significance. Finally, it should be noted that when the Director of DNDO testified in September 2007 that the Phase 3 test results would help inform the Secretary’s recommendation, he also acknowledged that the Phase 3 test report had not yet been prepared. The special tests were performed by experts from Sandia National Laboratories who were not part of the Phase 1 or Phase 3 tests. The special tests were designed to examine potential vulnerabilities associated with either the ASPs or the Phase 1 or Phase 3 test plan and vulnerabilities in DNDO’s test processes. Conducting this type of test would allow the ASP vendors the opportunity to make improvements to their systems in order to address weaknesses revealed during the special tests. Like the Phase 3 tests, the special tests used a small number of runs for each testing scenario. Because of the small number of runs, the test results do not support estimating the probability of detection or identification with a high confidence level, making it difficult to use the results of the special tests to support a certification decision by the Secretary of DHS. On this point, the special test report acknowledges that “the special tests were never intended to demonstrate conformity of the systems against specific performance requirements.” From the special tests, SNL drew the following conclusions: 1. “Areas for software and hardware improvement have been identified based on system performance issues observed for the versions of the ASP hardware and software tested at the NTS during Winter 2007.” 2. “For the data made available to us, the reported results … are consistent with the underlying collected data—indicating that the DNDO ASP system assessment was not biased.” 3.
“Recommendations to improve the testing rigor have been made…(noting that) their implementation must be balanced against other test campaign impacts (such as) cost, schedule, availability of resources, etc.,” and 4. “Based on our limited tests we observed no data suggesting that the ASP system performance was inappropriately manipulated by either the vendors or the test team.” Overall, the special test report appears to accurately describe the purpose, limitations, and results of the special tests. In our view, DNDO should consider SNL’s views as it proceeds with additional ASP testing in 2008. It is important to note, however, that when Sandia concludes that the “ASP system assessment was not biased” and that it “observed no data suggesting that the ASP system performance was inappropriately manipulated,” Sandia is referring to the data derived from the ASP tests. However, SNL does not comment on the biased testing methods we identified during the Phase 1 ASP tests at the Nevada Test Site in 2007. Specifically, when we stated in September 2007 that DNDO’s Phase 1 tests were biased, we were referring to DNDO’s test methods, which (1) used the same test sources and shielding materials during preliminary runs as were used during the actual tests and (2) did not use standard CBP operating procedures in testing the RIIDs. Preventing the material for a nuclear weapon or a radiological dispersal device from being smuggled into the United States remains a key national security priority. Testing radiation detection equipment to understand its capabilities and limitations is an important part of preventing nuclear smuggling. The Phase 3 and special tests were part of DNDO’s 2007 effort to test ASPs in order to identify areas for further development of these devices. The Phase 3 test results are relevant to DNDO’s original objective for the Phase 3 tests—to identify areas in which the ASPs needed improvement.
However, because of the limitations of the tests, DNDO should not be using the test results as indicators of the overall performance capabilities of the ASPs. Moreover, in the Phase 3 report, DNDO presented and analyzed the test results without fully disclosing key limitations of the tests, which is not consistent with basic principles of statistics and data analysis. Because of this, many of the report’s presentations and comparisons of performance among ASPs and between ASPs and PVTs are not well supported and are potentially misleading. Regarding the special tests, SNL notes in its test report that it designed the tests to identify areas where the ASPs need to improve—not to measure the ASPs’ performance against requirements. Overall, because of the limitations discussed in this report, it is our view that neither the Phase 3 tests nor the special tests should serve as a basis for the Secretary of DHS to certify whether the ASPs represent “a significant increase in operational effectiveness” over current radiation detection equipment. To ensure that the limitations of the Phase 3 test results, and future ASP test results, are clearly understood, we are making the following four recommendations. We recommend that the Secretary of DHS use the results of the Phase 3 tests solely for the purposes for which they were intended—to identify areas needing improvement, not as a justification for certifying whether the ASPs warrant full-scale production. However, if the Secretary of DHS intends to consider the results of the Phase 3 tests, along with other test data and information, in making a certification decision regarding ASPs, then we recommend that the Secretary take the following actions: Direct the Director of DNDO to revise and clarify the Phase 3 test report to more fully disclose and articulate the limitations present in the Phase 3 tests—particularly the limitations associated with making comparisons between detection systems from a small number of test runs.
Clearly state which “relevant insights into important aspects of system performance” from the Phase 3 report are factored into any decision regarding the certification that ASPs demonstrate a significant increase in operational effectiveness. Finally, we further recommend that since there are several phases of additional ASP testing currently ongoing, the Secretary should direct the Director of DNDO take steps to ensure that any limitations associated with ongoing testing are properly disclosed when the results of the current testing are reported. We provided DHS with a draft of this report for its review and comment. Its written comments are presented in appendix I. The department stated that it strongly disagreed with our draft report and two of our report’s recommendations. DHS agreed to take some action on a third recommendation and offered no comments on a fourth recommendation. The department stated several reasons for its disagreement. First, DHS cites narrative from the Phase 3 report explaining that the Phase 3 tests employed fewer test runs per test so as “to allow for more substantial variation among test cases” rather than “running sufficient number of repetitions … to provide high statistical significance results.” Thus, in DHS’s view, our assertion that the report does not “fully disclose” the Phase 3 tests’ limitations concerning the statistical significance of the results is incorrect. Our draft report recognizes DHS’s description of how the Phase 3 tests were conducted. Our concern is that although DNDO cited the limited statistical significance of the test results at the outset of the Phase 3 report, DNDO’s findings do not reflect this limitation. For example, as we note in our draft report, the Phase 3 report repeatedly states that the performances of the various ASPs were “statistically indistinguishable” even though DNDO did not perform enough test runs to estimate with a high degree of confidence whether the performances were actually similar. 
DNDO presents many of its findings as conclusive statements about ASP performance despite the fact that the Phase 3 test design cannot support these findings. Second, the department commented that the Phase 3 test report clearly and succinctly stated another limitation of the test methodology— specifically, that the tests were not designed to be a precise indicator of ASP performance. In the department’s view, noting this limitation throughout the Phase 3 report would have been unwieldy. We did not expect DNDO to repeat this limitation throughout the report. However, as suggested in our report, the Phase 3 report should accurately reflect the test results without portraying the results as being more precise than they really are. Using an example from the Phase 3 report, if DNDO notes that an ASP successfully identified a specific source material 34 percent of the time during the tests, it should also indicate that, given the small number of test runs, DNDO can only estimate that the ASP would be able to correctly identify the specific source material between 10 and 65 percent of the time. However, no such discussion of the wide range of potential results is included in the report’s narrative. In our view, presenting the test results without sufficient narrative about the tests’ limitations is potentially misleading. Third, the department stated that although the Phase 3 tests were not intended to support the DHS Secretary’s certification decision, DHS decided that it needed to consider all available test results in making this decision. DHS further commented that not doing so would subject it to criticism of “cherry-picking” the results. In response, although we acknowledge the need to consider all available test results, we believe they should be considered in their appropriate context, and that test results do not all carry the same weight. 
In our view, test results with a high degree of statistical significance (i.e., unlikely to be the result of chance) should be considered a better indicator of ASP performance than those with a lower level of statistical significance. Because the Phase 3 tests involved only 1-10 runs per test, very few of the results can be generalized as reliable estimates of how the ASPs perform and thus potentially provide questionable evidence for the certification process. We also note that, in its comments, DHS did not address what Phase 3 results or important insights it considered to be relevant to Secretarial certification. Fourth, DHS comments that our draft report failed “to acknowledge the depth and breadth of the ASP test campaign, which is by far the most comprehensive test campaign ever conducted on radiation detection equipment.” However, our report describes previous ASP testing and some of our prior findings about that testing, and notes that ASP testing continues in 2008. More importantly, the extent of testing is not the issue at hand. In our view, regardless of how many tests are performed, the tests must employ sound, unbiased methodologies and DNDO should draw and present conclusions from the test results in ways that accurately and fully reflect the data and disclose their limitations. DHS stated that it disagreed with our recommendations to (1) use the Phase 3 tests only to identify areas needing improvement and not as a basis for certification and (2) revise and clarify the Phase 3 report to reflect the limitations in the tests’ methodology and results. It did not offer comments on our recommendation that the Secretary clearly state what relevant insights from the Phase 3 report are factored into any certification decision. We continue to believe that the Phase 3 tests should be used only for the intended purpose stated in the test plan—to improve the software of ASPs.
We would also note that our draft report recommends that DNDO revise and clarify the Phase 3 test report only if it includes Phase 3 test results among the data that will be presented to the Secretary prior to his decision on certification. If DNDO chooses to use the Phase 3 test results for certification, we believe it is important that DNDO explain what test results are relevant to certification and why the value of those results is not mitigated by the limitations associated with the Phase 3 tests’ small sample sizes. In response to our last recommendation, the department stated that it has taken and will continue to take steps to ensure that it properly discloses any limitations associated with ongoing testing as it moves toward secretarial certification of the ASPs. As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of DHS and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

In addition to the contact named above, Ned Woodward, Assistant Director; James Ashley, Nabajyoti Barkakati, Carol Kolarik, Omari Norman, Alison O’Neill, Anna Maria Ortiz, Daren Sweeney, Michelle Treistman, and Gene Wisnoski made significant contributions to this report.

The Department of Homeland Security’s (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling.
Radiation detection portal monitors are part of the U.S. defense against such threats. In 2007, Congress required that funds for new advanced spectroscopic portal (ASP) monitors could not be spent until the Secretary of DHS certified that these machines represented a significant increase in operational effectiveness over currently deployed portal monitors. In addition to other tests, DNDO conducted the Phase 3 tests on ASPs to identify areas in which the ASPs needed improvement. GAO was asked to assess (1) the degree to which the Phase 3 test report accurately depicts the test results and (2) the appropriateness of using the Phase 3 test results to determine whether ASPs represent a significant improvement over current radiation detection equipment. GAO also agreed to provide its observations on special tests conducted by Sandia National Laboratories (SNL). Because the limitations of the Phase 3 test results are not appropriately stated in the Phase 3 test report, the report does not accurately depict the results from the tests and could potentially be misleading. In the Phase 3 tests, DNDO performed a limited number of test runs. Because of this, the test results provide little information about the actual performance capabilities of the ASPs. The report often presents each test result as a single value; but considering the limited number of test runs, the results would be more appropriately stated as a range of potential values. For example, the report narrative states in one instance that an ASP could identify a source material during a test 50 percent of the time. However, the narrative does not disclose that, given the limited number of test runs, DNDO can only estimate that the ASP would correctly identify the source from about 15 percent to about 85 percent of the time--a result that lacks the precision implied by DNDO's narrative. DNDO's reporting of the test results in this manner makes them appear more conclusive and precise than they really are. 
The purpose of the Phase 3 tests was to conduct a limited number of test runs in order to identify areas in which the ASP software needed improvement. While aspects of the Phase 3 report address this purpose, the preponderance of the report goes beyond the test's original purpose and makes comparisons of the performance of the ASPs with one another or with currently deployed portal monitors. In GAO's view, it is not appropriate to use the Phase 3 test report in determining whether the ASPs represent a significant improvement over currently deployed radiation equipment because the limited number of test runs does not support many of the comparisons of ASP performance made in the Phase 3 report. As the report shows, if an ASP can identify a source material every time during a test, but the test is run only five times, the only thing that can be inferred with a high level of statistical confidence is that the probability of identification is no less than about 60 percent. Although DNDO states in the Phase 3 test report that the results will be relevant to the Secretary's certification that the ASPs represent a significant increase in operational effectiveness, it does not clarify in what ways the results will be relevant. Furthermore, DNDO offers no explanation as to why it changed its view from the Phase 3 test plan, which states that these tests will not be used to support a certification decision. The goal of SNL's special tests was, among other things, to identify potential vulnerabilities in the ASPs by using different test scenarios from those that DNDO planned to use in other ASP tests. SNL concluded in its test report that the ASPs' software and hardware can be improved and that rigor could be added to DNDO's testing methods.
Furthermore, the report acknowledges that (1) a specific objective of the testing at the Nevada Test Site was to refine and improve the ASP's performance and (2) the special tests were never intended to demonstrate conformity of the ASPs with specific performance requirements. In GAO's view, these statements appear to accurately describe the purpose, limitations, and results of the special tests. |
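The statistical point in the Phase 3 discussion above, that a handful of test runs supports only a wide range of plausible detection probabilities, follows from exact binomial confidence bounds. As an illustrative sketch only (this is not GAO's or DNDO's actual analysis, and the excerpt does not state which interval method or how many runs produced the quoted figures), a Clopper-Pearson interval can be computed with nothing beyond the Python standard library:

```python
from math import comb


def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def _solve(n, k_cdf, target):
    # binom_cdf(k_cdf, n, p) decreases as p grows, so bisect to find
    # the p at which the CDF crosses `target`.
    lo, hi = 0.0, 1.0
    for _ in range(60):  # 60 halvings: far more precision than needed
        mid = (lo + hi) / 2
        if binom_cdf(k_cdf, n, mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2


def clopper_pearson(k, n, conf=0.95):
    """Exact two-sided confidence interval for a binomial proportion,
    given k successes in n trials."""
    alpha = 1 - conf
    lower = 0.0 if k == 0 else _solve(n, k - 1, 1 - alpha / 2)
    upper = 1.0 if k == n else _solve(n, k, alpha / 2)
    return lower, upper


def one_sided_lower(k, n, conf=0.95):
    """Exact one-sided lower confidence bound on the proportion."""
    return 0.0 if k == 0 else _solve(n, k - 1, conf)
```

For example, five detections in five runs give `one_sided_lower(5, 5, 0.90)` of roughly 0.63, in line with the report's point that perfect performance over five runs supports only a lower bound of about 60 percent; and `clopper_pearson(5, 10)` spans roughly 19 to 81 percent, illustrating how small samples yield the wide ranges the report describes.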
Believing DOD’s efforts to reduce its infrastructure, including the size of its headquarters activities, lagged behind cuts in operational forces, Congress directed DOD to reduce positions in OSD, including Washington Headquarters Services and other defense support activities, by 25 percent from fiscal year 1994 levels by the end of fiscal year 1999. Furthermore, Congress directed DOD to reduce the number of positions in all DOD headquarters activities by 25 percent from fiscal year 1997 levels by the end of fiscal year 2002. Noting that it had already reduced personnel in headquarters activities starting in fiscal year 1992, DOD believed these cuts would create an unreasonable burden on personnel who perform essential headquarters functions. DOD asked Congress to repeal this provision in March 1998. However, Congress did not rescind it. DOD noted that the congressionally mandated 25-percent reduction, when combined with the Department’s previously programmed reductions, would reduce these components by over 40 percent for the fiscal year 1992-2000 time period. In October 1997, we reported that management headquarters personnel levels and costs were significantly higher than DOD had reported. During fiscal years 1985-96, DOD reported steady decreases in its management headquarters and headquarters support personnel—a 31-percent decline from about 77,000 to 53,000. However, these data did not include personnel at most of DOD’s noncombat organizations that are subordinate to management headquarters. In our review of selected subordinate organizations, nearly three of every four were primarily performing management or headquarters support functions and should have been reported to Congress by DOD. We recommended that the Secretary of Defense revise the directive on management headquarters and headquarters support activities to expand its coverage and simplify its criteria.
In the November 1997 Defense Reform Initiatives (DRI) report, the Secretary of Defense directed OSD to reduce the number of personnel. To achieve reductions, the Secretary directed that OSD eliminate redundancy and obsolete functions, consolidate related functions, and transfer operational and program management functions to other DOD organizations. In addition, the Secretary directed that OSD’s “hidden staff”—components that directly support OSD but were not included as part of OSD’s formal organizational structure or reported in its personnel strengths—be absorbed into OSD. Finally, the Secretary directed the military departments and their major commands, defense agencies, defense field activities, joint staff, and unified commands to reduce the number of headquarters positions. OSD’s Administrative Instruction 53, “Temporary Staff Augmentations,” states that temporary staff may be used for emergencies, for unforeseen temporary requirements or workload surges, or for jobs for which the skills are not otherwise available within the organization. Temporary staff are not to be used to perform continuing office functions. DOD plans to cut 1,373 positions in OSD and its support activities by the end of fiscal year 1999. These reductions would be 2 percentage points more than the 25-percent cut mandated in the National Defense Authorization Act for Fiscal Year 1997. Although the number of positions has been reduced, civilian salary costs (in constant dollars) have not decreased. In addition, some service and defense agency personnel assigned to OSD are not counted as part of OSD. DOD’s plan to achieve the cuts is shown in table 1. Appendix I shows the changes by office within OSD. As of the end of fiscal year 1998, DOD had cut 1,123 positions, or 82 percent of the planned reductions—858 positions in OSD and 265 positions in Washington Headquarters Services (see table 2). 
Our analysis indicates that OSD eliminated 402 positions primarily by abolishing vacant positions, reducing the number of overstrength positions, eliminating military positions after the incumbents rotated, and abolishing positions by using early-out incentives. Some positions were also eliminated through organizational changes. For example, the Office of the Under Secretary of Defense for Policy reorganized and eliminated one assistant secretary position as well as several support positions. OSD also reduced the size of its staff by transferring positions to other DOD organizations. For example, in fiscal year 1997, OSD transferred 278 positions from a defense support activity to a DOD field activity. Positions in DOD field activities are not counted as part of OSD. Furthermore, DRI directed that OSD transfer positions that were considered operational or involved in program management to other DOD organizations. In fiscal year 1998, OSD transferred approximately 240 positions to other organizations. The major transfers that occurred during fiscal year 1998 are the following:
- The Office of the Under Secretary of Defense for Personnel and Readiness transferred 48 positions for managing DOD’s TRICARE medical program to the TRICARE management activity, a DOD field activity.
- OSD transferred 47 positions supporting various boards and commissions to the Washington Headquarters Services.
- The Office of the Under Secretary of Defense for Policy transferred 42 positions at the U.S. Mission to the North Atlantic Treaty Organization to the Department of the Army.
- The Office of the Assistant Secretary of Defense for Public Affairs transferred 28 positions that managed the freedom of information and security review programs to the Washington Headquarters Services.
As of the end of fiscal year 1998, DOD had reduced Washington Headquarters Services by a net of 265 positions.
Approximately 300 positions were cut by contracting out the Pentagon cleaning service, and another 130 were cut primarily by eliminating vacant positions and positions vacated through early-out incentives. However, these cuts were partly offset by an increase of 165 positions from functions transferred into Washington Headquarters Services, primarily from OSD. To complete the planned cuts, DOD plans to cut 250 positions (202 in OSD and 48 in Washington Headquarters Services) in fiscal year 1999. To achieve its cuts, OSD plans to eliminate 130 positions and transfer another 72 positions. As of November 1998, OSD had identified 32 of the positions to eliminate and told us it had completed 40 of the planned transfers. Washington Headquarters Services plans to cut 72 positions, which will be partly offset by an increase of 24 positions for new missions as well as the expansion of current missions. As of October 1998, Washington Headquarters Services had cut 19 positions by transferring positions in the executive motor pool and travel function to the Department of the Army. According to Washington Headquarters Services officials, the remaining positions will be eliminated by consolidating contracting offices and through early-out incentives. Although about 1,125 positions were eliminated in or transferred from OSD and Washington Headquarters Services between fiscal year 1994 and 1998, there was no proportional decrease in civilian salaries during this time frame. Our analysis indicates that OSD civilian salary costs increased by $7 million (4 percent), from $172 million in fiscal year 1994 to $179 million in fiscal year 1998 (constant 1998 dollars). Likewise, Washington Headquarters Services civilian salary costs increased by $2 million (2 percent) from $84 million to $86 million during this same period. 
Salary costs did not decline commensurate with the personnel reductions partly because many of the positions eliminated were vacant and annual civilian pay raises exceeded the inflation rate. In addition, OSD incurred some one-time costs associated with incentives to encourage personnel to leave early. For example, DOD paid $1.4 million in such incentives in fiscal year 1998. The military services and defense agencies temporarily assign personnel to OSD who are not counted as part of OSD. While DRI recommended that personnel in the defense support activities and overstrength positions be counted as part of OSD, it did not discuss personnel temporarily assigned to OSD. DOD does not have a central database that identifies such positions or people, known as detailees, but the number could be large. For example, an official from the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence (ASD/C3I) said that 115 people were temporarily assigned to ASD/C3I as of August 1998. The Office of the Under Secretary of Defense for Policy said it had about 26 detailees, and the Office of the Under Secretary of Defense for Acquisition and Technology said it had about 10 detailees. Our review of ASD/C3I information on the 115 detailees’ assignments showed that some of the detailees met the requirements of OSD’s administrative instruction on temporary staff. For example, an ASD/C3I official noted that his office could not recruit temporary staff with the required skills to deal with Year 2000 issues. ASD/C3I was using 20 military service and defense agency people to work on such issues. Another 36 ASD/C3I detailees were either liaisons with the various intelligence agencies or individuals from other agencies on developmental assignments. According to an ASD/C3I official, these positions are permanent within the organization, and personnel occupying these positions change every 2 to 3 years.
The remaining 59 positions being filled by detailees appeared to be a permanent part of ASD/C3I. According to an ASD/C3I official, approximately 30 detailees had been assigned to work on specific projects but did not return to their home organizations when the projects ended. For example, the Defense Information Systems Agency sent 17 people to ASD/C3I to support the corporate information management initiative; however, they did not return when the project was terminated. ASD/C3I officials said they were developing a plan to incorporate the policy-related positions into OSD and return the other positions to their parent organizations by fiscal year 2000. The remainder of the detailees were in ASD/C3I’s Defense Airborne Reconnaissance Office. According to an ASD/C3I official, they plan to downsize the office from 27 to 18 positions and either transfer the function to one of the military departments or make it a permanent part of ASD/C3I. DOD does not have a plan to reduce management headquarters and headquarters support personnel DOD-wide by 25 percent by the end of fiscal year 2002, as required by the National Defense Authorization Act for Fiscal Year 1998. The act requires DOD to cut about 13,300 positions in its headquarters activities. Rather, DOD has plans to reduce the number of headquarters positions by about 5,600, or 11 percent, by the end of fiscal year 2002 (see table 3). In November 1998, the military services proposed establishing a task force to develop alternatives for reducing their headquarters structure. As seen in table 3, the military departments account for about 4,500 of the cuts planned for DOD headquarters activities by the end of fiscal year 2002. (See apps. II, III, and IV for a breakdown by military department.) These cuts were directed primarily in the Quadrennial Defense Review and DRI.
For example, as part of the Quadrennial Defense Review, the Navy planned to reduce its Atlantic and Pacific Fleet headquarters staff by approximately 1,260, or 20 percent, and the Marine Corps planned to reduce its management headquarters by about 200 positions. A Navy official noted that some of the cuts planned in the fleet headquarters are being revised in the fiscal year 2000 budget. On the other hand, the Army and the Air Force had planned to reduce their headquarters by less than the 10 percent required by DRI. As a result, the Air Force had to cut about 1,150 and the Army about 700 additional positions in headquarters to meet the DRI threshold. Both military departments allocated the additional cuts primarily on a fair-share percentage basis across their headquarters activities. The Joint Staff plans to cut 135 positions and the unified commands about 260 positions by the end of fiscal year 2002. The Joint Staff plans to transfer about 75 military positions to the U.S. Strategic Command, return about 40 military positions to the military departments, and eliminate about 20 civilian positions. Finally, the reductions in the defense agencies are a combination of cuts directed in the Quadrennial Defense Review and DRI. In November 1998, the military departments proposed the establishment of a task force, chaired by the Under Secretary of the Army, to develop alternatives for reducing DOD’s headquarters structure. The task force would (1) identify processes and transactions, by functional area, that are significant drivers for the number of headquarters personnel; (2) inventory the number of workyears associated annually with the transactions and processes, by functions; and (3) assess the number of personnel that could be eliminated if transactions and processes were reengineered, automated, outsourced, or canceled. Each service will be allowed to reinvest any personnel and dollar savings from reducing headquarters activities.
The proposal calls for the task force to issue its report to Congress in June 1999. We recommend that the Secretary of Defense determine the number and purpose of all personnel temporarily assigned to OSD by other DOD components. Detailees who do not meet OSD’s requirements for temporary staff should either be counted as OSD personnel or returned to their parent organizations. In comments on a draft of this report (see app. V), DOD concurred with our recommendation and noted that it is developing a system to account for all personnel detailed to OSD from other DOD components. As part of this process, DOD plans to determine the validity and continued need for current detailees. DOD also noted that the report was technically accurate, but believed it needed to provide a more balanced treatment of the Department’s efforts to downsize its headquarters activities. Specifically, DOD said the report does not include (1) the reason it requested relief from the 25-percent reduction required by the National Defense Authorization Act for Fiscal Year 1998 and (2) the principal reason civilian pay costs did not come down between fiscal years 1994 and 1998. According to DOD, it requested relief because headquarters activities have been reduced significantly since fiscal year 1992. We clarified the report to reflect DOD’s position and included information provided by DOD on personnel reductions in management headquarters activities back to fiscal year 1992. Regarding OSD civilian pay costs, DOD said the principal reason they did not decrease between fiscal years 1994 and 1998 was that nearly 200 civilians who worked for OSD, but were attributed to the pay accounts of other DOD elements, were transferred into the OSD civilian pay account in fiscal year 1998. Our analysis of changes in civilian pay accounted for these transfers.
Still, key reasons civilian pay costs did not decline were that many of the positions eliminated were vacant, annual pay raises exceeded the inflation rate, and the department incurred one-time costs associated with incentives to encourage people to leave early. To obtain information on DOD’s plans to achieve the 25-percent reduction in OSD and Washington Headquarters Services positions, we interviewed officials in OSD’s Office of the Director, Administration and Management, and reviewed files that documented the cuts during fiscal years 1995-98 and planned for fiscal year 1999. In addition, we reviewed the Defense Reform Initiative report. We obtained civilian salary costs for OSD, the defense support activities, and Washington Headquarters Services for fiscal years 1994-98 from the Washington Headquarters Services, Directorate for Budget and Finance. To determine how DOD plans to achieve the 25-percent reduction in personnel at all its headquarters activities, we interviewed officials in the Office of the Under Secretary of Defense, Comptroller, and manpower officials in each of the military departments and the Joint Staff. In addition, we identified the position cuts to headquarters activities programmed between fiscal years 1997 and 2002 in the Fiscal Year 1999 Future Years Defense Program. We conducted our work from June to December 1998 in accordance with generally accepted government auditing standards. We are providing copies of this report to other appropriate congressional committees; the Secretaries of Defense, the Air Force, the Army, and the Navy; the Chairman, Joint Chiefs of Staff; and the Director, Office of Management and Budget. We also will provide copies to other interested parties on request. Please call Marvin Casterline, Assistant Director, on (202) 512-9076 if you or your staff have any questions concerning this report. Major contributors to the report were Michael Kennedy, Ronald Leporati, and Justin Bernier.
[Appendix tables omitted: position changes by OSD office and by military department headquarters activity. Table notes: the fiscal year 1994 baseline includes 230 positions in the Acquisition and Technology Defense Support Activity, 300 positions in the Defense Manpower Data Center, 29 positions in the Management Systems Support Office, 39 positions in the Plans and Program Analysis Support Center, and 120 positions in the Intelligence Program Support Group. Figures include consultants, reimbursable details, and the administrative support and assistance program. Boards and commissions were not identified separately until fiscal year 1996.]
| Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts to reduce the size and cost of its headquarters activities, focusing on DOD's: (1) efforts to reduce headquarters positions and associated costs in the Office of the Secretary of Defense (OSD) and Washington Headquarters Services, as required by the National Defense Authorization Act for Fiscal Year 1997; (2) efforts to reduce headquarters positions across DOD as required by the National Defense Authorization Act for Fiscal Year 1998; and (3) reporting of personnel on temporary assignment to OSD from other DOD components. GAO noted that: (1) to comply with the requirement in the National Defense Authorization Act for Fiscal Year 1997, DOD plans to reduce OSD and its support activities by about 1,373 positions, or 27 percent, from its fiscal year (FY) 1994 levels by the end of FY 1999; (2) as of the end of FY 1998, DOD had cut 1,123 positions; (3) the majority of the cuts were based on DOD's November 1997 Defense Reform Initiatives report, which recommended that some offices be reorganized and that operational and program management functions be transferred to other DOD activities; (4) although the positions in OSD and its support activities have been reduced, civilian salary costs have not decreased because many of the positions eliminated were vacant and annual civilian pay raises have exceeded the inflation rate; (5) DOD plans to eliminate the remaining 250 positions in FY 1999; (6) DOD may not be accurately accounting for all personnel assigned to OSD; (7) some personnel temporarily assigned to OSD by other DOD components are functioning more as permanent staff and are not being reported as OSD personnel; (8) DOD has plans to cut about 5,600 positions across its
headquarters activities by the end of FY 2002; (9) this is less than half of 13,300 cuts required by the National Defense Authorization Act for Fiscal Year 1998; (10) according to OSD officials, DOD did not develop plans consistent with the legislation because the Secretary of Defense had sought relief from the 1998 legislative requirement; and (11) however, when Congress did not repeal the provision, the services proposed that a task force be established to develop alternatives for reducing the headquarters' structure. |
Despite spending more than $1 billion annually on the federal food safety system, food safety remains a concern. For example, between May and November 2000, sliced and packaged turkey meat contaminated with Listeria monocytogenes caused 29 individuals in 10 states to become ill. In April and May of this year, imported cantaloupes contaminated with a pathogenic strain of Salmonella were linked to 54 illnesses and 2 deaths in 16 states, and in June six people in California were sickened, two of whom died, from eating oysters contaminated with Vibrio vulnificus. CDC estimates that foodborne diseases cause approximately 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths each year. In medical costs and productivity losses, foodborne illnesses related to five principal pathogens cost the nation about $6.9 billion annually, USDA estimates. Twelve different agencies administer as many as 35 laws that make up the federal food safety system. Two agencies account for most federal food safety spending and regulatory responsibilities: the Food Safety and Inspection Service (FSIS), in USDA, is responsible for the safety of meat, poultry, and processed eggs, while the Food and Drug Administration (FDA), in HHS, is responsible for the safety of most other foods. Other agencies with food safety responsibilities and/or programs include HHS’ Centers for Disease Control and Prevention; USDA’s Agricultural Marketing Service (AMS), Animal and Plant Health Inspection Service (APHIS), Agricultural Research Service (ARS), and Grain Inspection, Packers and Stockyards Administration (GIPSA); the Department of Commerce’s National Marine Fisheries Service; the Department of the Treasury’s U.S. Customs Service and Bureau of Alcohol, Tobacco, and Firearms; the Environmental Protection Agency (EPA); and the Federal Trade Commission. 
Appendix I describes the food safety roles and responsibilities of these 12 agencies and shows each agency’s food safety funding and staffing level for fiscal year 2000. State and local governments also conduct inspection and regulation activities that help ensure the safety of foods produced, processed, or sold within their borders. State and local governments would generally be the first to identify and respond to deliberate acts of food contamination. During the past 25 years, we and other organizations, such as the National Academy of Sciences, have issued reports detailing problems with the federal food safety system and have made numerous recommendations for change. While many of these recommendations have been acted upon, food safety problems persist, largely because food safety responsibilities are still divided among several agencies that continue to operate under different regulatory approaches. The federal regulatory system for food safety did not emerge from a comprehensive design but rather evolved piecemeal, typically in response to particular health threats or economic crises. Addressing one new worry after another, legislators amended old laws and enacted new ones. The resulting organizational and legal patchwork has given responsibility for specific food commodities to different agencies and provided them with significantly different regulatory authorities and responsibilities. The number of agencies involved in regulating a sandwich illustrates the fragmented nature of the current food safety system. Figure 1 shows the federal responsibilities for regulating production and processing of a packaged ham and cheese sandwich and its ingredients. The responsible regulatory agency as well as the frequency with which inspections occur depends on how the sandwich is presented. 
FSIS inspects manufacturers of packaged open-face meat or poultry sandwiches (e.g., those with one slice of bread), but FDA inspects manufacturers of packaged closed-face meat or poultry sandwiches (e.g., those with two slices of bread). According to FSIS officials, the agency lacked the resources to inspect all meat and poultry sandwich manufacturers, so it was decided that FSIS would inspect manufacturers of the less common open-face sandwich, leaving inspection of other sandwich manufacturers to FDA. Although there are no differences in the risks posed by these products, wholesale manufacturers of open-face sandwiches sold in interstate commerce are inspected by FSIS daily, while wholesale manufacturers of closed-face sandwiches sold in interstate commerce are generally inspected by FDA on average once every 5 years. (See app. II for a list of other food products with similar risks that have different inspection frequencies because they are regulated by different agencies.) Because the nation’s food safety system evolved piecemeal over time, the nation has essentially two very different approaches to food safety—one at USDA and the other at FDA—that have led to inefficient use of resources and inconsistencies in oversight and enforcement. These problems, along with ineffective coordination between the agencies, have hampered and continue to impede efforts to address public health concerns associated with existing and emerging food safety risks. The following examples represent some of the problems we identified during our reviews of the nation’s food safety system. Federal food safety expenditures are based on legal requirements, not on risk. As shown in figure 2, funding for ensuring the safety of products is disproportionate to the level of consumption of those products because the frequency of inspection is based not on risk but on the agencies’ legal authority and regulatory approach. 
Likewise, funding for ensuring the safety of products is disproportionate to the percentage of foodborne illnesses linked to those products. For example, to ensure the safety of meat, poultry, and processed egg products in fiscal year 1999, FSIS spent about $712 million to, among other things, inspect more than 6,000 meat, poultry, and egg product establishments and conduct product inspections at 130 import establishments. FSIS’ expenditures reflect its interpretation of federal law as requiring daily inspection of meat and poultry processing plants and its traditional implementation of its statutory inspection mandate through continuous government inspection of every egg products plant and every meat and poultry slaughter plant, including the examination of every carcass slaughtered. These plants account for about 20 percent of federally regulated foods and 15 percent of reported foodborne illnesses. In comparison, FDA, which has responsibility for all foods except meat, poultry, and processed egg products and has no mandated inspection frequencies, spent about $283 million to, among other things, oversee some 57,000 food establishments and 3.7 million imported food entries. These establishments and entries account for about 80 percent of federally regulated foods and 85 percent of reported foodborne illnesses. Federal agencies’ authorities to enforce food safety requirements differ. USDA agencies have the authority to (1) require food firms to register so that they can be inspected, (2) prohibit the use of processing equipment that may potentially contaminate food products, and (3) temporarily detain any suspect foods. Conversely, FDA lacks such authority and is often hindered in its food oversight efforts. For example, both USDA and FDA oversee recalls when foods they regulate are found to be contaminated or adulterated. 
However, if a USDA-regulated company does not voluntarily conduct the recall, USDA can detain the product for up to 20 days while it seeks a court order to seize the food. Because FDA does not have detention authority, it cannot ensure that tainted food is kept out of commerce while it seeks a court-ordered seizure. As another example, while FDA is responsible for overseeing all seafood-processing firms operating in interstate commerce, the agency does not have an effective system to identify the firms subject to regulation because there is no registration requirement for seafood firms. As a result, some firms may not be subjected to FDA oversight, thus increasing the risk of consumers’ contracting a foodborne illness from unsafe seafood. USDA and FDA implementation of the new food safety approach is inconsistent. Since December 1997, both USDA and FDA have implemented a new science-based regulatory approach—the Hazard Analysis and Critical Control Point (HACCP) system—for ensuring the safety of meat, poultry, and seafood. The HACCP system places the primary responsibility on industry, not government inspectors, for identifying and controlling hazards in the production process. However, as we discussed in previous reports, FDA and USDA implemented the HACCP system differently. While USDA reported that in 1999, 96 percent of federally regulated plants were in compliance with the basic HACCP requirements for meat and poultry, FDA reported that less than half of federally regulated seafood firms were in compliance with HACCP requirements. In addition, while USDA collects data on Salmonella contamination to assess the effectiveness of its HACCP system for meat and poultry, FDA does not have similar data for seafood. Without more effective compliance programs and adequate performance data, the benefits of HACCP will not be fully realized. Oversight of imported food is inconsistent and unreliable. 
As we reported in 1998, the meat and poultry acts require that, before a country can export meat and poultry to the United States, FSIS must make a determination that the exporting country’s food safety system provides a level of safety equivalent to the U.S. system. Under the equivalency requirement, FSIS has shifted most of the responsibility for ensuring product safety to the exporting country. The exporting country performs the primary inspection, allowing FSIS to leverage its resources by focusing its reviews on verifying the efficacy of the exporting countries’ systems. In addition, until FSIS approves release of imported meat and poultry products into U.S. commerce, they generally must be kept in an FSIS-registered warehouse. In contrast, FDA lacks the legal authority to require that countries exporting foods to the United States have food safety systems that provide a level of safety equivalent to ours. Without such authority, FDA must rely primarily on its port-of-entry inspections to detect and bar the entry of unsafe imported foods. Such an approach has been widely discredited as resource-intensive and ineffective. In fiscal year 2000, FDA inspections covered about 1 percent of the imported food entries under its jurisdiction. In addition, FDA does not control imported foods or require that they be kept in a registered warehouse prior to FDA approval for release into U.S. commerce. As a result, some adulterated imports that were ultimately refused entry by FDA had already been released into U.S. commerce. For example, in 1998 we reported that in a U.S. Customs Service operation called “Bad Apple,” about 40 percent of the imported foods FDA checked and found in violation of U.S. standards were never redelivered to Customs for disposition. These foods were not destroyed or reexported as required and presumably were released into U.S. commerce. Claims of health benefits for foods may be treated inconsistently by different federal agencies. 
Because three federal agencies are charged with enforcing different statutes, a product’s claim of health benefits might be denied by one agency but allowed by another. FDA, the Federal Trade Commission, and USDA share responsibility for determining which claims regarding health benefits are allowed in labeling and advertising of foods and dietary supplements. FDA has authorized only a limited number of specific health claims for use on product labels. However, the Federal Trade Commission may allow a health claim in an advertisement as long as it meets the requirements of the Federal Trade Commission Act, even if FDA has not approved it for use on a label. Furthermore, USDA has not issued regulations to adopt any of the FDA-approved health claims for use on the products that it regulates, such as pot pies, soups, or prepared meals containing over a certain percentage of meat or poultry. Rather, USDA reviews requests to use a health claim, including those approved by FDA, on a case-by-case basis. Effective enforcement of limits on certain drugs in food-producing animals is hindered by the regulatory system’s fragmented organizational structure. FDA has regulatory responsibility for enforcing animal-drug residue levels in food-producing animals. However, FDA, in conjunction with the states, investigated only 43 to 50 percent of each year’s USDA animal-drug residue referrals made between fiscal years 1996 and 2000. According to FDA officials, the agency lacks the resources to conduct prompt follow-up investigations and does not have an adequate referral assignment and tracking system to ensure that investigations are made in a timely manner. FDA has relied on the states, through contracts and cooperative agreements, to conduct the bulk of the investigations. FDA has the resources to investigate only repeat violators. As a result, animal producers that are not investigated may continue to use animal drugs improperly, putting consumer health at greater risk. 
In the absence of a unified food safety system, federal agencies have attempted to coordinate their efforts to overcome fragmentation and avoid duplication or gaps in coverage. While we believe that interagency coordination is important and should be continued, history has shown that such efforts are difficult to conduct successfully. The following examples represent some of the coordination problems we have found. Fragmented organizational structure poses challenges to U.S. efforts to address barriers to agricultural trade. The organizational structure for food safety complicates U.S. efforts to address foreign sanitary and phytosanitary (SPS) measures. SPS measures are designed to protect humans, animals, or the territory of a country from the spread of a pest or disease, among other things. However, the U.S. Trade Representative and USDA are concerned that some foreign SPS measures may be inconsistent with international trade rules and may unfairly impede the flow of agricultural trade. In 1997, we reported that the federal structure for addressing foreign SPS measures was complex because 12 federal agencies had some responsibility for addressing problems related to SPS measures and that no one agency was directing federal efforts. We found, among other things, that the involvement of multiple agencies with conflicting viewpoints made it difficult to evaluate, prioritize, and develop unified approaches to address such measures. While the U.S. Trade Representative and USDA took some actions to respond to our report, including establishing mechanisms to improve interagency coordination and decision-making, it remains to be seen whether such actions will effectively address the coordination problems over the long run. Different statutory responsibilities may limit the ability of agencies to coordinate successfully. 
As we reported in August 1998, because FDA and FSIS have different statutory responsibilities, important information about animal feed contaminated with dioxin (a suspected carcinogen) and animals that had consumed this feed was not effectively communicated to the food industry. FDA and FSIS worked together to decide on the preferred course of action for handling the contaminated feed and animals, and each agency was responsible for communicating its decisions to producers or processors under its jurisdiction. However, the agencies did not necessarily communicate all required actions to all affected parties. For example, when officials from FDA, the agency responsible for regulating animal feed, met with meat and poultry producers, their primary concern was with the contaminated feed, not with the animals that had consumed it. Thus, they did not necessarily tell these producers about the actions they should take for their affected animals. FSIS, the agency responsible for regulating meat and poultry processors, sent word of dioxin-testing requirements to the processors and trade associations but did not notify meat and poultry producers, over which it has no jurisdiction. The need for extensive coordination may impede prompt resolution of food safety problems. Despite FSIS’ and FDA’s efforts to coordinate on egg safety, more than 10 years have passed since the problem of bacterial contamination of intact shell eggs was first identified, but a comprehensive safety strategy has yet to be implemented. In 1988, for the first time, some intact shell eggs were discovered to be contaminated internally with the pathogenic bacteria Salmonella enteritidis. In 1992, we reported that due to coordination difficulties resulting from the split regulatory structure for eggs, the federal government had not agreed on a unified approach to address this problem. 
In July 1999, we reported that the federal government still had not agreed on a unified approach to address the problem. In July 2000, FDA and FSIS issued a “current thinking” paper identifying actions that would decrease the food safety risks associated with eggs. However, as of September 2001, comprehensive proposed regulations to implement these actions had not yet been published. Continuity of coordination efforts is hampered by changes in executive branch leadership. The President’s Council on Food Safety, created in 1998, was tasked with developing a comprehensive strategic plan for federal food safety activities. In August 2000, the council agreed to initiate an interagency process to address our recommendation that FDA and the Department of Transportation, among others, enhance food safety protections by developing a strategy to regulate animal feed while in transport. While the council published its strategic food safety plan in January 2001 that included numerous “action items” and recommendations for improving the federal food safety system, the council did not address a transport strategy for animal feed. Moreover, the council has not met since publishing the strategic plan, and it remains to be seen whether the new administration will act on the council’s recommendations. For example, the council’s strategic plan included an action item to allocate enforcement resources based on the potential risk to public health, but the President’s fiscal year 2002 budget showed little change in the allocation of food safety resources among agencies. We continue to believe, as we testified in 1999, that a single, independent food safety agency administering a unified, risk-based food safety system is the most effective solution to the current fragmentation of the federal food safety system. 
While there are difficulties involved in establishing a new government agency and opinions differ about the best organizational model for food safety, there is widespread national and international recognition of the need for uniform laws and consolidation of food safety activities under a single organization. Both the National Academy of Sciences and the President’s Council on Food Safety have joined us in calling for fundamental changes to the federal food safety system, including a reevaluation of the system’s organizational structure. Likewise, several former senior-level government officials who were responsible for federal food safety activities have called for major organizational and legal changes. Internationally, four countries—Canada, Denmark, Great Britain, and Ireland—have each recently consolidated their food safety responsibilities under a single agency. Several other countries or government organizations may be considering this option as well, including Argentina, Chile, Hong Kong, the Netherlands, and the European Union. In an August 1998 report, the National Academy of Sciences concluded that the current fragmented federal food safety system is not well equipped to meet emerging challenges. The academy found that “there are inconsistent, uneven, and at times archaic food statutes that inhibit use of science-based decision-making in activities related to food safety, and these statutes can be inconsistently interpreted and enforced among agencies.” As such, the academy concluded that to create a science-based food safety system, current laws must be revised. Accordingly, it recommended that the Congress change federal statutes so that food safety inspection and enforcement are based on scientific assessments of public health risks. 
The academy also recommended that food safety programs be administered by a single official in charge of all federal food safety resources and activities, including outbreak management, standard-setting, inspection, monitoring, surveillance, risk assessment, enforcement, research, and education. According to the academy’s report, many members of the committee tasked to conduct the study believed that a single agency headed by one administrator was the best way to provide the central, unified framework critical to improving the food safety system. However, assessing alternative organizational approaches was not possible in the time available or part of the committee’s charge. Therefore, the committee did not recommend a specific organizational structure but instead provided several possible configurations for illustrative purposes. These were (1) forming a Food Safety Council of representatives from the agencies, with a central chair appointed by the President, reporting to the Congress and having control of resources; (2) designating one current agency as the lead agency and making the head of that agency the responsible individual; (3) establishing a single agency reporting to one current cabinet-level secretary; or (4) establishing an independent single agency at the cabinet level. The committee also proposed that a detailed examination of specific organizational changes be conducted as a part of a future study. Such a study would be in keeping with the Congress’ intent, as expressed in the fiscal year 1998 conference report on food safety appropriations. This conference report directed that if the academy’s study recommended an independent food safety agency, a second study be conducted to determine the agency’s responsibilities to ensure that the food safety system protects the public health. In response to the academy’s report, the President established a Council on Food Safety and charged it to develop a comprehensive strategic plan for federal food safety activities, among other things. 
The Council’s Food Safety Strategic Plan, released on January 19, 2001, recognized the need for a comprehensive food safety statute and concluded that “the current organizational structure makes it more difficult to achieve future improvements in efficiency, efficacy, and allocation of resources based on risk.” The council analyzed several organizational reform options. Two of the options involved enhanced coordination within the existing structure, and the other two involved consolidation of responsibilities, either within an existing organization or a stand-alone food safety agency. The council’s analysis of the options found that coordination may lead to marginal improvements but would do little to address the fragmentation, duplication, and conflict inherent in the current system. The council concluded that consolidation could eliminate duplication and fragmentation, create a single voice for food safety, facilitate priority setting and resource allocation based on risk, and provide greater accountability. The council recommended the development of comprehensive, unifying food safety legislation to provide a risk-based, prevention-oriented system for all food, followed by the development of a corresponding organizational reform plan. Former key government food safety officials at USDA and FDA have acknowledged the limitations of the current regulatory system. As shown in table 1, many former government officials recognize the need for and support the transition to a single food safety agency. Some of these officials believe the single agency could be consolidated within an existing department, and others favor an independent agency. Regardless, they all recognize the need for legislative overhaul to provide a uniform, risk-based approach to food safety. Although in the past the U.S. 
food safety system has served as a model for other countries, recently Canada, Denmark, Great Britain, and Ireland have taken the lead by consolidating much of their food safety responsibilities in a single agency in each country. As we reported in 1999, responding to heightened public concerns about the safety of their food supplies, Great Britain and Ireland chose to consolidate responsibilities in agencies that report to or are represented by their ministers of health. The British consolidated food safety activities into an independent agency, represented before Parliament by the Minister of Health, largely because of the agriculture ministry’s perceived mishandling of an outbreak of Bovine Spongiform Encephalopathy (commonly referred to as “mad cow” disease). Public opinion viewed the agriculture ministry, which had the dual responsibilities of promoting agriculture and the food industry and regulating food safety, as slow to react because it was too concerned about protecting the cattle industry. Canada and Denmark were more concerned about program effectiveness and cost saving and accordingly consolidated activities in agencies that report to their ministers of agriculture, who already controlled most of the food safety resources. For example, Canada did not face a loss of public confidence, as did Great Britain and Ireland, but instead faced a budgetary crisis; it therefore sought ways to reduce federal expenditures. Denmark reorganized the whole Ministry of Agriculture, and all food regulation is now in the newly created Ministry of Food, Agriculture, and Fisheries. Recent events have raised the specter of bioterrorism as an emerging risk factor for our food safety system. Bioterrorism is the threatened or intentional release of biological agents (viruses, bacteria, or their toxins) for the purpose of influencing the conduct of government or of intimidating or coercing a civilian population. 
These agents can be released through food as well as the air, water, or insects. To respond to potential bioterrorism, federal food safety regulatory agencies need to be prepared to efficiently coordinate their activities and respond quickly to protect the public health. Under the current structure, we believe that there are very real doubts about the system’s ability to detect and quickly respond to any such event. To date, the only known bioterrorist act in the United States involved deliberate contamination of food with a biological agent. In 1984, a religious cult intentionally contaminated salad bars in local restaurants in Oregon to prevent people from voting in a local election. Although no one died, 751 people were diagnosed with foodborne illnesses. Since then, federal officials have identified only one other act of deliberate food contamination with a biological agent, which affected 13 individuals in 1996, but numerous threats and hoaxes have been reported. Both FDA and FSIS have plans and procedures for responding to deliberate food contamination incidents, but the effectiveness of these procedures is largely untested for contamination involving biological agents. Therefore, we recommended in 1999 that FDA and FSIS test their plans and procedures using simulated exercises that evaluate the effectiveness of federal, state, and local agencies’ and industry’s responses to various types of deliberate food contamination with a biological agent. Moreover, in September 2001 we reported that coordination of federal terrorism research, preparedness, and response programs is fragmented. Separately, we reported that several relevant agencies have not been included in bioterrorism-related policy and response planning. For example, USDA officials told us that their department was not involved, even though it would have key responsibilities if terrorists targeted the food supply. To conclude, Mr. 
Chairman, we believe that creating a single food safety agency to administer a uniform, risk-based inspection system is the most effective way for the federal government to resolve long-standing problems; address emerging food safety issues, including acts of deliberate contamination involving biological agents; and ensure the safety of the nation’s food supply. In addition, the National Academy of Sciences and the President’s Council on Food Safety have reported that comprehensive, uniform, and risk-based food safety legislation is needed to provide the foundation for a consolidated food safety system. While we believe the case for a single food safety agency has been compelling for some time, recent events make this action more imperative. Numerous details, of course, remain to be worked out, but it is essential that the fundamental decision to create such an agency be made and the process for resolving outstanding technical issues be started. To provide more efficient, consistent, and effective federal oversight of the nation’s food supply, we recommend that the Congress consider enacting comprehensive, uniform, and risk-based food safety legislation and commissioning the National Academy of Sciences or a blue ribbon panel to conduct a detailed analysis of alternative organizational food safety structures and report the results of such an analysis to the Congress. Pending Congressional action to establish a single food safety agency and enact uniform, risk-based legislation, we recommend that the Secretary of Agriculture, the Secretary of Health and Human Services, and the Assistant to the President for Science and Technology, as joint chairs of the President’s Council on Food Safety, reconvene the council to facilitate interagency coordination on food safety regulation and programs. For future contacts regarding this testimony, please contact Robert A. Robinson at (202) 512-3841. Individuals making key contributions to this testimony included Lawrence J. 
Dyckman, Keith W. Oleson, Stephen D. Secrist, Diana P. Cheng, Maria C. Gobin, Natalie H. Herzog, and John M. Nicholson Jr.

Food and Drug Administration (FDA), within the Department of Health and Human Services (HHS), is responsible for ensuring that domestic and imported food products (except meat, poultry, and processed egg products) are safe, wholesome, and properly labeled. The Federal Food, Drug, and Cosmetic Act, as amended, is the major law governing FDA’s activities to ensure food safety and quality. The act also authorizes FDA to conduct surveillance of all animal drugs, feeds, and veterinary devices to ensure that drugs and feeds used in animals are safe, effective, and properly labeled and produce no human health hazards when used in food-producing animals. Centers for Disease Control and Prevention (CDC), within HHS, is charged with protecting the nation’s public health by leading and directing the prevention and control of diseases and responding to public health emergencies. CDC conducts surveillance for foodborne diseases; develops new epidemiological and laboratory tools to enhance surveillance and detection of outbreaks; and performs other activities to strengthen local, state, and national capacity to identify, characterize, and control foodborne hazards. CDC engages in public health activities related to food safety under the general authority of the Public Health Service Act, as amended. Food Safety and Inspection Service (FSIS), within the U.S. Department of Agriculture (USDA), is responsible for ensuring that meat, poultry, and some eggs and egg products moving in interstate and foreign commerce are safe, wholesome, and correctly marked, labeled, and packaged. FSIS carries out its inspection responsibilities under the Federal Meat Inspection Act, as amended, the Poultry Products Inspection Act, as amended, and the Egg Products Inspection Act, as amended. 
Animal and Plant Health Inspection Service (APHIS), within USDA, is responsible for ensuring the health and care of animals and plants. APHIS has no statutory authority for public health issues unless the concern to public health is also a concern to the health of animals or plants. APHIS identifies research and data needs and coordinates research programs to protect the animal industry against pathogens or diseases that pose a risk to humans, thereby improving food safety. Grain Inspection, Packers and Stockyards Administration (GIPSA), within USDA, is responsible for establishing quality standards and providing for a national inspection system to facilitate the marketing of grain and other related products. Certain inspection services, such as testing corn for the presence of aflatoxin and StarLink, enable the market to assess the value of a product on the basis of its compliance with contractual specifications and FDA requirements. GIPSA has no regulatory responsibility regarding food safety. Under a memorandum of understanding with FDA, GIPSA reports to FDA certain lots of grain, rice, pulses, or food products (which were officially inspected as part of GIPSA’s service functions) that are considered objectionable under the Federal Food, Drug, and Cosmetic Act, as amended, the U.S. Grain Standards Act, as amended, and the Agriculture Marketing Act of 1946, as amended. Agricultural Marketing Service (AMS), within USDA, is primarily responsible for establishing quality and condition standards and for grading the quality of dairy, fruit, vegetable, livestock, meat, poultry, and egg products. As part of this grading process, AMS considers safety factors, such as the cleanliness of the product. AMS also runs a voluntary pesticide data program and carries out a wide array of programs to facilitate marketing. 
It carries out these programs under more than 50 statutes, including the Agricultural Marketing Agreement Act of 1937, as amended; the Agricultural Marketing Act of 1946, as amended; the Egg Products Inspection Act, as amended; the Export Apple and Pear Act, as amended; the Export Grape and Plum Act, as amended; the Federal Seed Act; and the Food Quality Protection Act. AMS is largely funded with user fees. The agency did not specify its food safety resources. We did not obtain these agencies’ food safety budgets due to the small amount of funds for these activities in previous years.

Food Safety: CDC Is Working to Address Limitations in Several of Its Foodborne Disease Surveillance Systems (GAO-01-973, Sept. 7, 2001).
Food Safety: Overview of Federal and State Expenditures (GAO-01-177, Feb. 20, 2001).
Food Safety: Federal Oversight of Seafood Does Not Sufficiently Protect Consumers (GAO-01-204, Jan. 31, 2001).
Food Safety: Actions Needed by USDA and FDA to Ensure That Companies Promptly Carry Out Recalls (GAO/RCED-00-195, Aug. 17, 2000).
Food Safety: Improvements Needed in Overseeing the Safety of Dietary Supplements and “Functional Foods” (GAO/RCED-00-156, July 11, 2000).
Meat and Poultry: Improved Oversight and Training Will Strengthen New Food Safety System (GAO/RCED-00-16, Dec. 8, 1999).
Food Safety: Agencies Should Further Test Plans for Responding to Deliberate Contamination (GAO/RCED-00-3, Oct. 27, 1999).
Food Safety: U.S. Needs a Single Agency to Administer a Unified, Risk-Based Inspection System (GAO/T-RCED-99-256, Aug. 4, 1999).
Food Safety: U.S. Lacks a Consistent Farm-to-Table Approach to Egg Safety (GAO/RCED-99-184, July 1, 1999).
Food Safety: Experiences of Four Countries in Consolidating Their Food Safety Systems (GAO/RCED-99-80, Apr. 20, 1999).
Food Safety: Opportunities to Redirect Federal Resources and Funds Can Enhance Effectiveness (GAO/RCED-98-224, Aug. 6, 1998).
Food Safety: Federal Efforts to Ensure the Safety of Imported Foods Are Inconsistent and Unreliable (GAO/RCED-98-103, Apr. 30, 1998).
Food Safety: Agencies’ Handling of a Dioxin Incident Caused Hardships for Some Producers and Processors (GAO/RCED-98-104, Apr. 10, 1998).
Agricultural Exports: U.S. Needs a More Integrated Approach to Address Sanitary/Phytosanitary Issues (GAO/NSIAD-98-32, Dec. 11, 1997).

Tens of millions of Americans become ill and thousands die each year from eating unsafe foods. The current food safety system is a patchwork structure that cannot address existing and emerging food safety risks. The current system was cobbled together over many years to address specific health threats from particular foods. The resulting fragmented organizational and legal structure causes inefficient use of resources, inconsistent oversight and enforcement, and ineffective coordination. Food safety issues must be addressed comprehensively--that is, by preventing contamination through the entire food production cycle, from farm to table. A single, food safety agency responsible for administering a uniform set of laws is needed to resolve long-standing problems with the current system; deal with emerging food safety issues, such as the safety of genetically modified foods or deliberate acts of contamination; and ensure a safe food supply.
The September 11 attacks illustrated the vulnerabilities in the visa process when it became known that all 19 of the terrorist hijackers had been issued visas to enter the United States. Before the attacks, the State Department’s visa operations focused primarily on screening applicants to determine whether they intended to work or reside illegally in the United States. In deciding on who should receive a visa, consular officers relied on the State Department’s consular “lookout” system, a name check system that incorporates information from many agencies, as the primary basis for identifying potential terrorists. Consular officers were encouraged to facilitate legitimate travel and, at some posts we visited, faced pressure to issue visas. The State Department gave overseas consular sections substantial discretion in determining the level of scrutiny applied to visa applications and encouraged streamlined procedures to provide customer service and deal with a large workload. As a result, according to State Department officials and documents, consular sections worldwide adopted practices that reduced the review time for visa applications. For example, some posts decided not to interview applicants who appeared likely to return to their country at the end of their allotted time in the United States. Since the terrorist attacks, the U.S. government has introduced some changes to strengthen the visa process. For example, the State Department has, with the help of other agencies, almost doubled the number of names and the amount of information in the lookout system. Further, the Department began seeking new or additional interagency clearances on selected applicants to screen out terrorists, although these checks were not always completed by other U.S. agencies in a thorough or timely manner. 
We also observed that consular officers at some of the posts we visited were spending more time reviewing visa applications and interviewing applicants; they were able to do so, at least temporarily, because the number of visa applications decreased dramatically after September 11. While these actions have strengthened the visa process, our work in 2002 showed that there were widely divergent practices and procedures among and within overseas posts regarding (1) the authority of consular officers to deny questionable applicants a visa, (2) the role of the visa process in ensuring national security, and (3) the types of changes in posts’ visa policies and procedures that are appropriate given the need for heightened border security. Also, the Departments of State and Justice disagreed on the evidence needed to deny a visa on terrorism grounds. Most consular officers at the posts we visited stated that more comprehensive guidance and training would help them use the visa process as an antiterrorism tool to detect questionable applicants. In July 2002, the Secretary of State acknowledged that the visa process needed to be strengthened and indicated that the State Department is working to identify areas for improvement. In addition, the State Department has stressed that it must have the best interagency information available on persons who are potential security risks in order to make good visa decisions. The additional data received from the intelligence and law enforcement community has increased State’s access to information for use in the visa adjudication process. In addition, State indicated that it will work with Homeland Security to establish the systems and procedures that will ensure seamless sharing of information in the future. 
We also found that human capital limitations are a concern, as some consular sections may need more staff if the number of visa applicants returns to pre-September 11 levels or if State continues to institute new security checks for visa applicants. At some posts the demand for visas, combined with increased workload per visa applicant, still exceeded available staff, as evidenced by the waiting time for a visa appointment and by the overtime worked by consular staff. Moreover, several posts we visited reported that they could manage their existing workload with current staffing but would need more staff if they faced an increase in either security clearance procedures or visa applications. In our October 2002 report, we concluded that the visa process could be an important tool to keep potential terrorists from entering the United States but that weaknesses limited its effectiveness as an antiterrorism tool. The State Department needed to improve implementation of the visa process to increase its effectiveness and consistency among posts. To strengthen the visa process as an antiterrorism tool, we recommended that the Secretary of State, in consultation with appropriate agencies, establish clear policy on addressing national security concerns through the visa process that is balanced with the desire to facilitate legitimate travel, provide timely customer service, and manage workloads; develop comprehensive, risk-based guidelines and standards on how consular affairs should use the visa process as a screen against potential terrorists; reassess staffing for visa operations in light of the current and anticipated number of visa applications and, if appropriate, request additional human resources to ensure that consular sections have adequate staff with necessary skills; and provide consular training courses to improve interview techniques, recognize fraudulent documents, understand terrorism trends, and better use the name check system. 
To address visa issues requiring coordination and actions across several agencies, we recommended that the Department of Homeland Security coordinate with appropriate agencies to establish governmentwide guidelines on the level of evidence needed to deny a visa on terrorism grounds under provisions of the Immigration and Nationality Act; reassess interagency headquarters’ security checks on visa applicants to verify that all the checks are necessary and promptly conducted, and provide clear guidance to overseas posts and headquarters agencies on their roles in conducting these checks; consider reassessing, on an interagency basis, visas issued before the implementation of the new security checks; reexamine visa operations on a regular basis to ensure that the operations effectively contribute to the overall national strategy for homeland security; and ensure that law enforcement and intelligence agencies promptly provide information to the State Department on persons who may pose a security risk and, therefore, should not receive visas. In its response to our recommendations, the Department of State noted that it has acted on or is currently acting on some of the issues we reported and continues to reexamine its visa process. Moreover, in January 2003, the Assistant Secretary for Consular Affairs reported that State plans to use our recommendations as a roadmap for improvements within the Bureau of Consular Affairs and in consular sections around the world. State has also indicated that it is currently undertaking a number of initiatives to review visa policies, staffing, and training needs. Furthermore, State said it is looking at refining various screening programs and will coordinate with other agencies to reassess interagency headquarters’ security checks. In our recent work on visa revocations, we again found weaknesses caused by the lack of comprehensive policies and coordination between agencies. 
The visa revocation process can be an important tool to prevent potential terrorists from entering the United States. Ideally, information on suspected terrorists would reach the Department of State before it decides to issue a visa; however, there will always be some cases in which the information arrives after the visa has been issued. Revoking a visa can mitigate this problem, but only if State notifies the appropriate agencies and if those agencies take appropriate actions to deny entry or investigate persons with a revoked visa. In our June 2003 report, we identified the policies and procedures of several agencies that govern the visa revocation process and determined the effectiveness of the process. We focused on all 240 visas that State revoked for terrorism concerns from September 11, 2001, to December 31, 2002. Our analysis indicated that the U.S. government has no specific written policy on the use of visa revocations as an antiterrorism tool and no written procedures to guide State in notifying relevant agencies of visas that have been revoked on terrorism grounds. State and INS have written procedures that guide some types of visa revocations; however, neither they nor the FBI has written internal procedures for notifying appropriate personnel to take action on visas revoked by the State Department. State and INS officials could articulate their informal policies and procedures for how, and for what purpose, their agencies have used the process to keep terrorists out of the United States, but neither they nor FBI officials had specific policies or procedures that covered investigating, locating, or taking appropriate action in cases where the visa holder had already entered the country. The lack of formal, written policies and procedures may have contributed to systemic weaknesses in the visa revocation process that increase the probability of a suspected terrorist entering or remaining in the United States. 
At the time of visa revocation, State should notify its consular officers at overseas posts, the Department of Homeland Security, and the FBI. State would have to provide notice of revocation, along with supporting evidence to the appropriate units within Homeland Security and the FBI, which would allow them to take appropriate action. In our review of the 240 visa revocations, we found that (1) appropriate units within INS and the FBI did not always receive timely notification of the revocations; (2) lookouts were not consistently posted to the agencies’ watch lists; (3) 30 individuals whose visas were revoked on terrorism grounds entered the United States and may still remain in the country; (4) INS investigators were not usually notified of individuals with revoked visas who had entered the United States and therefore did not open investigations on them; and (5) the FBI did not investigate individuals with revoked visas unless these individuals were also in TIPOFF. For instance: In a number of cases, notification between State and the appropriate units within INS did not take place or was not completed in a timely manner. For example, INS officials said they did not receive any notice of the revocations from State in 43 of the 240 cases. In another 47 cases, the INS Lookout Unit received the revocation notice only via a cable, which took, on average, 12 days to reach the Unit. In cases in which the INS Lookout Unit had received notification, it generally posted information on these revocations in its lookout database within 1 day of receiving the notice. In cases where it was not notified, it could not post information on these individuals in its lookout database, which precluded INS inspectors at ports of entry from knowing that these individuals had had their visas revoked. Moreover, the State Department neglected to enter the revocation action for 64 of the 240 cases into its own watch list. 
Our analysis of INS arrival and departure data indicates that 29 individuals entered the United States before their visas were revoked and may still remain in the country. These data also show that INS inspectors admitted at least four other people after the visa revocation, one of whom may still remain in the country. However, in testimony on June 18, 2003, the FBI said that none of these 30 individuals posed a terrorist threat since they were not in TIPOFF, a State-operated interagency terrorist watch list that FBI’s Foreign Terrorist Tracking Task Force monitors. State Department officials told us during our review that State relied on sources of information in addition to TIPOFF in making visa revocation decisions. INS inspectors prevented at least 14 others from entering the country because the INS watch list included information on the revocation action or had another lookout on them. INS investigators said they did not open cases on these individuals with revoked visas who had entered the United States because their unit had not been notified that State had revoked visas because of terrorism concerns and that these persons had entered the country. They added that, in the 10 cases that were referred to them, they conducted a full investigation of possible immigration violations. INS officials said that it would be challenging to remove individuals with revoked visas who had entered the United States unless they were in violation of their immigration status. Homeland Security officials said that whether a visa revocation, after an individual has been admitted on that visa, renders the individual out of status is a legally unresolved question. FBI officials told us they were not concerned about individuals whose visas were revoked because of terrorism concerns unless the individuals’ names were in TIPOFF. 
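The kind of date-matching analysis described here can be sketched in a few lines. The record layout, identifiers, and dates below are hypothetical illustrations of our own, not the actual INS or State data: each recorded arrival is compared against the visa's revocation date, and individuals with no recorded departure are flagged as possibly still in the country.

```python
from datetime import date

# Hypothetical data for illustration only; the actual INS and State
# record layouts, identifiers, and dates are not reproduced here.
revocations = {"P1": date(2002, 3, 1), "P2": date(2002, 6, 15)}

# (person_id, arrival_date, departure_date or None if no departure record)
movements = [
    ("P1", date(2002, 1, 10), None),
    ("P2", date(2002, 7, 1), date(2002, 8, 1)),
]

def classify_entries(revocations, movements):
    """Classify each entry as before/after revocation; flag missing departures."""
    results = []
    for pid, arrived, departed in movements:
        timing = "before" if arrived < revocations[pid] else "after"
        results.append((pid, timing, departed is None))
    return results

for pid, timing, present in classify_entries(revocations, movements):
    status = "may still remain in the country" if present else "has departed"
    print(f"{pid}: entered {timing} revocation; {status}")
```

In this sketch, P1 illustrates the 29 individuals who entered before revocation with no recorded departure, and P2 illustrates an admission after revocation followed by a departure.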
They said that they had a system in place to monitor individuals in TIPOFF who enter the country but that they would not investigate individuals who were not in TIPOFF based solely on the revocation notice from State. FBI’s position indicates that FBI is not taking into account all sources of information that State uses in determining if a person may pose a terrorism threat. We concluded that the visa process could be an important tool to keep potential terrorists from entering the United States. However, there are currently major gaps in the notification and investigation processes. One reason for this is that there are no comprehensive written policies and procedures on how notification of a visa revocation should take place and what agencies should do when they are notified. As a result, there is heightened risk that persons whom State believed should not have been issued a visa because of terrorism concerns could enter the country with revoked visas or be allowed to remain after their visas are revoked without undergoing investigation or monitoring. To strengthen the visa revocation process as an antiterrorism tool, we recommended that the Secretary of Homeland Security, in conjunction with the Secretary of State and the Attorney General, develop specific policies and procedures for the interagency visa revocation process to ensure that notification of visa revocations for suspected terrorists and relevant supporting information is transmitted from State to immigration and law enforcement agencies and their respective inspection and investigation units in a timely manner; develop a specific policy on actions that immigration and law enforcement agencies should take to investigate and locate individuals whose visas have been revoked for terrorism concerns and who remain in the United States after revocation; and determine if persons with visas revoked on terrorism grounds are in the United States and, if so, whether they pose a security threat. 
In response to our recommendations, the Department of State testified that the Bureau of Consular Affairs is engaged in an effort to formalize standard operating procedures. The Department of Homeland Security also remarked that it was working to better standardize its procedures. The FBI determined that 47 of the 240 persons with revoked visas were in TIPOFF and therefore could pose a terrorism threat but that it had no indication that any of these individuals were in the country. The Department of State has recently issued guidance to its posts about using the visa process as an antiterrorism tool. In May 2003, the Secretary of State announced that, by August 1, 2003, with a few exceptions, all foreign individuals seeking to visit the United States would be interviewed prior to receiving a visa. The purpose of this guidance is to tighten the visa process to protect U.S. security and to prepare for the eventual fingerprinting of applicants that State must undertake to meet the legislated mandate to include a biometric identifier with issued visas. To comply with the new guidance, some posts may have to make substantial changes in how they handle nonimmigrant applications. State acknowledges that posts may find that personnel or facility resources are not adequate to handle the additional number of interviews. Even though State expects interview backlogs, the Department has indicated that posts are to implement the interview requirement with existing resources. It is not certain what impact the new policy will have on visa issuance. However, education, business, and government officials have expressed concern that it was already taking too long to issue visas and that without a commensurate increase in resources to accommodate the heavier workload that may result from the new requirement, there could be serious delays for those seeking to visit the United States. 
In March 2003, the House Committee on Science held a hearing on “Dealing with Foreign Students and Scholars in the Age of Terrorism: Visa Backlogs and Tracking Systems.” In June 2003, the House Committee on Small Business held a hearing on “The Visa Approval Backlog and its Impact on American Small Business.” In both hearings, higher education and business leaders and agency officials testified on the negative impacts of delays in issuing visas. The testimonies also highlighted the difficulties of balancing national security interests with the desire to facilitate travel. At the request of the House Committee on Science, we are currently examining the amount of time taken to adjudicate visa applications from foreign science students and scholars. As part of this work, we will be looking at how the new interview policy will affect the process. Before I conclude my statement, I would like to raise some questions that the subcommittee may want to consider in its oversight role: Have the Departments of State, Homeland Security, and Justice reached agreement on how best to communicate information on individuals who should not be issued visas and on individuals whose visas have been revoked? Have the Departments of State, Homeland Security, and Justice agreed on the level of evidence needed to deny and revoke visas? Does the Department of State have an adequate number of trained staff for visa processing, especially if the number of visa applicants or security checks increases? Do the Departments of Homeland Security and Justice agree on whether persons who are in the country and have visas that have been revoked on terrorism concerns should be investigated and, if so, by which agency? Mr. Chairman, I would like to reiterate our two overarching areas of concern for U.S. visa policy. First, the U.S. government needs to have clear, comprehensive policies governing U.S. 
visa processes and procedures so that all agencies involved agree on the level of security screening for foreign nationals both at our consulates abroad and at ports of entry. These policies should balance the need for national security with the desire to facilitate legitimate travel to the United States. The Departments of State and Homeland Security should coordinate to establish governmentwide guidelines on the level of evidence needed to deny a visa. There should also be a specific policy for the interagency visa revocation process, including the actions that immigration and law enforcement agencies should take to investigate and locate individuals with revoked visas who have entered the country. The second area of concern is the continued need for coordination and information sharing among agencies. If our intelligence or law enforcement community is concerned that an individual poses a security risk, we have to make sure that this information is communicated to the State Department so that consular officers can deny and, if need be, revoke visas in a timely manner. Similarly, when State revokes a visa for terrorism concerns, we have to make sure that full information on the revocation is communicated to immigration and law enforcement agencies. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or members of the subcommittee may have. For future contacts regarding this testimony, please call Jess Ford at (202) 512-4128. Individuals making key contributions to this testimony included John Brummet, Andrea Miller, Kate Brentzel, Janey Cohen, Lynn Cothern, and Suzanne Dove. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Several Interior agencies are responsible for carrying out the Secretary’s Indian trust responsibilities. These agencies include the Bureau of Indian Affairs (BIA) and its Office of Trust Responsibilities (OTR), which is responsible for resource management and land and lease ownership information; BIA’s 12 Area Offices and 85 Agency Offices; the Bureau of Land Management (BLM) and its lease inspection and enforcement functions; and the Minerals Management Service’s (MMS) Royalty Management Program, which collects and accounts for oil and gas royalties on Indian leases. In addition, an Office of the Special Trustee for American Indians was established by the American Indian Trust Fund Management Reform Act of 1994. This office, implemented by Secretarial Order in February 1996, has oversight responsibility over Indian trust fund and asset management programs in BIA, BLM, and MMS. The Order transferred BIA’s Office of Trust Funds Management (OTFM) to the Office of the Special Trustee for American Indians and gave the Special Trustee responsibility for the financial trust services performed at BIA’s Area and Agency Offices. At the end of fiscal year 1995, OTFM reported that Indian trust fund accounts totaled about $2.6 billion, including approximately $2.1 billion for about 1,500 tribal accounts and about $453 million for nearly 390,000 Individual Indian Money (IIM) accounts. The balances in the trust fund accounts have accumulated primarily from payments of claims; oil, gas, and coal royalties; land use agreements; and investment income. Fiscal year 1995 reported receipts to the trust accounts from these sources totaled about $1.9 billion, and disbursements from the trust accounts to tribes and individual Indians totaled about $1.7 billion. OTFM uses two primary systems to account for the Indian trust funds—an interim, core general ledger and investment system and BIA’s Integrated Resources Management System (IRMS). 
OTR’s realty office uses the Land Records Information System (LRIS) to record official Indian land and beneficial ownership information. BLM maintains a separate system for recording mineral lease and production information, and MMS maintains separate royalty accounting and production information systems. Our assessment of BIA’s trust fund reconciliation and reporting to tribes is detailed in our May 1996 report, which covered our efforts to monitor BIA’s reconciliation project over the past 5-1/2 years. As you requested, we also assessed Interior’s trust fund management improvement initiatives. In order to do this, we contacted the Special Trustee for American Indians, OTFM officials, and OTR’s Land Records Officer for information on the status of their management improvement plans and initiatives. We also contacted tribal representatives for their views. We focused on Interior agency actions to address recommendations in our previous reports and testimonies and obtained information on new initiatives. BIA recently completed its tribal trust fund reconciliation project, which involved a massive effort to locate supporting documentation and reconstruct historical trust fund transactions so that account balances could be validated. BIA provided a report package to each tribe on its reconciliation results in January 1996. Interior’s prototype summary reconciliation report to tribes shows that BIA’s reconciliation contractor verified 218,531 of tribes’ noninvestment receipt and disbursement transactions that were recorded in the trust fund general ledger. However, despite over 5 years of effort and about $21 million in contracting fees, a total of $2.4 billion for 32,901 receipt and disbursement transactions recorded in the general ledger could not be traced to supporting documentation due to missing records. In addition, BIA’s reconciliation report package did not disclose known limitations in the scope and methodology used for the reconciliation process. 
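Conceptually, the reconciliation traced each recorded general-ledger transaction to supporting documentation and tallied the count and dollar value of those that could not be matched. The following minimal sketch uses invented transaction identifiers and amounts, not BIA's actual ledger data:

```python
# Hypothetical ledger entries for illustration; BIA's actual general
# ledger and document archives are not reproduced here.
ledger = [("T001", 500.00), ("T002", 1200.00), ("T003", 300.00)]
documented_ids = {"T001", "T003"}  # transactions with supporting documents located

# Split the ledger into traced and untraceable transactions.
verified = [t for t in ledger if t[0] in documented_ids]
untraced = [t for t in ledger if t[0] not in documented_ids]
untraced_total = sum(amount for _, amount in untraced)

print(f"{len(verified)} transactions verified; "
      f"{len(untraced)} untraceable totaling ${untraced_total:,.2f}")
```

In the actual project, the "untraceable" bucket held 32,901 transactions totaling $2.4 billion, against 218,531 verified transactions.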
For example, BIA did not disclose or discuss the procedures included in the reconciliation contract which were not performed or could not be completed. Also, BIA did not explain substantial changes in scope or procedures contained in contract modifications and issue papers, such as accounts and time periods that were not covered and alternative source documents used. Further, BIA did not disclose that the universe of leases was unknown or the extent to which substitutions were made to the lease sample originally selected for reconciliation. In order for the tribes to conclude on whether the reconciliation represents as full and complete an accounting as possible, it was important that BIA explain the limitations in reconciliation scope and methodology and the procedures specified under the original contract that were not performed or were not completed. At a February 1996 meeting in Albuquerque, New Mexico, where BIA and its reconciliation contractor summarized the reconciliation results, tribes raised questions about the adequacy and reliability of the reconciliation results. The American Indian Trust Fund Management Reform Act of 1994 required that the Secretary of the Interior report to congressional committees by May 31, 1996, including a description of the methodology used in reconciling trust fund accounts and the tribes’ conclusions as to whether the reconciliation represents as full and complete an accounting of their funds as possible. The Secretary’s May 31, 1996, report indicates that 3 tribes have disputed their account balances, 2 have accepted their account balances, and 275 tribes have not yet decided whether to accept or dispute their account balances. If Interior is not able to reach agreement with tribes on the reconciliation results, a legislated settlement process would prove useful in resolving disputes about account balances. Our March 1995 testimony suggested that the Congress consider establishing a legislated settlement process. 
Our September 1995 report provided draft settlement legislation for discussion purposes. The draft legislation would provide for a mediation process and, if mediation does not resolve disputes, a binding arbitration process. The proposed process draws on advice provided us by the Federal Mediation and Conciliation Service and the rules of the American Arbitration Association. Both of these organizations have extensive experience in the use of third party facilitators to provide alternative dispute resolution. The proposed process offers a number of benefits, including flexibility in presentation of evidence and, because the decision of the arbitrators would be binding and could not be appealed, a final resolution of the dispute. BIA’s reconciliation project attempted to discover any discrepancies between its accounting information and historical transactions that occurred prior to fiscal year 1993. However, unless the deficiencies in Interior’s trust fund management that allowed those discrepancies to occur are corrected, such discrepancies could continue to occur, possibly leading to a need for future reconciliation efforts. Since 1991, our testimonies and reports on BIA’s efforts to reconcile trust fund accounts have called for a comprehensive strategic plan to guide future trust fund management and ensure that trust fund accounts are accurately maintained in the future. While OTFM and OTR have undertaken a number of corrective actions, progress has been slow, results have been limited, and further actions are needed. OTFM, Interior, and OTR have initiated several trust fund management improvements during the past 3 years. 
These include acquiring a cadre of experienced trust fund financial management staff; issuing trust fund IIM accounting procedures to BIA field offices, developing records management procedures manuals, and issuing a trust fund loss policy; implementing an interim, core general ledger and investment accounting system and performing daily cash reconciliations; studying IIM and subsidiary system issues; reinstating annual trust fund financial statement audits; and initiating improvements to the Land Records Information System. Our 1991 testimonies and June 1992 report identified a lack of trained and experienced trust fund financial management staff. Previous studies and audits by Interior’s Inspector General and public accounting firms also identified this problem. Our June 1992 report recommended that BIA prepare an organization and staffing analysis to determine appropriate roles, responsibilities, authorities, and training and supervisory needs as a basis for sound trust fund management. In response to our recommendation, in 1992, OTFM contracted for a staffing and workload analysis and developed an organization plan to address critical trust fund management functions. The appropriations committees approved OTFM’s 1994 reorganization plan. As of October 1995, OTFM had made significant progress in hiring qualified financial management and systems staff. However, during fiscal year 1996, 27 BIA personnel displaced by BIA’s reduction-in-force were reassigned to OTFM. This represents about one-third of OTFM’s on-board staff. Some of these reassigned staff displaced OTFM staff, while others filled vacant positions that would otherwise have been filled through specialized hiring. As a result, OTFM will face the challenge of providing additional supervision and training for these reassigned staff while continuing to work with BIA’s Area and Agency Office trust accountants to monitor corrective actions and plan for additional improvements. 
Our April 1991 testimony identified a lack of consistent, written policies and procedures for trust fund management. We recommended that BIA develop policies and procedures to ensure that trust fund balances remain accurate once the accounts are reconciled. Our April 1994 testimony reiterated this recommendation and further recommended that BIA initiate efforts to develop complete and consistent written trust fund management policies and procedures and place a priority on their issuance. BIA has not yet developed a comprehensive set of policies and procedures for trust fund management. However, OTFM developed two volumes of trust fund IIM accounting procedures for use by BIA’s Area and Agency Office trust fund accountants and provided them to BIA’s Area and Agency Offices during 1995. Also, during 1995, OTFM developed two records management manuals, which address file improvements and records disposition. Missing records were the primary reason that many trust fund accounts could not be reconciled during BIA’s recent reconciliation effort. In addition, OTFM is developing a records management implementation plan, including an automated records inventory system. In January 1992 and again in January 1994, we reported that BIA’s trust fund loss policy did not address the need for systems and procedures to prevent and detect losses, nor did it instruct BIA staff on how to resolve losses if they occurred. The policy did not address what constitutes sufficient documentation to establish the existence of a loss, and its definition of loss did not include interest that was earned but not credited to the appropriate account. Our January 1994 report suggested a number of improvements, such as articulating steps to detect, prevent, and resolve losses. OTFM addressed our suggestions and issued a revised trust fund loss policy in 1995. 
However, while OTFM has made progress in developing policies and procedures, OTFM officials told us that BIA’s Area and Agency Office trust accountants have not consistently implemented these policies and procedures. In addition to developing selected policies and procedures, OTFM officials told us that they began performing monthly reconciliations of the trust fund general ledger to Treasury records in fiscal year 1993 and that they work with BIA Area and Agency Offices to ensure that unreconciled amounts are properly resolved. OTFM officials also told us that they have had limited resources to monitor Agency Office reconciliation performance and assist BIA Agency Office personnel in resolving reconciliation discrepancies. While we have not reviewed this reconciliation process, we expect that it will be examined in connection with the recently reinstated trust fund financial statement audits. In addition, an OTFM official told us that a lack of resources has impeded OTFM’s performance of its quality assurance function, which was established to perform internal reviews to help ensure the quality of trust fund management across BIA offices. For example, according to the OTFM official, until recently, funds were not available to travel to Area and Agency Offices to determine whether the accounting desk procedures and trust fund loss policy have been properly implemented. Our June 1992 report recommended that BIA review its current systems as a basis for determining whether systems modifications will most efficiently bring about needed improvements or whether alternatives should be considered, including cross-servicing arrangements, contracting for automated data processing services, or new systems design and development. In response to our recommendation, OTFM explored commercially available off-the-shelf trust accounting systems and contracted for an interim, core general ledger and investment accounting system. 
OTFM made a number of other improvements related to implementing the interim, core trust accounting system. For example, OTFM obtained Office of the Comptroller of the Currency assistance to develop core general ledger and investment accounting system operating procedures; initiated direct deposit of collections to BIA Treasury accounts through the Automated Clearing House; initiated automated payment processing, including electronic certification, to facilitate direct deposit of receipts to tribal accounts; conducted a user survey and developed a systems user guide; established a help desk to assist system users by providing information on the new system, including a remote communication package for tribal dial-in capability; and provided system access to Area and Agency Offices and tribal personnel. While the new system has eliminated the need for manual reconciliations between the general ledger and investment system and facilitates reporting and account statement preparation, tribes and Indian groups have told us that the new account statements do not provide sufficient detail for them to understand their account activity. For example, they said that because principal and interest are combined in the account statements, it is difficult to determine interest earnings. They told us that the account statements also lack information on investment yields, duration to maturity, and adequate benchmarking. For tribes that have authority to spend interest earnings, but not principal amounts, this lack of detail presents accountability problems. Representatives of some tribes told us that they either have or plan to acquire systems to fill this information gap. OTFM is planning system enhancements to separately identify principal and interest earnings. However, additional enhancements would be needed to address investment management information needs. 
In January 1996, the Special Trustee formed a working group consisting of tribal representatives and members of allottee associations, which represent individual Indians; BIA and Office of Special Trustee field office staff; and OTFM staff to address IIM and subsidiary accounting issues. In addition, OTFM has scheduled four consultation meetings with tribes and individual Indians between June and August 1996 to determine how best to provide customer services to IIM account holders. These groups will also consider ways to reduce the number of small, inactive IIM accounts. According to the Special Trustee, about 225,000 IIM accounts have balances of less than $10. In 1995, OTFM initiated a contract to resume audits of the trust fund financial statements. OTFM had not had a trust fund financial statement audit since 1990, pending completion of the trust fund account reconciliation project. The fiscal year 1995 audit is covering the trust fund Statement of Assets and Trust Fund Balances, and the fiscal year 1996 audit will cover the same statement and a Statement of Changes in Trust Fund Balances. In 1993, BIA’s Office of Trust Responsibility (OTR) initiated improvements to its Land Records Information System (LRIS). These improvements were to automate the chain-of-title function and result in more timely land ownership determinations. In September 1994, we reported that OTR had 2-year backlogs in ownership determinations and recordkeeping which could have a significant impact on the accuracy of trust fund accounting data. We recommended that BIA provide additional resources to reduce these backlogs, through temporary hiring or contracting, until the LRIS improvements could be completed. However, according to OTR’s Land Records Officer, the additional resources were not made available as a result of fiscal year 1995 and 1996 budget cuts. 
Instead, BIA eliminated 6 Land Title and Records Office positions in fiscal year 1995 and an additional 30 positions in BIA’s fiscal year 1996 reduction-in-force. As a result, OTR’s five Land Title and Records Offices and its four Title Service Offices now have a combined staff of 90 full-time equivalent (FTE) positions—compared with 126 staff on September 30, 1994—to work on the backlog in title ownership determinations and recordkeeping while also handling current ownership determination requests. While current OTR backlogs are somewhat less than in 1994, BIA’s Land Records Officer estimates that over 104 staff years of effort would be needed to eliminate the current backlog. However, because LRIS improvements are on hold, these backlogs are likely to grow. While BIA and OTFM have begun actions to address many of our past recommendations for management improvements, progress has been limited and additional improvements are needed to ensure that trust funds are accurately maintained in the future and the needs of the beneficiaries are well-served. For example, BIA’s IRMS subsidiary and IIM system may contain unverified and potentially incorrect information on land and lease ownership that some BIA offices may be using to distribute trust fund receipts to account holders. According to a BIA official, some of BIA’s Agency Office staff update IRMS ownership files based on unverified information they have developed because LRIS information is significantly out-of-date. Our September 1994 report stated that without administrative review and final determination and certification of ownerships, there is no assurance that the ownership information in BIA’s accounting system is accurate. Our report also stated that eliminating redundant systems would help to ensure that only official, certified data are used to distribute trust fund revenue to account holders. 
Although Interior formed a study team to develop an IIM subsidiary system plan, the team’s August 1995 report did not include a detailed systems plan. Further, BIA and OTFM have not yet performed an adequate user needs assessment; explored the costs and benefits of systems options and alternatives; or developed a systems architecture as a framework for integrating trust fund accounting, land and lease ownership, and other trust fund and asset management systems. However, even if OTR resolves its ownership determination and recordkeeping backlogs and OTFM acquires reliable IIM and subsidiary accounting systems, IIM accounting will continue to be problematic due to fractionated ownerships. Under current practices, fractionated ownerships, which result from inheritances, will continue to complicate ownership determinations, accounting, and reconciliation efforts because of the increasing number of ownership determinations and trust fund accounts that will be needed. Our April 1994 testimony stated that BIA lacked an accounts receivable system. Interior officials told us that developing an accounts receivable system would be problematic because BIA does not have a master lease file as a basis for determining its accounts receivable. As a result, BIA does not know the total number of leases that it is responsible for managing or whether it is collecting revenues from all active leases. BIA has not yet begun to plan for or develop a master lease file. In addition, BIA and OTFM have not developed a comprehensive set of trust fund management policies and procedures. Comprehensive written policies and procedures, if consistently implemented, would help to ensure proper trust fund accounting practices. Also, to encourage consistent implementation of policies and procedures, quality assurance reviews and audits are an important tool. In 1994, OTFM developed a plan to contract for investment custodian and advisor services. 
These initiatives were planned for implementation in fiscal year 1995. However, OTFM has delayed its contract solicitation for investment custodian services until the end of June 1996 and has only recently begun to develop a contract solicitation for investment advisors. OTFM officials told us that a lack of resources has caused them to delay contracting for these services. Since 1991, our testimonies and reports have called for Interior to develop a comprehensive strategic plan to guide trust fund management improvements across Interior agencies. We have criticized Interior’s past planning efforts as piecemeal corrective action plans which fell short of identifying the departmentwide improvements needed to ensure sound trust fund management. Our June 1992 and September 1994 reports and our April 1994 testimony recommended that Interior’s strategic plan address needed improvements across Interior agencies, including BIA, BLM, and MMS. We endorsed the American Indian Trust Fund Management Reform Act of 1994, which established a Special Trustee for American Indians reporting directly to the Secretary of the Interior. The act made the Special Trustee responsible for overseeing Indian trust fund management across these Interior agencies and required the Special Trustee to develop a comprehensive strategic plan for trust fund management. The Senate confirmed the appointment of the Special Trustee for American Indians in September 1995. In February 1996, the Special Trustee reported that the $447,000 provided for his office for fiscal year 1996 is insufficient to finance the development of a comprehensive strategic plan for trust fund financial management. Despite the funding limitations, using contractor assistance, the Special Trustee has prepared an initial assessment and strategic planning concept paper. 
However, the concept paper focuses on one potential system solution for addressing critical OTFM and BIA financial management information requirements and does not address other alternatives. It also does not address programs across Interior agencies or all needed improvements. In addition, the concept paper does not explain the rationale for many of the assumptions that support its $147 million estimate of the cost to implement the specified improvements. In contrast to the concept paper, a comprehensive strategic plan would reflect the requirements of the Department, BIA, BLM, MMS, OTFM, and other Interior agency Indian trust programs. It would also address the relationships of the strategic plans for each of these entities, including information resource management, policies and procedures, and automated systems. In addition, a comprehensive strategic plan would address various trust fund related systems options and alternatives and their associated costs and benefits. For example, the concept paper proposes acquiring new trust fund general ledger and subsidiary accounting systems but, unlike a strategic plan, it does not analyze the costs, benefits, advantages, and disadvantages of enhancing OTFM's current core general ledger and investment system. Further, since 1993, OTR has been planning for LRIS upgrades, including automated chain-of-title, which would facilitate ownership determinations and recordkeeping. Because it is planned that LRIS will provide a BIA link to Interior's core Automated Land and Mineral Record System (ALMRS), a comprehensive strategic plan would need to consider the merits of LRIS in determining how trust ownership and accounting information needs can best be addressed. ALMRS is being developed by BLM at an estimated cost of $450 million. 
Because ALMRS and LRIS were costly to develop and they contain interrelated data, a comprehensive strategic plan would also need to consider the advantages and disadvantages of linking LRIS to the trust fund accounting system, as compared with acquiring a new land records and ownership system, in determining the best way to manage Indian trust funds and assets. The Special Trustee and OTFM Director told us that they currently lack the resources to adequately plan for and acquire needed trust fund system improvements. However, without accurate, up-to-date ownership and subsidiary accounting information, trust fund account statements will continue to be unreliable. The Special Trustee told us that due to limited resources and the need for timely solutions, he is considering ways to use changes in policies and procedures to deal with some trust fund problems. Many of the problems identified in his concept paper are not strictly systems problems, and they do not necessarily require systems solutions. We agree that certain changes should be considered that would not require systems solutions. For example, centralizing management functions could help resolve the problems of inconsistent ownership determinations and inconsistent accounting practices. The centralization of some functions, such as handling trust fund collections through lock box payments to banks, could also result in management efficiencies. Similarly, ownership determination and recordkeeping backlogs might be better addressed by centralizing the five Land Title and Records Offices and using contractor assistance or temporary employees until system improvements are in place. Even with centralization of some functions, customer information and services could continue to be provided locally for customer convenience. Although OTFM made a massive attempt to reconcile tribal accounts, missing records and systems limitations made a full reconciliation impossible. 
Also, cost considerations and the potential for missing records made individual Indian account reconciliations impractical. A legislated settlement process could be used to resolve questions about tribal account balances. Three major factors—lack of comprehensive planning, lack of management commitment across the organization, and limited resources—have impeded Interior’s progress in correcting long-standing trust fund management problems. When the trust fund reconciliation project was initiated, it was envisioned that by the time it was completed, adequate organizational structures, staffing, systems, and policies and procedures would be in place to ensure that trust fund accounts were accurately maintained in the future. However, piecemeal planning and corrective actions continue, and Interior still lacks a departmentwide strategic plan to correct trust fund management problems. In addition, while it is critical that all parts of the organization are committed to supporting and implementing trust fund management improvement initiatives, some BIA field offices are continuing to follow improper and inconsistent accounting practices. Given the continuing difficulty in managing a trust program across approximately 60 BIA offices, it is important to consider streamlining options such as centralization of collections, accounting, and land title and recordkeeping functions. Finally, Interior and BIA officials told us that they lack the resources to implement many needed corrective actions. However, the development of a comprehensive strategic plan that addresses interrelated functions and systems, identifies costs and benefits of options and alternatives, and establishes realistic milestones is a necessary first step. A departmentwide plan would provide the basis for management and congressional decisions on requests for resources. Mr. Chairman and Mr. Vice Chairman, this concludes my statement. 
I would be glad to answer any questions that you or the Members of the Committee might have.

GAO discussed the Department of the Interior's efforts to reconcile Indian trust fund accounts, focusing on: (1) its efforts to implement trust fund management improvements; and (2) the usefulness of a legislated settlement process for resolving unsettled account balances. 
GAO noted that: (1) $2.4 billion in receipt and disbursement transactions could not be traced to supporting documentation at the end of fiscal year 1995; (2) Interior did not disclose the methodology used in the reconciliation process in its reconciliation report, or discuss the extent to which substitutions were made to lease samples; (3) 2 tribes have accepted their account reconciliations, 3 tribes are disputing their reconciliation results, and the remaining 275 tribes are undecided; (4) a legislated settlement process could be used to resolve disputes concerning tribal account balances; (5) this legislation would include a mediation process and, if needed, binding arbitration; (6) Interior's trust fund management and accounting systems controls do not ensure accurate trust fund accounting and asset management; (7) Interior will face costly reconciliations and settlements in the future if it does not correct its trust fund management problems; and (8) Interior needs comprehensive planning, management commitment across all Indian trust program offices, and additional resources to resolve trust fund management problems.
In its role as the nation’s tax collector, IRS is responsible for collecting taxes, processing tax returns, and enforcing the nation’s tax laws. Since 1990, we have designated IRS’s enforcement of tax laws as a governmentwide high-risk area. In attempting to ensure that taxpayers fulfill their obligations, IRS is challenged on virtually every front. IRS’s enforcement workload—measured by the number of tax returns filed—has continually increased, while the number of staff dedicated to collections has not. As of September 30, 2007, IRS’s master file database of taxpayer accounts reflected about $282 billion in outstanding taxes owed by businesses and individuals. This amount understates the true cumulative amount of unpaid taxes. For example, IRS has a statutory limitation on the length of time it can pursue unpaid taxes, generally 10 years from the date of the assessment. After that period, IRS removes the tax debt from its records. Additionally, the amount of unpaid taxes is understated because many tax debts go unidentified and unrecorded on IRS’s tax records due to nonfiling or underreporting of tax liabilities. These unidentified and uncollected taxes are part of IRS’s estimate of the annual tax gap. Therefore, the true cumulative amount of unpaid taxes would be far higher than $282 billion. The amount of unpaid taxes ranges from small amounts owed by individuals for a single tax period to millions of dollars owed by businesses over multiple periods. For businesses, the taxes owed include corporate income, estate, excise, and payroll taxes, as shown in figure 1. The total amount of tax debt includes interest and penalties that are added to or accumulate on the original taxes owed. 
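As a rough illustration of the 10-year collection statute described above, the expiration date can be computed from the assessment date. This is a simplified sketch, not tax guidance: it ignores the events (such as bankruptcy or an offer-in-compromise) that can suspend or extend the collection period, and the function name is our own.

```python
from datetime import date

def collection_statute_expiration(assessed: date) -> date:
    """Simplified sketch: 10 years from the assessment date, ignoring
    events that can suspend or extend the collection period."""
    try:
        return assessed.replace(year=assessed.year + 10)
    except ValueError:
        # Assessment dated February 29; the year 10 years out is not a leap year.
        return assessed.replace(year=assessed.year + 10, day=28)
```

Under this sketch, a tax assessed on September 30, 1997, would generally become uncollectible after September 30, 2007, which is why the $282 billion figure reflects only debt still within the collection window.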
Employers are required to withhold from their employees’ salaries amounts for individual federal income taxes and for Federal Insurance Contribution Act (FICA) taxes, which includes Old-Age, Survivors and Disability Insurance (Social Security) and Hospital Insurance (Medicare Part A) taxes. In 2007, the FICA taxes to be withheld consisted of 6.2 percent of an employee’s gross salary up to $97,500 for Social Security taxes and an additional 1.45 percent of the gross salary for hospital insurance. Employers are also required to match the amounts withheld from an employee’s salary for Social Security and hospital insurance taxes. Taken together, the amounts withheld from an employee’s salary for federal individual income and FICA taxes, along with the employer’s matching portion of FICA taxes, comprise the business’s payroll taxes. Employers are generally required to remit payroll taxes periodically through the Federal Tax Deposit system. The frequency of those deposits depends on the amount of taxes due and the frequency of the employer’s payroll. Employers must remit payroll taxes either (1) semiweekly if their total tax liability is more than $50,000 during a 12-month period ending June 30 of the prior year or (2) monthly if their total tax liability is $50,000 or less during this same 12-month period. The business tax liability is reported to IRS either quarterly on Form 941 or annually on Form 944. Additionally, employers are required to report employees’ earnings to the Social Security Administration annually. When a business files a tax return indicating that it owes more in payroll taxes than it has deposited, IRS records or assesses the tax liability in its systems. IRS can also identify and assess tax liabilities through its enforcement efforts, such as its examination or nonfiler programs. 
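The 2007 withholding arithmetic and the deposit-frequency rule described above can be sketched in a few lines. This is an illustrative simplification using only the figures cited in the text (6.2 percent Social Security up to the $97,500 wage base, 1.45 percent Medicare with no wage cap, an equal employer match, and the $50,000 lookback threshold); the function names are ours, and actual payroll rules have additional details.

```python
SS_RATE = 0.062          # Social Security (OASDI) rate, 2007
SS_WAGE_BASE = 97_500    # Social Security wage base, 2007
MEDICARE_RATE = 0.0145   # Hospital Insurance (Medicare Part A) rate

def fica_withholding(gross_wages: float) -> dict:
    """Employee FICA withholding; the employer owes a matching amount."""
    ss = SS_RATE * min(gross_wages, SS_WAGE_BASE)
    medicare = MEDICARE_RATE * gross_wages  # no wage cap on Medicare
    return {"social_security": ss, "medicare": medicare}

def total_payroll_tax(gross_wages: float, income_tax_withheld: float) -> float:
    """Employee withholdings plus the employer's matching FICA portion."""
    w = fica_withholding(gross_wages)
    employee_share = income_tax_withheld + w["social_security"] + w["medicare"]
    employer_match = w["social_security"] + w["medicare"]
    return employee_share + employer_match

def deposit_frequency(lookback_liability: float) -> str:
    """Deposit schedule per the $50,000 lookback-period rule described above."""
    return "semiweekly" if lookback_liability > 50_000 else "monthly"
```

For a $100,000 salary, for example, Social Security withholding stops at the wage base while Medicare withholding applies to the full amount, and the employer match doubles the FICA portion of the payroll tax deposit.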
Once payroll tax debt is assessed and recorded in its database of unpaid taxes, IRS has a number of collection tools at its disposal to attempt to collect from tax debtors who do not voluntarily comply with the tax laws. Each case has unique aspects and therefore may require varying collection methods. However, for payroll tax cases, IRS generally follows a three-step collection process. Step 1—Notification of tax debt—Once a business fails to remit taxes owed, IRS sends the business a series of notice letters. Business tax debt typically stays in the notification phase about 15 weeks. Step 2—Assignment for collection—After tax debt leaves the notice phase, it may be placed in a queue awaiting assignment to collection personnel. If a tax debtor already has tax debt being worked on by collections personnel, it will generally bypass the queue and be assigned directly to the collection officer already working to collect the other tax debt. When a case leaves the queue and is assigned to the field for collections, it is first assigned to a manager. The manager has a waiting list of cases held for assignment to individual revenue officers. A case may be assigned to the field but not be actively worked on because it is awaiting assignment by the manager. Step 3—Collection actions—IRS pursues collection of taxes owed either through direct contact by revenue officers in the field (referred to as the collection field function) or through calls and correspondence by IRS’s Automated Collection System (ACS). IRS’s ACS process consists primarily of telephone calls to the tax debtor through IRS’s nationwide network of call centers. ACS generally handles less complex and lower priority taxes. Because IRS has designated the collection of payroll taxes as one of its top priorities, payroll tax cases generally do not go through the ACS process. Also, although cases may move through the steps sequentially, it is not necessary that they do so. 
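The three-step flow just described can be summarized as a small state model. This is our own simplified abstraction, not an IRS system; it captures two points from the text: cases can cycle between the queue and the field, and payroll cases generally bypass ACS.

```python
# Simplified model of the collection flow described above (our abstraction).
ALLOWED = {
    "notice": {"queue", "field", "acs", "resolved"},  # Step 1: notice letters
    "queue":  {"field", "resolved"},                  # Step 2: awaiting assignment
    "field":  {"queue", "resolved"},                  # Step 3: revenue officer contact
    "acs":    {"queue", "resolved"},                  # Step 3: call-center collection
}

def next_states(state: str, payroll_case: bool) -> set:
    """Possible next states for a case; payroll cases generally skip ACS."""
    states = set(ALLOWED[state])
    if payroll_case:
        states.discard("acs")
    return states
```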
Cases begin in the notice phase, but they may enter the queue or field collection repeatedly. IRS has numerous enforcement tools that it can use when businesses fail to remit payroll taxes as required. IRS’s tools begin with a series of letters sent to the business in the notice phase to encourage voluntary compliance, which, if not accomplished, can lead to the use of increasingly more aggressive or invasive tools, including filing liens or seizing business assets, and filing for court-ordered injunctive relief. Once assigned a tax debt for collection, the revenue officer will seek full payment from the tax debtor. If the tax debtor is unable to pay in full, the revenue officer will seek to get the debtor to agree to a repayment plan, either an installment agreement or an offer-in-compromise. In general, the revenue officer will seek to get the tax debtor to become compliant and voluntarily pay the tax debt without IRS having to take more intrusive collection actions. In fiscal year 2007, IRS collected over $17 billion of all types of taxes from almost 3 million tax debtors through installment agreements. If, however, a tax debtor fails to agree to voluntarily pay the tax debt, IRS can increase the invasiveness of its collection efforts and use its three primary tools to achieve compliance and tax collection: lien, levy, or seizure. If those are not successful at bringing a tax debtor into compliance, in certain circumstances, IRS can seek injunctive relief to close a noncompliant business or seek criminal prosecution for failing to pay payroll taxes, particularly if there are indications of fraud. An overview of each of these tools follows. Among IRS’s tools to collect outstanding taxes is its ability to use the property of a taxpayer as security for an outstanding tax debt. This is accomplished by filing a notice of federal tax lien. 
The lien serves to protect the interest of the federal government and as a public notice to current and potential creditors of the government’s interest in the taxpayer’s property. Although the tax lien exists under the law even before a notice is filed, the lien is perfected when IRS provides notice of its interest by filing the lien with a designated office, such as a local courthouse in the county where the taxpayer’s property is located. If IRS does not file a Notice of Federal Tax Lien (NFTL) with a state or local recording office where the taxpayer’s property is situated, the federal government will have a junior position relative to other creditors who have perfected their judgments or security interests. IRS reported filing more than 680,000 tax liens in fiscal year 2007. Since a lien encumbers taxpayer property and because federal tax liens appear on commercial credit reports, IRS’s ability to file a lien is a powerful tool in enforcing the tax laws. Filing a lien prevents the taxpayer from selling an asset with clear title without first paying off the outstanding tax debt. Levies are legal seizures of tax debtors’ assets to satisfy tax delinquencies. A levy is different from a lien in that a lien is a claim used as security for the tax debt, while a levy actually takes the property to satisfy the tax debt. Generally, IRS is authorized to levy property of the tax debtor in the possession of a third party, such as bank accounts, federal payments, and wages. IRS records indicate that it filed over 3.7 million levy actions against tax debtors for property held by third parties in fiscal year 2007. IRS also may seize and sell real or personal property held directly by the tax debtor, such as business assets like business equipment, cars, or paintings. 
However, under reforms put in place under the Internal Revenue Service Restructuring and Reform Act of 1998 (RRA), IRS cannot seize assets before determining whether the tax debtor has equity in the property subject to seizure. For example, if an asset is fully encumbered with commercial loans, IRS may not seize the asset. Although IRS records indicate that the number of actions to seize and sell assets held by the tax debtor has been steadily rising over the past several years, reaching 676 seizure actions in fiscal year 2007, the number is far below the over 10,000 seizure actions taken in 1997 prior to the enactment of RRA. In addition to actions it can take to collect unpaid taxes, IRS can also take action to attempt to stop businesses from continuing to accumulate unpaid taxes. One tool IRS has is injunctive relief. Injunctive relief is a court ordered “prohibition of an act.” If the act, or practice covered under the court order continues, the business can be found in contempt of court, and IRS can force it to cease operations. The IRM states that injunctive relief is an “extraordinary remedy” used only if previous actions have either been exhausted or it would have been futile to continue. Injunctive relief can be an important tool for IRS when businesses have no equity and therefore are impervious to seizure actions. To obtain an injunctive relief order, IRS must demonstrate to the court the (1) tax debtor’s persistent failure to comply with the law despite IRS’s repeated efforts to bring the tax debtor into compliance and (2) likelihood of future violations (i.e., the tax debtor will continue to accumulate tax debt). To gain an injunction, IRS first issues a letter to the tax debtor that includes strong language, including threats of criminal prosecution for failure to comply. 
The IRM notes that before seeking injunctive relief, the revenue officer should require the business to (1) file monthly employment tax returns (instead of quarterly), (2) establish a separate bank account for payroll taxes withheld, and (3) make all payroll tax deposits to that account within 2 days of paying employees. Although the willful failure to remit payroll taxes is a felony, IRS generally does not pursue a criminal prosecution unless fraud can be determined. In the past, we have reported that some IRS employees believe IRS and the District Counsel are reluctant to pursue prosecution against even egregious offenders. When businesses withhold funds from an employee’s salary for federal income taxes and the employee’s FICA obligations, they are deemed to have a fiduciary responsibility to hold these amounts “in trust” for the federal government. To the extent that the business does not forward withholdings to the federal government, it is liable for these amounts, as well as its matching FICA contribution. Officials of the business can also be held personally liable for payment of the withheld amounts. Under section 6672 of the IRC, individuals who are determined by IRS to be responsible for collecting, accounting for, and paying over payroll taxes and who willfully fail to collect or pay this tax can be assessed a TFRP. To show willfulness, IRS must show that the responsible individual was aware of the outstanding taxes and either deliberately chose not to pay the taxes or recklessly disregarded an obvious risk that the taxes would not be paid. It should be noted that the deliberate intent or desire to defraud the federal government is not necessary for IRS to assess a TFRP. For example, an individual in a business who is responsible for collecting payroll taxes and who decides to pay the business’s monthly rent instead of remitting employee withholdings to the federal government can be found to be acting willfully and thus be assessed a TFRP. 
Typically, these responsible individuals are owners or officers of a corporation, such as a president or treasurer. More than one person may be a “responsible individual” under section 6672, and thus multiple people in the business may be assessed a TFRP. The amounts assessed against each individual can vary depending on an individual’s responsibility to collect payroll taxes and the extent of the willful failure to pay over this tax for multiple periods; however, each responsible individual can be assessed a TFRP for the total amount of the withholdings not paid. Additionally, the business itself is still liable for the entire amount of the unpaid payroll taxes. However, it has long been IRS’s policy to only collect the unpaid tax once. For example, if, after IRS assesses a TFRP against an officer of a corporation, the business pays the entire balance of the unpaid payroll taxes, the officer would no longer be liable for the TFRP assessment. Similarly, if two officers are each assessed TFRPs related to their business covering the same period of unpaid payroll taxes and one of the officers makes a partial payment, the liabilities of both officers, as well as the liability of the business, are to be reduced by the amount of the payment. IRS uses the TFRP as a tool to hold owners and other officials associated with a business individually liable for the business’s failure to remit withheld payroll taxes. As such, the TFRP provides a means for IRS to seek collection from those responsible for failing to remit the withheld payroll taxes even if the business closes. The TFRP may also be used as a compliance tool to deter future non-payment of taxes by the business. TFRP assessments are also subject to the 10-year statutory collection limitation. Employers are required to withhold from their employees’ salaries amounts for both individual federal income taxes and FICA taxes, which include Social Security and Hospital Insurance taxes. 
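The collect-the-tax-once policy described above can be made concrete with a small worked example. The class below is a hypothetical model we constructed from the text, not an IRS system: every responsible individual is exposed to the full unpaid balance, and any payment, by the business or by any assessed individual, reduces the remaining liability of the business and every assessed individual alike.

```python
class TrustFundLiability:
    """Hypothetical model of TFRP exposure under the collect-once policy."""

    def __init__(self, unpaid_withholding: float):
        self.balance = unpaid_withholding  # total withheld but unremitted
        self.responsible = set()           # individuals assessed a TFRP

    def assess_tfrp(self, person: str):
        # Each responsible individual is assessed for the full balance.
        self.responsible.add(person)

    def apply_payment(self, amount: float):
        # One payment reduces everyone's exposure; the tax is collected once.
        self.balance = max(0.0, self.balance - amount)

    def owed_by(self, person: str) -> float:
        return self.balance if person in self.responsible else 0.0
```

So if two officers are each assessed a TFRP on $100,000 of unremitted withholdings and one pays $40,000, both officers (and the business) remain liable for $60,000, mirroring the partial-payment example in the text.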
While the majority of businesses pay the taxes withheld from employees’ salaries as well as the employer’s matching amounts, a significant number of businesses do not. Our review of IRS tax records showed that over 1.6 million businesses owed over $58 billion in unpaid payroll taxes to the federal government as of September 30, 2007. The failure by businesses to remit payroll taxes results in the loss of revenues to the federal government. In addition, it creates a situation in which the General Fund subsidizes the Social Security and Hospital Insurance trust funds to the extent that Social Security and Hospital Insurance taxes owed are not collected. Over time, the amount of this shortfall, or subsidy, is significant. IRS estimated that the General Fund has transferred to the trust funds $44 billion over what IRS collected in self-employment and payroll taxes for the inventory of total unpaid taxes on record as of November 1, 2007. The estimate does not include an estimate for tax debts that have been written off of IRS’s tax records in previous years due to expiration of the statutory collection period. As a result of the failure of these businesses to pay payroll taxes, the compliant taxpayer bears an increased burden to fund the nation’s commitments. Although IRS has made the collection of unpaid payroll taxes one of its top priorities, most of the unpaid payroll tax inventory (52 percent, equal to $30 billion) was classified as currently uncollectible by IRS. While IRS has assigned about $7 billion to revenue officers for collection, about $9 billion of unpaid payroll taxes are in a queue awaiting assignment. Our analysis of the unpaid payroll tax inventory shows that the number of businesses with more than 20 quarters of tax debt (5 years of unpaid payroll tax debt) almost doubled between 1998 and 2007. 
Because IRS is statutorily limited in the length of time it has to collect unpaid taxes—generally 10 years from the date the tax debt is assessed— the federal government will lose its right to collect billions of dollars in payroll taxes each year if IRS does not obtain payment from tax debtors before the statutory period for collection expires. Of the $282 billion in cumulative, identified, unpaid taxes owed to the federal government as of September 30, 2007, IRS records show that over $58 billion (over 20 percent) is owed for unpaid payroll taxes. This total includes amounts, earned by employees, that were withheld from their salaries to satisfy their tax obligations, as well as the employers’ matching amounts, but which the business diverted for other purposes. Over 1.6 million businesses have unpaid payroll tax debt. Many of these businesses repeatedly failed to remit amounts withheld from employees’ salaries. For example, 70 percent of all unpaid payroll taxes are owed by businesses with more than a year (4 tax quarters) of unpaid payroll taxes, and over a quarter of unpaid payroll taxes are owed by businesses that have tax debt for more than 3 years (12 tax quarters). Figure 2 shows the total dollar amount of payroll tax debt summarized by the number of unpaid payroll tax quarters outstanding. Much of the unpaid payroll tax debt has been outstanding for several years. As reflected in figure 3, our analysis of IRS records shows that over 60 percent of the unpaid payroll taxes was owed for tax periods from 2002 and prior years. Prompt collection action is vital because, as our previous work has shown, as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. Further, the continued accrual of interest and penalties on the outstanding federal taxes can, over time, eclipse the original tax obligation. Figure 4 shows that over half of the unpaid payroll taxes owed is for interest and penalties on the original tax debt. 
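As a rough illustration of how interest and penalties can come to eclipse the original tax debt (figure 4 shows they account for over half of the unpaid payroll taxes owed), the sketch below applies a simplified accrual model. The rates are assumptions for illustration: a 0.5 percent per month failure-to-pay penalty capped at 25 percent of the tax, and 8 percent annual interest on the full balance. Actual IRS accruals follow the statutory penalty schedules and quarterly-adjusted interest rates, with daily compounding.

```python
# Simplified accrual sketch (assumed rates, not the statutory schedule):
# a monthly failure-to-pay penalty capped at a fraction of the tax, plus
# monthly interest accruing on the tax, accrued penalty, and accrued
# interest combined.

def accrued_additions(principal, months, penalty_rate=0.005,
                      penalty_cap=0.25, monthly_interest=0.08 / 12):
    """Return (penalty, interest) accrued after `months` of nonpayment."""
    cap = penalty_cap * principal
    penalty = 0.0
    interest = 0.0
    for _ in range(months):
        penalty = min(penalty + penalty_rate * principal, cap)
        # interest accrues on tax plus accrued penalty and interest
        interest += (principal + penalty + interest) * monthly_interest
    return penalty, interest

tax = 100_000.0  # hypothetical unpaid payroll tax
for years in (5, 10):
    penalty, interest = accrued_additions(tax, years * 12)
    print(f"{years} years: penalty={penalty:,.0f} interest={interest:,.0f} "
          f"(additions = {(penalty + interest) / tax:.0%} of original tax)")
```

Under these assumed rates, the combined penalty and interest exceed the original tax within roughly a decade of nonpayment, consistent with the report's observation that accruals can eclipse the original obligation over time.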
Using IRS’s database of unpaid taxes, we were able to identify many of the industry types associated with businesses owing payroll taxes. Figure 5 presents the major industries with outstanding unpaid payroll taxes according to IRS records. When businesses fail to remit taxes withheld from employees’ salaries, the payroll tax receipts are then less than the payroll taxes due, and the Social Security and Hospital Insurance trust funds will have less financial resources available to cover current and future benefit payments. However, the trust funds are funded based on wage estimates and not actual payroll tax collections. Therefore, the General Fund transfers to the trust funds amounts that should be collected but are not necessarily collected, resulting in the General Fund subsidizing the trust funds for amounts IRS is unable to collect. As of November 1, 2007, IRS estimated that the amount of unpaid taxes and interest attributable to Social Security and hospital insurance taxes in IRS’s $282 billion unpaid assessments balance was approximately $44 billion. This estimate represents a snapshot of the amount that needed to be provided to the Social Security and Hospital Insurance trust funds based on the outstanding tax debt on IRS’s books at the time. It does not include an estimate for tax debts that have been written off of IRS’s tax records in previous years due to expiration of the statutory collection period. Recent IRS data indicate that the shortfall is about $2 billion to $4 billion annually due to uncollected payroll taxes. Of the $58 billion in unpaid payroll taxes as of September 30, 2007, IRS categorized about $4 billion as going through IRS’s initial notification process. The notification process results in significant collections, particularly with respect to generally compliant taxpayers who respond to the notices by paying off the outstanding taxes owed or entering into installment agreements to pay off the tax debt over time. 
IRS records indicate that over half of all unpaid tax collections result from the notification process. Because IRS has made the collection of payroll taxes one of its highest priorities, once a case completes the notification process, it is generally sent to IRS’s field collections staff for face-to-face collection action. However, IRS does not have sufficient resources to immediately begin collection actions against all of its high-priority cases. As a result, IRS holds a large number of cases in a queue awaiting assignment. Of the $54 billion in unpaid payroll taxes that had completed the notification process, about $7 billion was being worked on by IRS revenue officers for collection and about $9 billion was in a queue awaiting assignment for collection action. Most of the unpaid payroll tax inventory was classified as currently uncollectible by IRS. As shown in figure 6, IRS considered $30 billion—52 percent of all payroll tax debt—to be currently not collectible. IRS classifies tax debt cases as currently not collectible for several reasons, including (1) the business owing the taxes is defunct, (2) the business is insolvent after bankruptcy, or (3) the business is experiencing financial hardship. As shown in figure 7, of those unpaid payroll tax cases IRS has classified as currently not collectible, almost two-thirds were as a result of a business being defunct. Although IRS has taken a number of steps to improve collections by prioritizing cases with better potential for collectibility, the collection of payroll taxes remains a significant problem for IRS. From 1998, when we performed our last in-depth review of payroll taxes, to September 2007, we found that while the number of businesses with payroll tax debt decreased from 1.8 million to 1.6 million, the balance of outstanding payroll taxes in IRS’s inventory of tax debt increased from about $49 billion to $58 billion. 
Our analysis of the unpaid payroll tax inventory shows that the number of businesses with more than 20 quarters of tax debt (5 years of unpaid payroll tax debt) almost doubled between 1998 and 2007, from just over 5,000 businesses in 1998 to over 10,000 as of September 30, 2007. The number of businesses that had not paid payroll taxes for over 40 quarters (10 years or more) during this period also almost doubled, from 86 businesses to 169 businesses. These figures are shown in table 1. As discussed previously, IRS is statutorily limited in the length of time it has to collect unpaid taxes—generally 10 years from the date the tax debt is assessed. Once that statutory period expires, IRS can no longer attempt to collect the tax. IRS records indicate that over $4 billion of unpaid payroll taxes will expire in each of the next several years due to this statutory period. Figure 8 shows the amount of unpaid payroll taxes that will statutorily expire and be written off by IRS over the next several years if IRS is unable to collect the taxes. As figure 8 indicates, the federal government will lose its right to collect billions of dollars in payroll taxes each year if IRS does not obtain payment from tax debtors before the statutory period for collection expires. Our audit of payroll tax cases identified several issues that adversely affect IRS’s ability to prevent the accumulation of unpaid payroll taxes and to collect these taxes. Foremost is that IRS’s approach focuses on getting businesses—even those with dozens of quarters of payroll tax debt—to voluntarily comply. We found IRS often either did not use certain collection tools, such as liens or TFRPs, or did not use them timely, and that IRS’s approach does not treat the business’s unpaid payroll taxes and responsible party’s penalty assessments as a single collection effort. 
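The statutory expiration dynamic described above can be illustrated with a simple calculation: each assessment generally expires 10 years after its assessment date, so grouping unpaid balances by expiration year shows how much debt the government stands to lose the right to collect each year (the pattern figure 8 depicts). The data below are hypothetical, and the sketch ignores events, such as bankruptcy proceedings, that suspend or extend the collection period.

```python
# Hypothetical sketch of the 10-year collection statute: sum unpaid
# balances by the year in which the right to collect expires.
from collections import defaultdict
from datetime import date

STATUTE_YEARS = 10  # general limit; suspensions can extend it

def expiring_by_year(assessments):
    """assessments: iterable of (assessment_date, unpaid_balance) pairs."""
    totals = defaultdict(float)
    for assessed, balance in assessments:
        # note: date.replace raises ValueError for Feb 29 source dates
        expires = assessed.replace(year=assessed.year + STATUTE_YEARS)
        totals[expires.year] += balance
    return dict(totals)

sample = [
    (date(1998, 4, 15), 2_500_000.0),   # expires 2008
    (date(1998, 10, 31), 1_700_000.0),  # expires 2008
    (date(1999, 1, 31), 3_100_000.0),   # expires 2009
]
print(expiring_by_year(sample))  # {2008: 4200000.0, 2009: 3100000.0}
```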
Additionally, although unpaid payroll taxes are one of its top collection priorities, IRS did not have performance measures to evaluate the collection of unpaid payroll taxes or the related TFRP assessment. Finally, we found some state revenue agencies are using tools to collect or prevent the further accumulation of unpaid taxes that IRS is either legally precluded from using or has not yet developed. As discussed previously, IRS has a number of powerful tools at its disposal to help prevent the accumulation of unpaid taxes and to collect the taxes that are owed. Those tools include the ability to file liens on a tax debtor’s property, levy available funds from bank accounts and other financial sources, and seize and sell property owned by the tax debtor to help satisfy the tax debt. However, even with such tools, we found that some businesses continued to accumulate payroll tax debt for dozens of tax quarters. This is partly because IRS’s approach to collection focuses first on gaining voluntary compliance, even for more egregious payroll tax offenders. IRS acknowledges that in some instances its collection methods do not bring taxpayers into compliance. We have previously reported that IRS subordinates the use of some of its collection tools in order to seek voluntary compliance, and that IRS’s repeated attempts to gain voluntary compliance often result in minimal or no actual collections. Our audit of businesses with payroll tax debt and our analysis of businesses with multiple quarters of unpaid payroll taxes again found revenue officers continuing to work with a business to gain voluntary compliance while the business continued to accumulate unpaid payroll taxes. As discussed earlier, our analysis of IRS’s inventory of unpaid payroll taxes found that over 10,000 businesses owed payroll taxes for 20 or more quarters—5 years or more.
One of our case studies illustrates the extent to which unpaid payroll taxes can accumulate using a voluntary compliance approach for unpaid payroll taxes. In this case, the business was opened in 1994, after its owner closed a similar business that owed payroll taxes. From its inception, the case study business was not compliant with tax laws, making some tax payments, but not filing any of the required tax returns. In July 1999, IRS identified that the business was not filing its required payroll tax returns and assigned the case to a revenue officer for investigation. After working with the business for 5 months, the revenue officer secured 22 quarters of delinquent payroll tax returns. Those returns indicated a total tax debt, including interest and penalties, of almost $500,000. In March 2000, the business requested to be put on an installment agreement to repay over time the known outstanding taxes it owed. However, the business was not eligible for an installment agreement because it was not compliant with its filing requirements. The revenue officer worked with the business for another 9 months attempting to obtain the financial information needed to initiate an installment agreement. Meanwhile, the business continued to accumulate unpaid payroll tax debt of about $20,000 each quarter. The revenue officer continued to work with the business to gain voluntary compliance, but the business did not provide the needed financial information until the revenue officer filed levies against the business’s known bank accounts in early 2001. The levies resulted in collections of less than $5,000 toward the unpaid tax debt. After 2-1/2 more years, in August 2003, the revenue officer noted that, though IRS had been seeking compliance for several years, the business was still not compliant with filing requirements, had not provided current financial information, and was generally unresponsive. 
Although the revenue officer continued to obtain some delinquent tax returns and some payroll tax payments as a result of the officer’s efforts, the business continued to accumulate additional tax debt. As of July 2007, the business had accumulated payroll taxes from over 30 quarters totaling almost $1 million, and other taxes, including business income taxes, of almost $400,000. Those unpaid taxes stretch back to the inception of the business in 1994. Additionally, the business has not filed required payroll tax returns since the fourth quarter of 2004—potentially accruing a quarter million dollars in additional unpaid payroll tax debt. Failing to take more aggressive collection actions against businesses that repeatedly fail to remit payroll taxes has an impact that extends beyond a single business. If left to accumulate unpaid payroll taxes, businesses gain an unfair business advantage over their competitors at the expense of the government. As we have found previously, in at least one of our case study businesses, IRS determined that the non-compliant business obtained contracts through its ability to undercut competitors due in part to the business’s reduced costs associated with its non-payment of payroll taxes. Similarly, in another case the revenue officer noted that the business was underbidding on contracts and was using unpaid payroll taxes to offset the business’s losses. Failure to take prompt actions to prevent the further accumulation of unpaid payroll taxes can also have a detrimental impact on the business and the associated owners/officers. As we have reported in the past, non-compliant businesses can accumulate substantial unpaid taxes as well as associated interest and penalties. Over time, these unpaid balances may compound beyond the business’s ability to pay—ultimately placing the business and responsible officers in greater financial jeopardy.
It should be noted that IRS is legally precluded from taking collection actions during certain periods, such as when a tax debtor is involved in bankruptcy proceedings. During those periods, even though IRS may not be able to take collection actions, tax debtors may continue to accumulate additional tax debt. However, IRS’s focus on voluntary compliance has negatively affected IRS’s collection efforts for years. Our current findings on IRS’s focus on voluntary compliance are similar to those of the Treasury Inspector General for Tax Administration (TIGTA) in a study from 8 years ago. In its 2000 study, TIGTA found that revenue officers were focused on IRS’s customer service goals and therefore were reluctant to take enforcement actions. As a result, they continued to work with tax debtors to gain voluntary payment rather than using more aggressive enforcement tools such as levies or seizures. TIGTA found that in almost a third of the 116 cases it reviewed, revenue officers did not file a lien, issue a summons, or levy or seize assets. Revenue officers considered seizing assets in just 3 of the 116 cases, but actually seized assets in just 1 case. TIGTA also reported that, as a result of IRS not taking effective collection actions, the cases it reviewed accrued more unpaid taxes while assigned to revenue officers than the revenue officers were able to collect. Again in 2005, TIGTA reported that IRS allowed tax debtors to continue to delay taking action on their tax debt by failing to take aggressive collection actions. TIGTA found that IRS did not take timely follow-up action for half of the cases for which tax debtors missed specific deadlines. IRS has recently strengthened its IRM to include some specific steps for dealing with businesses that repeatedly fail to remit payroll taxes and to stress the importance of preventing the further accumulation of unpaid payroll taxes.
The revised IRM advises revenue officers to take all appropriate remedies to bring the tax debtor into compliance and that they should consider seizing assets and pursuing TFRP assessments against responsible parties. It is important for IRS to support taxpayers in remaining compliant and to facilitate businesses becoming compliant; however, a primary focus on voluntary compliance can lead to delays in taking stronger actions against flagrant tax debtors who refuse to comply with the tax laws and accumulate dozens of quarters of payroll tax debt. Reluctance to use enforcement tools may, over time, actually diminish voluntary compliance and collections. IRS’s guidance states that businesses that fail to comply with the tax law jeopardize the public perception of tax enforcement, which has a detrimental effect both on compliance and collections. One official from a state taxing authority told us that the state benefited from IRS’s approach because it allowed the state to collect its unpaid taxes from business tax debtors before IRS. In one of our case study businesses, although IRS successfully levied some financial assets, a mortgage holder and state and local officials seized the business’s assets to satisfy the business’s debts. In another case, IRS did not seize assets, but received some collections because local officials seized and sold the business owner’s house. We noted this issue in our previous report on DOD contractors with tax debt. In reviewing specific collection actions taken by IRS, we found that revenue officers often did not timely take basic steps to protect the government’s interest in a tax debtor’s property by filing a lien or to hold the business’s owners and officers personally responsible for willfully failing to remit withheld payroll taxes.
Our analysis indicated that IRS had not filed a lien to protect the government’s interest in a business property in over 30 percent of all payroll tax cases assigned to the field for collection effort. Additionally, our review of recent IRS actions to assess TFRPs against owners/officers of businesses with payroll tax debt found that revenue officers took 40 weeks on average to determine that a TFRP should be assessed and an additional 40 weeks on average to actually assess the penalty. Failure to take timely action to file liens or assess TFRPs has been a long-standing problem. In 2005, TIGTA reported that IRS’s revenue officers often failed to take timely collection actions on payroll tax cases and concluded that not taking timely and aggressive collection actions on cases allowed businesses to continue to accumulate unpaid payroll taxes. IRS’s own analysis of TFRP assessments, also done in 2005, found that less than half of all TFRP cases had a lien filed to protect the interest of the government. Our audit found that for payroll tax debt, one of its highest collection priorities, IRS does not always file liens to protect the government’s interest in property and, when it does so, it does not always do so timely. Our analysis of IRS’s inventory of unpaid payroll taxes as of September 30, 2007, found that IRS had not filed liens on over one-third of all businesses with payroll tax debt cases assigned to the field for collection efforts—over 140,000 businesses. IRS guidance states that filing a lien is extremely important to protect the interests of the federal government, creditors, and taxpayers in general, and that the failure to file and properly record a federal tax lien may jeopardize the federal government’s priority right against other creditors. The ability to file a tax lien in the public records is a powerful tool for IRS. The lien appears on credit reports for both individuals and businesses and can stay there for approximately 10 years.
For an individual, the presence of a tax lien can make it more difficult to obtain credit, in turn making it more difficult to buy a home, rent an apartment, or buy a car. Tax debtors that are able to get credit may have to pay higher interest rates. For businesses, the presence of a tax lien can result in a creditor no longer shipping inventory unless paid for by cash and banks withdrawing lines of credit. This can ultimately cause businesses to fail. Lien filing may also increase the likelihood of collection by IRS. The 2005 IRS study of TFRP cases found that cases where a lien had been filed had higher average payments—about a third more—than cases where a lien had not been filed. Although the IRM does not explicitly state that liens should be filed, it does emphasize the need to do so to protect the interest of the federal government. Because businesses may be highly leveraged or have few tangible assets, the filing of a lien may not always be advantageous to the government; other situations may also make it counterproductive to file a lien. The IRM does allow revenue officers to not file a lien in order to allow a business to obtain a loan or to otherwise continue operating so that the business may become compliant and pay the past due tax debt. However, failure to file a lien can have a negative impact on tax collections. For example, IRS assessed the business owner in one of our case studies a TFRP to hold the owner personally liable for the withheld payroll taxes owed by the business. However, IRS did not assign the assessment to a revenue officer for collection, and thus did not file a Notice of Federal Tax Lien on the owner’s property. Because there was no lien filed, the owner was able to sell a vacation home in Florida and IRS did not collect any of the unpaid taxes from the proceeds of the sale. As in the case above, IRS’s case assignment policy can delay the filing of liens for payroll tax cases.
Because payroll tax cases are one of IRS’s top collection priorities, once the notification process is complete, IRS bypasses its ACS process and routes these cases to revenue officers for collection. However, IRS generally must place cases in a queue until a revenue officer is available to work the cases. Cases can be in the queue for extended periods of time awaiting assignment. For the period that a case is in the queue, revenue officers are not assigned to file liens and take other collection actions. Our analysis found that for the $9 billion of payroll tax cases in the queue awaiting assignment as of September 30, 2007, over 80 percent of the cases did not have a lien filed. As a result, lower priority tax cases that go through the ACS process may have liens filed faster than the higher priority payroll tax cases. IRS has been aware of this issue. Its own study in 2005 found that less than half of payroll tax cases in which IRS assessed the business owner or officer a TFRP had a lien filed to protect the interest of the government, and only 27 percent of TFRP assessments that were under a year old had a lien filed. As the previously discussed case study illustrates, the timeliness of lien filing is critical in such cases to protect the government’s interest in the owner’s personal property and to encourage the owners/officers to make the business compliant. IRS is taking some steps to address these issues. For example, IRS is investigating the feasibility of routing payroll tax cases that might otherwise be sent to the queue through the ACS process to have a lien filed. Additionally, in recent years IRS has begun to include in the IRM timeliness guidelines for the use of certain collection tools, including lien filings. The IRM now calls for revenue officers to make a determination to file a lien within 10 days of initial contact. These are positive steps that could help improve the timeliness of IRS’s lien filings in the future.
However, while not all cases warrant having a lien filed, our analysis has shown that, overall, 60 percent of all unpaid payroll tax cases currently in IRS’s inventory do not have a lien filed to protect the government’s interest in tax debtors’ property. Although IRS has a powerful tool to hold responsible owners and officers personally liable for unpaid payroll taxes through assessing a TFRP, we found that IRS often takes a long time to determine whether to hold the owners/officers of businesses personally liable and, once the decision is made, to actually assess penalties against them for the taxes. In reviewing the sample of TFRP assessments selected as part of our audit of IRS’s fiscal year 2007 financial statements, we found that from the time the tax debt was assessed against the business, IRS took over 2 years, on average, to assess a TFRP against the business owners/officers. We found that revenue officers, once assigned to a payroll tax case, took an average of over 40 weeks to decide whether to pursue a TFRP against business owners/officers and an additional 40 weeks on average to formally assess the TFRP. For 5 of the 76 sampled cases, IRS took over 4 years to assess the TFRP. We did not attempt to identify how frequently IRS assesses a TFRP against responsible owners/officers. However, in TIGTA’s 2005 report on its review of IRS’s collection field function, it noted that for cases where a TFRP was applicable, revenue officers did not initiate or conduct the interview to begin the TFRP process in over a quarter of the cases TIGTA reviewed. The timely assessment of TFRPs is an important tool in IRS’s ability to prevent the continued accumulation of unpaid payroll taxes and to collect these taxes. Once a TFRP is assessed, IRS can take action against both the owners/officers and the business to collect the withheld taxes. 
For egregious cases, such as some of those in our case studies, taking strong collection actions against the owners’ personal assets may be the best way to either get the business to become tax compliant or to convince the owners to close the business, thus preventing the further accumulation of unpaid taxes. Failure to timely assess a TFRP can result in businesses continuing to accumulate unpaid payroll taxes and lost opportunities to collect these taxes from the owners/officers of the businesses. For example, one business had tax debt from 2000, but IRS did not assess a TFRP against the business’s owner until the end of 2004. In the meantime, the owner was drawing an annual salary of about $300,000 and had sold property valued at over $800,000. Within 1 month of IRS assessing the TFRP, the owner closed the business, which by then had accumulated about $3 million in unpaid taxes. Lack of timeliness in assessing TFRPs has been a long-standing problem for IRS. Our annual audit of IRS’s financial statements in the late 1990s identified this problem, and we made recommendations for IRS to analyze and determine the factors causing delays in both processing and recording TFRP assessments. Although IRS has taken many steps to improve the timeliness of TFRP assessments, such as centralizing TFRP assessment processing and implementing a new Web-based application, these actions have not been fully effective in resolving this issue. During our audit of IRS’s fiscal year 2007 financial statements, we continued to find long delays in IRS’s processing and posting of TFRP assessments. For most of the time our case study businesses were being worked on by revenue officers, the IRM required them to make a determination of whether to pursue a TFRP assessment within 180 days—about 26 weeks.
However, the IRM was silent about how long it should take to actually assess the TFRP once revenue officers determined that the failure by the responsible individuals to remit payroll taxes was willful. Additionally, although IRS had a 180-day requirement to make a determination, revenue officers could make the determination to delay the assessment, thus making a timely determination while still not moving forward to formally assess the TFRP against the responsible individuals. In September 2007, IRS implemented new IRM requirements to address the timeliness of TFRP assessments. Under the new policy, revenue officers are now required to make the determination as to whether to pursue a TFRP within 120 days of the case being assigned and to complete the assessment within 120 days of the determination. However, the revised IRM maintains the provision to allow the revenue officer, with manager authorization, to delay the TFRP determination. Additionally, the IRM does not include a requirement for IRS to monitor the new IRM standards for assessing TFRPs. IRS assigns a higher priority to collection efforts against the business with unpaid payroll taxes than against the business’s responsible owners/officers. Further, it treats the TFRP assessments as a separate collection effort unrelated to the business tax debt, even though the business payroll tax liabilities and the TFRP assessments are essentially the same tax debt. As a result, once the revenue officer assigned to the business payroll tax case decides to pursue a TFRP against the responsible owners/officers, the TFRP case does not automatically remain with this revenue officer. Accordingly, IRS often does not assign the TFRP assessment to a revenue officer for collection, and when it does, it may not assign it to the same revenue officer that is responsible for collecting unpaid taxes from the business. 
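The September 2007 IRM timeliness standards described above reduce to two 120-day windows, which can be computed from the case dates. The sketch below is a hypothetical illustration of that arithmetic; the dates are invented, and it does not model the manager-authorized delays the IRM still permits.

```python
# Hypothetical tracker for the two 120-day windows in the September 2007
# IRM standards: a TFRP determination is due within 120 days of case
# assignment, and the assessment within 120 days of the determination.
from datetime import date, timedelta

DETERMINATION_WINDOW = timedelta(days=120)
ASSESSMENT_WINDOW = timedelta(days=120)

def tfrp_deadlines(assigned, determined=None):
    """Return (determination_due, assessment_due or None if undetermined)."""
    determination_due = assigned + DETERMINATION_WINDOW
    assessment_due = determined + ASSESSMENT_WINDOW if determined else None
    return determination_due, assessment_due

# Illustrative case: assigned October 1, 2007; determination made January 15, 2008.
det_due, assess_due = tfrp_deadlines(date(2007, 10, 1), determined=date(2008, 1, 15))
print(det_due, assess_due)  # 2008-01-29 2008-05-14
```

A tracker like this also makes visible the gap the report identifies: without monitoring against these due dates, the new standards cannot be enforced.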
In reviewing the sample of TFRP assessments selected as part of our audit of IRS’s fiscal year 2007 financial statements, we found that half of the TFRP assessments had not been assigned to a revenue officer by the time of our audit. Of those that had been assigned, over half of the TFRP assessments had not been assigned to the same revenue officer that was working the related business case. Assigning the collection efforts against the business and the TFRP assessments to different revenue officers can result in the responsible owners/officers being able to continue to use the business to fund a personal lifestyle while not remitting payroll taxes. For example, in one of our case studies the owner was assessed a TFRP, but continued to draw a six-figure income while not remitting amounts withheld from the salaries of the business’s employees. In contrast, having either a single revenue officer assigned or coordinating the efforts of multiple revenue officers could provide IRS with several advantages, including the following: For egregious cases, taking strong collection actions against the owner’s personal assets may be a more effective means of either getting the business to be compliant or convincing the owner to close the unprofitable business to prevent the further accumulation of unpaid payroll taxes. Assigning a single revenue officer could expedite the assignment of TFRP assessments and collection efforts against those cases. For example, the owner of one of our case study businesses was assessed a TFRP, but since the TFRP had a lower priority, it was sent to the queue. Because the case had not been assigned, IRS did not file a tax lien against the owner of the business and thus the assessment of the TFRP had very little impact. Additionally, since IRS has a statutory time limitation to collect against a tax debt, this owner was almost halfway through the statutory period before the case was ever worked on.
Assigning a single revenue officer could help improve IRS’s ability to ensure assessments are made, transaction codes are input, and collections are properly posted against trust fund amounts to all related parties, a long-standing problem identified as a part of our financial statement audits. IRS collection officials said the agency categorizes the unpaid payroll tax debt of the business as a high priority to ensure that higher-level revenue officers are assigned mainly to the more complex business cases. IRS may also assign the business payroll tax debt and the TFRP assessment to different collection officials because the business and the responsible owners/officers are not located in the same zip code area. For example, if an officer is in a different state than the business, the collection efforts would be handled by separate officials to facilitate face-to-face collection efforts and to allow the revenue officer to physically go to courthouses to perform property searches. IRS collection officials also stated that attempting to assign the same revenue officer both the TFRP assessments and the business payroll tax case for collection would overload the revenue officers with work and result in fewer high-priority payroll tax cases being worked on. This view, however, stems from separating the collection efforts of the business and the individual and not considering the business’s unpaid payroll taxes and the TFRP assessment as a single case. In essence, the TFRP assessment is the same tax debt as the business’s payroll tax debt; the assessment is merely another means through which IRS can attempt to collect the monies withheld from a business’s employees for income, Social Security, and hospital insurance taxes that were not remitted to the government. 
This view that the payroll tax debt and the TFRP assessment are essentially the same tax debt is reinforced by IRS’s own practice of crediting all related parties’ accounts whenever a collection is made against either assessment. Prior studies have found that IRS’s practice of assigning TFRP assessments a lower priority than business cases has not been very successful in collecting the unpaid taxes. In its own August 2005 study, IRS reported that it had assessed over $11.9 billion in TFRP assessments (including interest) between 1996 and 2004, yet had collected only 8 percent of those assessments. IRS reported that for those assessments made in 1996, for which IRS had been attempting collection for at least 8 years, the collection rate was only 13 percent. For all responsible owners/officers who were assessed a TFRP, 43 percent never made a payment on their trust fund penalty. IRS reported that of those TFRP assessments that had been resolved, almost half were resolved in the first year of the assessment, and almost 93 percent were resolved in the first 4 years. IRS policies have not resulted in effective steps being taken against egregious businesses to prevent the further accumulation of unpaid payroll taxes. Our audit found thousands of businesses that had accumulated more than a dozen tax quarters of unpaid payroll tax debt. The IRM states that revenue officers must stop businesses from accumulating payroll tax debt, and instructs revenue officers to use all appropriate remedies to bring the tax debtor into compliance and to immediately stop any further accumulation of unpaid taxes. It further states that if routine case actions have not stopped the continued accumulation of unpaid payroll taxes, revenue officers should consider seizing the business’s assets or pursuing a TFRP against the responsible parties. However, IRS successfully pursued fewer than 700 seizure actions in fiscal year 2007. 
We were unable to determine how many of those seizure actions were taken against payroll tax debtors. Regarding TFRPs, as discussed previously, IRS does not always assess TFRPs in a timely manner, nor does it prioritize the TFRP assessment against the owner as highly as it does the business payroll taxes. This can result in little collection action being taken against the parties responsible for the failure to remit withheld payroll taxes. When a business repeatedly fails to comply after attempts to collect, the IRM states that the business should be considered an egregious offender and IRS should take aggressive collection actions, including threats of legal action that can culminate in court-ordered injunctions for the business to stop accumulating unpaid payroll taxes or face business closure. However, IRS obtained fewer than 10 injunctions in fiscal year 2007 to stop businesses from accumulating additional payroll taxes. Revenue officers we spoke to believe the injunctive relief process is too cumbersome to use effectively in its present form. One revenue officer stated that because of the difficulty in carrying out the administrative and judicial process to close a business through injunctive relief, he had not attempted to take such action in over a decade. We have reported in the past that the U.S. Attorney’s Office and the District Counsel prefer not to seek such injunctions due to the time and expense required to prosecute these cases. IRS is taking some action to attempt to address this issue by piloting a Streamline Injunctive Relief Team to identify cases and develop procedures to quickly move a case from administrative procedures to judicial actions. These procedures will be used for the most egregious taxpayers when the revenue officer can establish that additional administrative procedures would be futile. 
Similar to IRS, all of the state tax collection officials we contacted told us that their revenue department’s primary goal was to prevent businesses from continuing to flout tax laws and to stop them from accumulating additional tax debt. They said that after a business had been given a period of time to comply with its current tax obligations and begin paying past taxes, state tax collection officials changed their focus to one of “stopping the bleeding.” As such, some have made the policy decision to seek to close non-compliant businesses, as discussed in the following two examples. One Georgia state official we spoke to said the state had passed laws to allow businesses to be closed through administrative procedures within the department of revenue without judicial intervention. The procedure is tied to the state’s ability to seize the assets of the business. The state may seize the assets of businesses that do not comply with their tax obligations as a means of closing the business to prevent the further accumulation of unpaid taxes, even if the sale of those assets does not result in collections to reduce the business’s current tax debt. The official we spoke to stated that it is a routine part of the state’s collection arsenal and the state closed several dozen businesses this way in 2007 to prevent the further accumulation of unpaid trust fund taxes. Kentucky developed a procedure to close businesses that does not involve the seizure of the business’s assets. That state centralized the judicial proceedings for closing a business in a single court that is experienced in tax-related injunctions and therefore is willing and able to move through the process quickly. One official told us the state closed about 100 businesses a month through such proceedings to prevent the further accumulation of unpaid payroll tax debt. 
To the extent IRS is not taking effective steps to deal with egregious payroll tax offenders that repeatedly fail to comply with the tax laws, businesses may continue to withhold taxes from employees’ salaries but divert the funds for other purposes. Although IRS has made the collection of unpaid payroll taxes one of its top priorities, IRS has not established goals or measures to assess its progress in collecting or preventing the accumulation of payroll tax debt. Performance measurement and monitoring support resource allocation and other policy decisions to improve an agency’s operations and the effectiveness of its approach. Performance monitoring can also help an agency by measuring the level of activity (process), the number of actions taken (outputs), or the results of the actions taken (outcomes). Although IRS does have a broad array of operational management information available to it, we did not identify any specific performance measures associated with payroll taxes or TFRP assessments. IRS has caseload and other workload reports for local managers (to measure process and outputs); however, these localized reports are not rolled up to a national level to allow IRS managers to monitor the effectiveness or efficiency of its collection and enforcement efforts. Additionally, these operational reports contain information about unpaid payroll tax and TFRP case assignments, but they are used primarily to monitor workload issues, not program effectiveness. For example, IRS has developed some reports that identify “over-aged” cases (those that have not been resolved within a certain length of time) and businesses that continue to accrue additional payroll tax debt, but those reports are designed for workload management. 
To report on its outcomes or the effectiveness of its operations, IRS reports on overall collection statistics and presents that information in the Management Discussion and Analysis accompanying its annual financial statement and in its IRS Data Book. However, IRS does not specifically address unpaid payroll taxes as a part of those discussions. IRS officials stated that they do not have specific lower-level performance measures that target collection actions or collection results for unpaid payroll taxes or TFRP assessments. Such performance measures could be useful to assist IRS in measuring the success of its efforts to collect or prevent the further accumulation of unpaid payroll taxes and to formulate more effective approaches to dealing with this compliance issue. In our discussions with IRS revenue officers concerning some of the egregious payroll tax offenders included in our case studies, they noted that having certain additional tools available to them could allow them to more effectively deal with recalcitrant businesses. Those tools include (1) the ability to publish the names of tax debtors and (2) improved methods of identifying business assets for levy. Revenue officers stated, and we acknowledge, that IRS faces challenges in balancing voluntary compliance with the need to enforce the tax laws. Many businesses have accumulated dozens of tax quarters’ worth of payroll tax debt, sometimes accumulating over a million dollars in unpaid payroll taxes. In those egregious situations, including many of our case studies, IRS’s policy to encourage voluntary compliance and use of available collection tools neither resulted in the collection of the unpaid portion nor prevented the further accumulation of unpaid payroll taxes. As part of our audit, we spoke with a number of state revenue department officials to identify specific collection approaches and tools used by those states to pursue payment of unpaid taxes. 
We found that several states had already developed and were effectively using the types of tools IRS revenue officers said would be beneficial to them. The IRC generally prohibits IRS from publicly disclosing federal tax information without taxpayer consent. Although IRS tax liens are public information, IRS does not centrally publish its lien filings or otherwise make available information about businesses or individuals with tax debt. However, during our discussions, IRS officials told us that being able to do so could increase IRS’s ability to collect payroll tax debts. In contrast, an increasing number of states (at least 19, including New Jersey, Connecticut, Indiana, and California) are seeking to increase tax collections by publicizing the names of those with delinquent tax bills. For example, a recent California law requires the state to publish each year the names of the top 250 personal and corporate state tax debtors with at least $100,000 in state tax debt. The list does not include those who are fighting the tax bills in court, have sought bankruptcy protection, or have set up payment plans with the state. Public disclosure of tax debtors can be very effective. Just threatening to publish the names of tax offenders can bring some into compliance, while actually appearing on a tax offender list can bring about societal pressure to comply. For example, in California 26 tax debtors threatened with public disclosure stepped forward to settle their tax debts and thus avoided appearing on the list. In Connecticut, the state claims the public disclosure of tax debtors has resulted in over $100 million in collections during the first 4 years of the program. The potential public disclosure of tax debtors may also encourage greater tax compliance among the general population of taxpayers to avoid potentially being on the list. 
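The California disclosure rule described above amounts to a simple filter-and-rank over the state's debtor rolls. The following sketch is our illustration only; the records, field names, and exclusion flag are invented, and the actual program's criteria are set by statute:

```python
# Hypothetical sketch of a California-style disclosure list: publish the
# top 250 debtors owing at least $100,000, excluding those in litigation,
# in bankruptcy, or on a payment plan. All data below are invented.
debtors = [
    {"name": "Acme LLC", "debt": 750_000, "excluded": False},
    {"name": "Widget Co", "debt": 120_000, "excluded": True},    # on a payment plan
    {"name": "Roadside Inc", "debt": 95_000, "excluded": False}, # below threshold
    {"name": "BigCorp", "debt": 2_400_000, "excluded": False},
]

TOP_N = 250          # list size under the California law
MIN_DEBT = 100_000   # minimum state tax debt for inclusion

# Keep only eligible debtors, then rank by amount owed, largest first.
eligible = [d for d in debtors if d["debt"] >= MIN_DEBT and not d["excluded"]]
published = sorted(eligible, key=lambda d: d["debt"], reverse=True)[:TOP_N]

print([d["name"] for d in published])  # ['BigCorp', 'Acme LLC']
```

With this toy data, only two of the four debtors qualify: one is excluded for being on a payment plan and one falls below the dollar threshold.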
As discussed previously, IRS has the authority to levy a tax debtor’s income and assets when there is a demand for payment and there has been a refusal or an inability to pay by the taxpayer subject to the levy. Although IRS has this authority, IRS officials stated that they often have difficulty using levies to collect unpaid payroll taxes because, for example, the levy may be made against funds in a bank account at a point in time when little or no funds are available. Additionally, IRS officials told us, and in our case studies we found, that IRS sometimes has difficulty identifying which banks or financial institutions a tax debtor is using. This is the case because tax debtors will often change financial institutions to avoid IRS levies. Once a levy is served against an account, a tax debtor will often close the account and open an account at a different financial institution. IRS must then search for where the tax debtor is now doing business and attempt to serve a new levy. One IRS official stated that IRS may serve levies on multiple banks while searching for the new accounts. Such a process of searching for accounts is very time consuming and burdensome for both the revenue officers and the financial institutions served with the levies. Several states use legal authorities to assist in identifying levy sources. States such as Kentucky, Maryland, Massachusetts, Indiana, and New Jersey have enacted legislation for matching programs or entered into agreements with financial institutions to participate in matching bank account information against state tax debts. This matching allows states to more easily identify potential levy sources and simplifies the financial institutions’ obligations to respond to multiple levies. 
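Conceptually, the state matching programs described above are a join between the tax agency's debtor list and the institutions' account records: only accounts held by known tax debtors come back as potential levy sources. This sketch is hypothetical; the identifiers and data layout are invented, and real programs operate under state statutes and data-sharing agreements:

```python
# Hedged illustration of bank-account matching against a tax-debtor list.
# All identifiers are invented for the example.
state_tax_debtors = {"TIN-001", "TIN-007", "TIN-042"}  # agency's delinquency list

# Accounts reported by a participating financial institution, keyed by
# taxpayer identification number.
bank_accounts = {
    "TIN-007": ["checking-9921"],
    "TIN-042": ["checking-1180", "savings-1181"],
    "TIN-500": ["checking-3005"],  # not a tax debtor; never surfaces as a match
}

# Potential levy sources: accounts whose holders appear on the debtor list.
levy_sources = {
    tin: accounts
    for tin, accounts in bank_accounts.items()
    if tin in state_tax_debtors
}

print(sorted(levy_sources))  # ['TIN-007', 'TIN-042']
```

The point of the design is that a single match run replaces serial "search" levies served on many banks: the agency learns where the debtor currently banks without serving levies on institutions that hold no matching accounts.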
IRS is currently working with at least one state to investigate the potential for this matching, but in our discussions with IRS collection officials, they stated that IRS has not sought legislation or agreements with financial institutions to enhance its levying powers. Our analysis of unpaid payroll tax debt found substantial evidence of abusive and potentially criminal activity related to the federal tax system by businesses and their owners or officers. As noted, over 1.6 million businesses owe unpaid payroll taxes. We identified tens of thousands of businesses that filed 10 or more tax returns acknowledging that the business owed payroll taxes, yet failed to remit those taxes to the government. While much of the tax debt may be owed by those with little ability to pay, some abuse the tax system, willfully diverting amounts withheld from their employees’ salaries to fund their business operations or their own personal lifestyles. In addition to owing payroll taxes for multiple tax periods and accumulating tax debt for years, many of the owners and officers of these businesses are repeat offenders. We identified owners who were involved in multiple businesses, all of which failed to remit payroll taxes as required. For example, in one of our case studies in which a business owed almost $2.5 million, the owner was involved in multiple other businesses, all of which owed unpaid payroll taxes. IRS records indicated that the owner was also underreporting personal income to avoid paying personal income taxes. Additionally, the owner was the subject of at least 10 lawsuits either pending or settled and was involved in possible check kiting and money laundering. In total, IRS records indicate over 1,500 owners/officers had been found by IRS to be responsible for non-payment of payroll taxes at 3 or more businesses, and 18 business owners/officers had been found by IRS to be responsible for not paying the payroll taxes for over 12 separate businesses. 
It should be noted that these numbers represent only those responsible individuals IRS found acted willfully in the non-payment of the businesses’ payroll taxes and who were assessed TFRPs; they do not represent the total number of repeat offenders with respect to non-payment of payroll taxes. Table 2 shows the number of individuals with TFRPs for two or more businesses. Our audits and investigations of the 50 case study businesses with tax debt found substantial evidence of abuse and potential criminal activity related to the tax system; 12 of these case studies follow. All of the case studies involved businesses that had withheld taxes from their employees’ paychecks and diverted the money to fund business operations or for personal gain. Employers are required by law to remit withheld taxes, and the employer’s matching contributions, to IRS or face potential civil or criminal penalties. Although we reviewed tax records and other information for all 50 cases, we performed a more in-depth review of 12 case study businesses for this report. IRS had filed a lien to protect the government’s interests for all of the 12 case studies, and had filed liens for all but 5 of the 38 cases presented in appendix II. Table 3 shows the results of 12 of the case studies we performed. Our audits and investigations of the 50 case study businesses with tax debt, 12 of which are detailed in table 3, showed abuse and potential criminal activity related to the tax system. The following provides some illustrative examples of several of these cases. Case 1: The owner of this automotive firm continued to draw a six-figure income from the business and owned substantial real property while the business accumulated more than $3.5 million in unpaid federal payroll taxes over a 10-year period. For the last decade, this business has withheld taxes from its employees but remitted less than a quarter of the taxes actually owed. 
IRS found the owner of the company willful and responsible for not remitting the taxes, and IRS records indicate the owner avoided paying taxes and trust fund amounts by transferring $1.5 million in property after being assessed the TFRP and selling a personal residence valued at over $600,000. Case 2: This healthcare business, which owes almost $2.5 million of unpaid payroll taxes, repeatedly refused to remit withheld federal payroll taxes, and the officers used the business to pay personal expenses. In addition, IRS records indicated the business’s officers attempted to avoid paying taxes by filing Chapter 11 bankruptcy on three separate occasions, two of which were dismissed. Around the time of the bankruptcy filings, the officers withdrew about $700,000 of cash from the business. IRS found three officers of the business to be willful and responsible for not remitting payroll taxes. Case 6: This consulting business accumulated almost $1.5 million in unpaid federal payroll taxes beginning over 10 years ago and over a half-million dollars in other federal taxes. The owner had multiple businesses that have not filed required tax returns. Additionally, the business owner has not filed personal returns since the early 1990s and owes over $400,000 in personal taxes. The owner received several cash loans from the business while not paying taxes, and business monies were diverted into the owner’s personal bank accounts. This business owner avoided IRS by changing representatives and attorneys, stalling IRS actions through repeated requests for the same information. To avoid collection action, the owner sold assets to a relative after receiving notice that IRS was about to assess a TFRP. Case 7: This manufacturing business owes almost $1.5 million in unpaid payroll taxes for over 40 tax quarters. The owner also underreported tax liabilities and was found willful and responsible for not remitting payroll taxes from two other businesses. 
IRS found that business monies may be flowing into personal accounts, and that the owner has hidden business assets in his own name in order to prevent IRS seizures. The owner also gave business assets to a relative who has used them to start a new business. The owner used appeals and offers in compromise as a means to delay IRS collection efforts, and has already defaulted on an offer in compromise for earlier TFRPs. Case 10: This healthcare business has accumulated over $8 million in unpaid payroll taxes for almost 30 quarters. The owner was convicted of tax fraud. Despite living in a multimillion-dollar home, the taxpayer claimed inability to pay taxes due to financial hardship, and evaded IRS levies by using check cashing businesses and writing checks to himself, even paying himself a salary while incarcerated. Some of the owner’s properties were sold by creditors, and the owner set up a new business in one of the business’s properties bought by a relative. Although other creditors seized and sold property to settle debts, we found no evidence of IRS taking such actions. Case 11: The owners of this construction company accumulated almost $2.5 million in unpaid payroll taxes from over 50 tax quarters (over 12 years of non-payment). The owners also had tax debt from other businesses dating back to the early 2000s. IRS records indicate that the business owners underreported their personal income. Financial records indicate that the owners may be involved in illegal check kiting and money laundering dating back to the late 1990s, have several judgments outstanding, and have at least 10 lawsuits pending or settled. IRS officials indicated that the owners have consistently stalled collection efforts through such means as filing for bankruptcy, which has kept IRS from seizing assets. Businesses that withhold money from their employees’ salaries are required to hold those funds in trust for the federal government. 
Willful failure to remit these funds is a breach of that fiduciary responsibility and is a felony offense. A business’s repeated failure to remit payroll taxes to the government over long periods of time affects far more than the collection of the unpaid taxes. First, allowing businesses to continue to not remit payroll taxes affects the general public perception regarding the fairness of the tax system, which may result in lower overall compliance. Second, because of the failure of businesses to remit payroll taxes, the burden of funding the nation’s commitments, including payments to the Social Security and Hospital Insurance trust funds, falls more heavily on taxpayers who willingly and fully pay their taxes. Third, the failure to remit payroll taxes gives the non-compliant business an unfair competitive advantage because that business can use those funds that should have been remitted for taxes to either lower overall business costs or increase profits. Businesses that fail to remit payroll taxes may also underbid tax-compliant businesses, causing them to lose business and encouraging them to also become non-compliant. Fourth, allowing businesses to continue accumulating unpaid payroll taxes has the effect of subsidizing their business operations, thus enriching tax abusers or prolonging the demise of a failing business. Fifth and last, in an era of growing federal deficits and amidst reports of an increasingly gloomy fiscal outlook, the federal government cannot afford to allow businesses to continue to accumulate unpaid payroll tax debt with little consequence. For these reasons, it is vital that IRS use the full range of its collection tools against businesses with significant payroll tax debt and have performance measures in place to monitor the effectiveness of its actions to collect and prevent the further accumulation of unpaid payroll taxes. IRS has stated that the collection of unpaid payroll taxes is one of its highest priorities. 
However, IRS’s collection philosophy focuses on gaining voluntary compliance, even for recalcitrant businesses that repeatedly fail to remit payroll taxes and whose actions indicate no intention to become compliant. Businesses that continue to accumulate unpaid payroll tax debt despite efforts by IRS to work with them are demonstrating that they are either unwilling or unable to comply with the tax laws. In such cases, because the decision to not file or remit payroll taxes is made by the owners or responsible officers of a business, IRS should consider strong collection action against both the business and the responsible owners and officers to prevent the further accumulation of unpaid payroll taxes and to collect those taxes for which the business and owners have a legal and fiduciary obligation to pay. IRS faces difficult challenges in balancing aggressive collection actions against taxpayer rights and individuals’ livelihoods. However, to the extent IRS does not pursue aggressive collection actions against businesses with multiple quarters of unpaid payroll taxes, IRS is not acting in the best interests of the federal government, the employees of the businesses involved, the perceived fairness of the tax system, or overall compliance with the tax laws. Therefore, it is incumbent upon IRS to revise its approach and develop performance measures to provide for the effective use of the full range of available enforcement tools against egregious offenders to prevent those businesses from continuing to accumulate payroll tax debt. It is also incumbent upon IRS to proactively seek out and appropriately implement other tools (particularly those with demonstrated success at the state level) to enhance its ability to prevent the further accumulation of unpaid payroll taxes and to collect those taxes that are owed. 
Although IRS does need to work with businesses to try to gain voluntary tax compliance, for businesses with demonstrated histories of egregious abuse of the tax system, IRS needs to alter its approach to focus on stopping the further accumulation of unpaid payroll tax debt. To provide better monitoring and more detailed guidance on collection actions to be pursued against egregious payroll tax offenders, to strengthen existing collection tools, and to develop additional enforcement tools to effectively identify potential levy sources, we recommend that the Commissioner of Internal Revenue take the following six actions:

1. Develop a process to monitor collection actions taken by revenue officers against egregious payroll tax offenders to ensure collection actions appropriately utilize all available collection tools contained in the IRM.

2. Review current case prioritization and assignment practices to determine if IRS’s enforcement and collection procedures could be enhanced by requiring, to the maximum extent feasible, that businesses with egregious payroll tax debt and the responsible owners/officers with a TFRP assessment be treated as a single, unified, and coordinated collection effort assigned to a single revenue officer.

3. Develop and implement procedures to expeditiously file a Notice of Federal Tax Lien against property as soon as possible after payroll tax debt is identified (including cases in the queue awaiting assignment) and ensure liens are filed on both businesses with unpaid payroll taxes and owners/officers assessed a TFRP.

4. Develop and implement procedures to monitor and report on revenue officers’ compliance with the new TFRP assessment time frames to ensure revenue officers are making TFRP determinations and assessments in a timely manner.

5. Develop performance goals and measures that specifically evaluate the accumulation of unpaid payroll taxes by businesses (especially egregious businesses with over 20 quarters of payroll tax debt), the extent and timeliness of TFRP assessments, and the effectiveness of actions taken to collect unpaid payroll taxes and TFRP assessments.

6. Work with states that have developed procedures for matching financial accounts to tax debts to evaluate the potential for IRS to either develop and implement similar measures or partner with states that currently have that tool to leverage their efforts to assist revenue officers in identifying a business’s leviable assets.

In commenting on a draft of this report, IRS recognized that all appropriate tools must be used to bring payroll tax offenders into compliance and concurred with all six of our recommendations. IRS noted that it had implemented numerous actions to improve its tax collection processes and procedures as well as to prioritize assignment of cases. It also noted that it continues to explore other opportunities. In particular, IRS cited its projects to increase its focus on businesses that accumulate multiple periods of unpaid payroll taxes and to improve the timeliness of lien filing and TFRP determinations. With respect to our five recommendations for IRS to review or revise its collection policies and to strengthen its existing collection tools to be used in dealing with egregious payroll tax offenders, IRS agreed to evaluate its practices and develop appropriate changes. 
Specifically, IRS agreed to (1) explore the value of using existing data to evaluate collection actions taken by revenue officers, (2) assign a single revenue officer to collect both a business’s egregious unpaid payroll tax debt and the responsible owners/officers with a TFRP assessment when feasible, (3) evaluate its existing practices and determine appropriate changes to its lien filing procedures to allow liens to be filed as soon as a payroll tax liability is identified, (4) consider ways to use its TFRP reports to monitor and report on revenue officers’ compliance with new TFRP assessment time frames, and (5) evaluate the effectiveness and feasibility of establishing performance goals and measures on the timeliness of TFRP assessments. With respect to our recommendation to work with states that have developed procedures for matching financial accounts to tax debts to identify levy sources, IRS agreed and said it would work with those states to determine the effectiveness of their programs and whether a similar program in IRS would be cost effective and consistent with privacy laws. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of the Financial Management Service, the Commissioner of Internal Revenue, and interested congressional committees and members. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3406 or sebastians@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
To identify the magnitude of unpaid payroll tax debt, we obtained IRS’s database of unpaid taxes as of September 30, 2007. We extracted all payroll tax debt from that database and performed analysis to identify the number of businesses with tax debt and the total dollar value of tax debt associated with those businesses. We analyzed and summarized the overall payroll tax debt by the number of tax quarters of payroll tax owed by businesses; the tax period for which the debt was owed; the amount of the tax debt associated with interest, penalties, and assessed taxes; and the collection status of the debt, such as whether it is awaiting assignment, assigned in the field for collections, or coded as being currently not collectible. We also analyzed the tax debt to determine the date on which IRS will be statutorily prohibited from seeking collection from tax debtors and will remove the tax debt from its records. We requested that IRS perform specific data analysis of its tax records to identify amounts that should have been remitted by businesses for those trust funds, but were not, to develop an estimate of the total amount that the General Fund subsidizes the Social Security and Medicare Part A trust funds due to unpaid taxes. To validate IRS’s estimate, we compared that analysis to one prepared by IRS as of September 30, 1998, during one of our previous audits. At that time, IRS estimated the cumulative amount of the subsidy to be $38 billion. Because IRS removes tax debt from its records once the debt’s statutory collection period expires (generally 10 years from the date the tax is assessed), those estimates represented approximately a 10-year subsidy. To further validate the 10-year estimate, we obtained from IRS the annual increase in the subsidy based on unpaid taxes. IRS determined the subsidy to be between $2 billion and $4 billion annually. 
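The cross-check between the cumulative and annual subsidy figures is simple arithmetic. The sketch below is our own illustration of that validation step, using only the figures stated above; it is not IRS's methodology:

```python
# Illustrative consistency check: a roughly 10-year cumulative subsidy
# estimate should fall within 10 times the annual range IRS reported.

ANNUAL_LOW = 2_000_000_000    # $2 billion per year (low end of IRS estimate)
ANNUAL_HIGH = 4_000_000_000   # $4 billion per year (high end of IRS estimate)
COLLECTION_WINDOW_YEARS = 10  # statutory collection period before debt drops off

CUMULATIVE_1998 = 38_000_000_000  # IRS's cumulative estimate as of Sept. 30, 1998

low_bound = ANNUAL_LOW * COLLECTION_WINDOW_YEARS    # $20 billion
high_bound = ANNUAL_HIGH * COLLECTION_WINDOW_YEARS  # $40 billion

# The $38 billion cumulative figure lies within the implied $20-$40 billion range.
consistent = low_bound <= CUMULATIVE_1998 <= high_bound
print(consistent)  # True
```

That the 1998 cumulative figure sits near the top of the implied range is consistent with the annual subsidy having grown over the 10-year window.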
IRS developed its estimates based on data contained in its masterfile of tax information, which we audit as part of IRS’s annual financial statement audit. To identify IRS’s reports and measures to manage unpaid payroll taxes, we discussed IRS’s tracking of cases with cognizant managers and revenue officers. In addition, we reviewed IRS’s reported measures in both the IRS Data Book and IRS’s Management Discussion and Analysis accompanying its annual financial statements. To determine IRS policies and procedures in place to prevent the non-payment of payroll taxes and to collect outstanding payroll taxes, we reviewed IRS’s policies as laid out in the Internal Revenue Manual (IRM) and discussed those policies and procedures with cognizant IRS officials and revenue officers. We also reviewed certain Treasury Inspector General for Tax Administration (TIGTA) and IRS reports related to the collection of unpaid payroll taxes. To supplement our discussions with IRS officials on tax collection activities, we also interviewed a number of state tax collection officials, including officials from Georgia, Kentucky, Maryland, and North Carolina, regarding tools and procedures used by those states to collect unpaid taxes. Additionally, we reviewed a sample of 76 businesses whose owners/officers IRS found personally liable for the failure to remit payroll taxes withheld from employees’ paychecks. The sample was originally selected as part of our audit of IRS’s fiscal year 2007 financial statements. The primary purpose of the sample was to determine whether IRS was properly recording payments to all related parties. However, we also performed other tests of IRS’s controls using this same sample. Although we identified issues related to the timeliness of certain collection actions based upon that sample, we are unable to project these results because the sampling units used for the financial statement audit were payments rather than accounts. 
We analyzed tax transcripts and other IRS records for those cases with assessed trust fund recovery penalties (TFRP) to identify the dates that IRS revenue officers (1) initiated contact with the business, (2) made the determination to pursue the TFRP against the officers, and (3) assessed the TFRP. To further review IRS’s collection actions, we also performed a macro-analysis of IRS’s overall inventory of unpaid payroll tax debts. We used macro-analysis to determine such factors as the percentage of payroll tax debt with liens. We also used macro-analysis to determine the most common types of industries with unpaid payroll taxes. We analyzed IRS’s database of unpaid taxes and the information using the North American Industry Classification System (NAICS) codes in that database. Using those codes, we were able to identify the industry type for about 70 percent of the payroll tax debt. To determine whether businesses with unpaid payroll taxes were engaged in abusive or potentially criminal activities with regard to the federal tax system, we used data mining techniques to identify 50 businesses as illustrative case studies based on criteria such as businesses with large dollar amounts of unpaid payroll taxes accumulated over multiple tax quarters. For those businesses, we reviewed IRS’s collection actions and discussed the appropriateness of those actions or lack of actions with IRS revenue officers. We obtained copies of IRS’s automated tax transcripts and other tax records (e.g., revenue officers’ notes) from IRS. We also performed additional searches of financial and public records. In cases where record searches and IRS tax transcripts indicated that the owners or officers of a business were involved in other related businesses that had unpaid federal taxes, we performed additional analysis of those related businesses and the owners/officers. We conducted this performance audit from April 2007 through May 2008 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For the IRS databases we used, we relied on the work we performed during our annual audits of IRS’s financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some of the fields in IRS’s tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address the report’s objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS’s masterfile to IRS’s general ledger, identified no material differences. Table 3 provides data on 12 detailed case studies. Table 4 shows the remaining 38 case studies that we audited. As with the 12 cases, we also found substantial evidence of abuse or potentially criminal activity related to the federal tax system during our review of these 38 case studies. The following individuals made major contributions to this report: William J. Cordrey, Sean Bell, Russell Brown, Ray Bush, Kenneth Hill, Delores Lee, David Shoemaker, Lisa Warde, Tina Wu, and J. Mark Yoder.

GAO previously reported that federal contractors abuse the tax system with little consequence. While performing those audits, GAO noted that much of the tax abuse involved contractors not remitting to the government payroll taxes that were withheld from salaries. As a result, GAO was asked to review the Internal Revenue Service's (IRS) processes and procedures to prevent and collect unpaid payroll taxes.
Specifically, GAO was asked to determine (1) the magnitude of unpaid federal payroll tax debt, (2) the factors affecting IRS's ability to enforce compliance or pursue collections, and (3) whether some businesses with unpaid payroll taxes are engaged in abusive or potentially criminal activities with regard to the federal tax system. To address these objectives, GAO analyzed IRS's tax database, performed case study analyses of payroll tax offenders, and interviewed collection officials from IRS and several states. IRS records show that, as of September 30, 2007, over 1.6 million businesses owed over $58 billion in unpaid federal payroll taxes, including interest and penalties. Some of these businesses took advantage of the existing tax enforcement and administration system to avoid fulfilling or paying federal tax obligations--thus abusing the federal tax system. Over a quarter of the payroll tax debt is owed by businesses with more than 3 years (12 tax quarters) of unpaid payroll taxes. Some of these business owners repeatedly accumulated tax debt from multiple businesses. For example, IRS found over 1,500 individuals to be responsible for nonpayment of payroll taxes at three or more businesses, and 18 were responsible for not remitting payroll taxes for a dozen different businesses. Although IRS has powerful tools at its disposal to prevent the further accumulation of unpaid payroll taxes and to collect the taxes that are owed, IRS's current approach does not provide for their full, effective use. IRS's overall approach to collection focuses primarily on gaining voluntary compliance--even for egregious payroll tax offenders--a practice that can result in minimal or no actual collections for these offenders. Additionally, IRS has not always promptly filed liens against businesses to protect the government's interests and has not always taken timely action to hold responsible parties personally liable for unpaid payroll taxes.
GAO selected 50 businesses with payroll tax debt as case studies and found extensive evidence of abuse and potential criminal activity in relation to the federal tax system. The business owners or officers in our case studies diverted payroll tax funds for their own benefit or to help fund business operations.
Dental services cover an array of procedures, from preventive services, such as cleanings, to more complex services, such as root canals (see table 1). Most individuals with dental coverage have private dental insurance. Private dental insurance plans may be a stand-alone plan or be included as a part of medical insurance. Stand-alone dental plans require individuals to enroll separately; they are not a part of the individual’s medical insurance plan. The types of dental services covered by dental plans vary widely among private plans. For example, one plan may include “comprehensive” care such as routine diagnostic and preventive services, restorative services, and oral surgery, while another may cover more limited services such as emergency care only. Cost sharing for dental services usually involves an annual deductible—and according to a Bureau of Labor Statistics survey of employers, in 2008, the median deductible was $50 per person. After the individual meets the deductible, dental plans may pay a percentage of covered services up to a maximum annual benefit. In 2008, the reported median annual maximum was $1,500. Individuals may also obtain dental coverage in other ways, such as through federal programs. Federal programs may cover dental services as a required benefit, support purchase of stand-alone dental coverage for eligible beneficiaries, or support coverage of dental services as part of broader coverage under individuals’ medical coverage plans, for example. Benefits included in these types of coverage may vary widely depending on factors such as type of plan purchased, family income, or veteran’s status (see table 2). HHS’s HRSA reported that in 2011, over 4 million patients used dental services at federally funded health centers. 
Under the Health Center Program, health centers—which must be located in federally designated medically underserved areas or serve a federally designated medically underserved population—are required to provide pediatric dental screenings and preventive dental services, as well as emergency medical referrals, which may also result in the provision of dental services. Health centers may provide required services, including required dental services, directly or via formal contract or referral agreements. However, health centers are not required to provide a full range of dental services. A health center must establish a fee schedule for its services that is consistent with locally prevailing rates and reflects the health center’s reasonable costs of providing services. A health center must also establish a sliding fee schedule for individuals who earn annual incomes equal to or less than 200 percent of the FPL. In 2011, over 92 percent of the more than 20 million patients served by health centers nationwide had income less than or equal to 200 percent of the FPL (and were eligible for a sliding scale fee based on income and family size). The 2013 federal poverty guidelines state that a family of four at 200 percent of the poverty level has an income of $47,100 per year. The Patient Protection and Affordable Care Act (PPACA) has designated pediatric dental care as an essential health benefit that new health plans must cover in the new health care exchanges and the small-group and individual markets. Exchanges may allow plans to offer stand-alone dental coverage providing, at a minimum, pediatric dental care. Adult dental coverage is not included as an essential health benefit. Plans that have grandfathered status under the law are not required to offer pediatric dental coverage. PPACA has the potential to change the benefits and out-of-pocket payments associated with dental coverage; however, the extent of these changes is uncertain.
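The 200 percent FPL threshold described above can be sketched as a minimal eligibility check. Only the family-of-four figure comes from the 2013 guidelines cited in the report; the function name and the idea of passing other guideline amounts are illustrative assumptions:

```python
# Sliding-fee eligibility at the Health Center Program's 200 percent FPL
# threshold. The 2013 poverty guideline for a family of four was $23,550,
# so 200 percent is the $47,100 figure cited in the report.
POVERTY_GUIDELINE_FAMILY_OF_FOUR_2013 = 23_550

def eligible_for_sliding_fee(annual_income, poverty_guideline, threshold_pct=200):
    """True if income is at or below threshold_pct of the poverty guideline."""
    return annual_income <= poverty_guideline * threshold_pct / 100

# A family of four earning exactly $47,100 sits at 200 percent of the FPL.
print(eligible_for_sliding_fee(47_100, POVERTY_GUIDELINE_FAMILY_OF_FOUR_2013))  # True
print(eligible_for_sliding_fee(50_000, POVERTY_GUIDELINE_FAMILY_OF_FOUR_2013))  # False
```

In practice a health center's sliding fee schedule also sets graduated discounts below the threshold, which this sketch does not model.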
The rate of individuals with dental coverage remained largely unchanged from 1996 to 2010; around 62 to 63 percent of the population had private or Medicaid or CHIP dental coverage. In addition, from 1996 to 2010, the percentage of individuals who had a dental visit remained about the same, around 41 to 43 percent. Our analysis of MEPS data showed that overall, the rate of dental coverage—the percentage of individuals reporting that they had dental coverage through private insurance or Medicaid or CHIP—remained relatively unchanged from 1996 to 2010. Specifically, in 1996, 62 percent of individuals reported having dental coverage, and in 2010, 63 percent reported having dental coverage. For about 10 to 12 percent of the population, including many individuals covered by Medicare and other federal health programs, dental coverage status is unknown. While the overall percentage of individuals with dental coverage generally stayed the same between 1996 and 2010, the percentages with private dental and without insurance coverage decreased and the percentage with Medicaid dental coverage increased. Specifically, the percentage of individuals with private coverage was 53 percent in 1996 and 50 percent in 2010 (see fig. 1). The percentage of individuals with dental coverage under Medicaid increased steadily in each of the years we examined, from 9 percent in 1996 to 13 percent in 2010. The percentage of individuals reporting that they did not have dental coverage decreased from 28 percent in 1996 to 25 percent in 2010, leaving at least one in four individuals with no dental coverage—approximately 76 million people—in 2010. For 10 to 12 percent of individuals in each year we examined, it is unknown whether they had dental coverage. These individuals reported having some type of health coverage, such as Medicare coverage, but the MEPS survey structure did not allow us to determine whether that health coverage included dental coverage.
Rates of private dental coverage among individuals in the high-income category (over 400 percent of FPL) were higher than in any other income category in 1996, 2004, and 2010 (see fig. 2). Individuals in the poor-, low-, and middle-income groups saw a decline in the rates of private coverage over the same period. The percentage of individuals with Medicaid dental coverage increased from 9 percent in 1996 to 13 percent in 2010. This trend was largely driven by an increase in children covered by Medicaid, which requires pediatric dental coverage. The overall percentage of children (ages 0-20 years) reported to have dental coverage—through private coverage or Medicaid—increased from 72 percent (59 million) in 1996 to 81 percent (71 million) in 2010, and fewer children were uninsured because more children were covered by Medicaid in 2010 than in prior years. The percentage of children with dental coverage through Medicaid increased from 18 percent (15 million) in 1996 to 33 percent (29 million) in 2010 (see fig. 3). This was the largest increase in coverage for any age group in Medicaid. The percentage of children with private dental coverage decreased from 54 percent (44 million) in 1996 to 48 percent (42 million) in 2010. In addition, rates of children without dental coverage declined from 27 percent (22 million) to 17 percent (15 million) over the same period. The percentage of individuals who used dental services—those who reported having at least one dental visit during the year—remained relatively unchanged at around 40 percent from 1996 to 2010. Specifically, about 43 percent of individuals in 1996 and 41 percent in 2010 had a dental visit. Table 3 shows our analysis of MEPS data on the use of dental services. Trends in dental visits by individuals with private dental coverage largely explained why the percentage of individuals with a dental visit remained relatively unchanged from 1996 to 2010.
Specifically, over this period the percentage of individuals with private dental coverage who had a dental visit remained the same, at around 56 percent (see table 4), and individuals with private coverage made up a large majority—around 80 to 85 percent—of the population with dental coverage during that time period. The percentage of individuals without dental coverage who had a dental visit declined from 26 percent (19 million individuals) in 1996 to 18 percent (14 million individuals) in 2010. The percentage of individuals with Medicaid who had a dental visit increased from 1996 to 2010. This increase reflects an increase in the number of children with Medicaid coverage who had a dental visit, although these children still had dental visits at lower rates than privately insured children (58 percent of whom had a visit). The percentage of children with Medicaid dental coverage with a dental visit increased from 28 percent, or 4 million children, in 1996 to 37 percent, or 11 million children, in 2010. (See fig. 4.) Among individuals who reported having a dental visit, there was an increase in the percentage reporting that they received diagnostic and preventive services (for example, exams and cleanings) and a decrease in those reporting that they received other services, such as restorative services (for example, fillings), from 1996 to 2010. Specifically, the percentage of visits for diagnostic and preventive services as a proportion of total dental services increased (see fig. 5). Seventy-six percent of dental visits in 2010 consisted of diagnostic or preventive services (43 and 33 percent, respectively). This is an increase from 69 percent in 1996, when diagnostic and preventive services made up 40 and 29 percent of services received, respectively. The percentage of visits for other types of services decreased from 1996 to 2010.
Specifically, restorative services—such as fillings—decreased slightly from 8 percent to 6 percent as a proportion of total dental services received in those years. Other services that decreased included prosthetic and orthodontic services. Average annual payments made on behalf of or by individuals for dental services—including payments from other payers such as insurers and out-of-pocket payments—increased from 1996 to 2010. Average annual inflation-adjusted dental payments increased 26 percent, from $520 per year in 1996 to $653 per year in 2010 (see table 5). The average annual payments made—including out-of-pocket payments and payments by other payers—increased 24 percent for the privately insured, 39 percent for individuals with Medicaid, and 38 percent for those without dental coverage. In addition, in 2010, average annual dental payments (out of pocket and payments made by other payers) for those with private coverage were nearly twice as much as the payments made by and on behalf of individuals with Medicaid coverage. Average annual payments for dental services varied across income levels. Payments made by and on behalf of individuals in the low- and middle-income groups increased steadily from 1996 to 2010. However, average annual payments made by and on behalf of individuals who were poor increased from $373 in 1996 to $493 in 2004, and then decreased to $437 in 2010. For low-income individuals, that is, individuals with incomes at or below 200 percent but above 100 percent of the FPL, dental payments had the largest increase. Specifically, average annual dental payments for the low-income group increased from $393 in 1996 to $558 in 2010 (see fig. 6), about 42 percent. Average annual dental payments also increased for middle- and high-income individuals—23 and 25 percent, respectively.
Individuals’ out-of-pocket payments for dental services, separate from payments by other payers, when adjusted for inflation, generally increased from 1996 to 2010 (see fig. 7). Specifically, average annual out-of-pocket payments made by individuals with private coverage increased 21 percent from 1996 to 2010—from $242 to $294. Individuals with no dental coverage experienced the greatest increase in average annual out-of-pocket payments, from $392 to $518, a 32 percent increase. For individuals with Medicaid coverage, their average annual out-of-pocket payments remained relatively unchanged, $64 in 1996 and $59 in 2010. Dental fees charged by dentists and health centers varied across geographic areas and within communities. For 24 common dental procedures, dental fees charged by local dentists varied significantly between the 18 communities we examined. In addition, dental fees varied widely within communities. Dental fees also varied between local dentists and health centers that serve residents of the same community, but all health centers are required to offer sliding fee schedules for low-income individuals. For 24 common dental procedures, midpoint dental fees—the amount where half of fees charged were higher and half were lower—varied widely between the 18 communities we examined. For example, midpoint dental fees for an adult prophylaxis (commonly called teeth cleaning) in large communities ranged from $76 in Nashville, Tennessee, to $155 in New York, New York. In smaller communities, midpoint fees ranged from $59 in Jackson, Tennessee, to $88 in Fresno, California (see fig. 8). Similarly, midpoint fees charged for a child prophylaxis ranged from $55 to $105 in large communities and $48 to $71 in small communities. Dental fees, as with other health care costs, vary by location. (See tables 10 to 27 in app. IV for more information on the range of dental fees for other common dental procedures in selected communities.) 
Several factors can contribute to geographic variation in dental fees, including local wages and the cost of space and equipment needed to operate a practice. Although we identified no current peer-reviewed research that established a correlation between individual factors and the level of dental fees, geographic variation in spending for medical services is well documented. For example, the Congressional Budget Office reported that a number of factors, including facilities, supplies, and wages, influence geographic variation in health care spending. The Centers for Medicare & Medicaid Services (CMS), in setting Medicare payment rates for health care services, establishes a geographic practice cost index for each Medicare payment locality to account for variation in practice expenses. CMS reported that it did not have any comparable practice cost index for Medicaid dental fees. Dental claims data also showed significant differences within communities for all 24 common dental procedures we examined. Fees within the private practice dental setting are affected by local market conditions and decisions within the individually owned practices. FAIR Health dental claims data showed that within communities the difference between midpoint and upper-end fees was as high as 143 percent for localized delivery of antimicrobial agents. Upper-end fees were at least double the midpoint fees in at least one community for 8 of the 24 common procedures we examined (see table 6). (See app. IV for information on all 24 procedures in all 18 selected communities.) Dental fees within communities also varied widely for diagnostic procedures. For the most common diagnostic procedure, a periodic oral evaluation of an established patient, the percentage difference between reported midpoint and upper-end fees in large communities ranged from 20 percent to 142 percent (see table 7). 
For example, in Miami, Florida, half of the fees charged by dentists for a periodic oral examination of an established patient were $62 or less, but 5 percent of fees charged for that procedure were $150 or more, a 142 percent difference. In small communities, the percentage difference was less, ranging from 17 percent to 58 percent. Dental fees within communities also varied widely for restorative procedures. For the most common restorative procedure, a filling, the percentage difference between reported midpoint and upper-end fees in large communities ranged from 19 percent to 67 percent (see table 8). For example, in Phoenix, Arizona, half of the fees charged by dentists for a filling were $195 or less, but 5 percent of fees charged for that procedure were $325 or more, a 67 percent difference. In small communities, the percentage difference was smaller, ranging from 13 percent to 38 percent. Dental fees also varied between midpoint fees of local dentists billing private insurers and the full fees of the federally funded health centers that serve residents of the same community. In the 18 communities we examined, full dental fees for a tooth extraction (the most common oral surgery procedure) were typically lower at health centers. For patients with incomes at or below 100 percent of the FPL, 10 health centers offered a 100 percent fee discount and 8 health centers had fees ranging from $16 to $148 for extracting a tooth (see table 9). Health centers’ full fees—which fewer than 8 percent of health center patients pay—were often, but not always, lower than the midpoint fees charged by local dentists. In some cases, midpoint fees charged by local dentists were higher than health center fees by a wide margin. For example, in Los Angeles, the midpoint fee charged by local dentists for a tooth extraction was nearly 3 times the full fee charged by a health center serving residents of the same community. 
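The within-community spreads above are computed as the percent by which the upper-end (95th percentile) fee exceeds the midpoint (50th percentile) fee. Recomputing the two examples just cited (the helper name is illustrative, not from the report's methodology):

```python
def pct_difference(midpoint: float, upper_end: float) -> int:
    """Percent by which the upper-end fee exceeds the midpoint fee."""
    return round((upper_end - midpoint) / midpoint * 100)

# Miami: periodic oral evaluation, $62 midpoint vs. $150 upper end.
print(pct_difference(62, 150))   # 142
# Phoenix: filling, $195 midpoint vs. $325 upper end.
print(pct_difference(195, 325))  # 67
```

Both results match the 142 percent and 67 percent differences reported in the text.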
In other cases, full fees were higher than those of local dentists. For example, in Fresno the health center’s full fee was $177 for a tooth extraction compared to the midpoint fee charged by local dentists of $142. (See app. IV for additional information on local dentist fees and health center dental fees.) To assist health centers in establishing sliding fee schedules for low-income patients, HRSA officials told us that, as of June 2013, they were in the process of preparing guidance on discounting fees for all services provided as part of a health center’s scope of project, including dental services. In technical comments on a draft of this report, HHS noted that the draft guidance was in final clearance, indicating that it was uncertain whether the final policy would include guidance on establishing full-fee schedules. According to HHS, there is often a significant unmet need for dental care services among patients served by health centers, and as a result, health centers and their community boards must make decisions about whether to provide additional dental services to meet this need and develop a fee schedule consistent with Health Center Program requirements. HHS commented that variation in fees and discounts reflects each health center’s unique community characteristics as well as the board’s decision on how best to balance its ability to provide these services against the unmet need in its community. HHS reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To determine national trends in U.S. dental expenditures, we obtained national expenditure data from HHS’s Centers for Medicare & Medicaid Services’ (CMS) Office of the Actuary. In 2011, national expenditures for dental services in the United States were about $108 billion, an inflation-adjusted increase from $64 billion in 1996 (see fig. 9). Inflation-adjusted expenditures in the United States for dental services rose an average of 3.6 percent per year between 1996 and 2011—ranging from a 0.8 percent decrease in 2009 to a 6.9 percent increase in 2002. Individuals who lived in urban and rural areas reported dental visits at different rates, with individuals in urban areas reporting higher rates of dental visits. For example, in all age groups, fewer individuals in rural areas reported having a dental visit than individuals in urban areas (see fig. 10). Specifically, 34 percent of individuals ages 65 and older living in rural areas had dental visits in a year, compared with 45 percent of their counterparts in urban areas, for 2008 through 2010. The rates for children were similar: 35 percent of children 0 to 20 years old living in rural areas reported a dental visit in a year, compared with 45 percent of children in urban areas. To provide information on the trends in dental coverage rates, use of dental services, and payments by individuals and other payers for dental services, we analyzed nationwide data from the Medical Expenditure Panel Survey (MEPS). We examined and analyzed data from 3 years: 1996, 2004, and 2010.
We selected these years for our analysis because 1996 was the first year MEPS was administered and 2010 was the most recent year for which data were available; we included 2004 to provide a third point in time for our analysis. MEPS is administered by the Department of Health and Human Services’ (HHS) Agency for Healthcare Research and Quality (AHRQ) and is a nationally representative survey of the noninstitutionalized population, including families, medical providers, and employers. MEPS collects self-reported information on individuals’ demographics, health and insurance status, and use of medical services by setting and provider type, as well as expenses and payments related to those medical and dental visits, among other things. We analyzed responses to MEPS questions about types of dental insurance coverage, number of dental visits, types of dental procedures received, and payments made related to those visits. A dental visit refers to care by or visits to any type of dental care provider, including general dentists, dental hygienists, dental technicians, dental surgeons, orthodontists, endodontists, and periodontists. For this report, we referred to all visits as dental visits. To determine whether an individual had private dental insurance coverage, we examined MEPS data and noted whether the individual reported having dental insurance or had private insurance pay on a dental claim during the year. Thus, we considered an individual to have private dental coverage if he or she (1) reported having dental insurance or (2) had that insurer pay on a dental claim. To identify adult Medicaid coverage by state, we used a list of states provided by the Centers for Medicare & Medicaid Services (CMS) for 2004 and 2010. For states that were identified as having “limited” or “emergency only” dental coverage, we did not consider those benefits to be Medicaid dental coverage.
Our analysis of adult Medicaid coverage based on the list from CMS resulted in only small differences from a similar analysis conducted by AHRQ for its Chartbook 17 publication that examines dental coverage and expenses. To categorize dental procedure codes, we consulted with AHRQ and used the same categories used by AHRQ for its Chartbook 17. For example, AHRQ placed various dental procedure codes related to preventive dental care—such as teeth cleaning and sealant application—in the preventive services category. In MEPS, expenses are defined as the sum of payments for care received, including out-of-pocket payments and payments made by private insurance, Medicaid, Medicare, and other sources. For this report, we referred to MEPS expenses as payments—payments by insurers or other payers as well as out-of-pocket payments made by individuals. To present the dental payments in constant 2010 dollars, we used the Consumer Price Index for All Urban Consumers (CPI-U) as the price deflator for both the aggregate dental payments and the out-of-pocket payments. To identify potential differences in dental service use and payments by demographic groups, we looked at responses by age group, income level, and urban or rural area. We examined income level in terms of the federal poverty level. To conduct the urban and rural analysis, we assigned individuals to one of three groups based on the county they lived in—each county was designated urban, suburban, or rural based on the Rural Urban Continuum Codes (RUCC). We categorized those individuals as follows: those who lived in counties designated 1, 2, or 3 in the RUCC were urban; in counties designated 4, 6, or 8, suburban; and in counties designated 5, 7, or 9, rural. In conducting the urban and rural analysis, to ensure adequate sample sizes, we combined MEPS data for 2008, 2009, and 2010. This was the only analysis where a combination of years was necessary.
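The RUCC grouping described above amounts to a nine-code lookup. A minimal sketch (the function name is illustrative; the code-to-group mapping is the one stated in the text):

```python
# Rural Urban Continuum Code (RUCC) groupings used for the urban/rural
# analysis: codes 1-3 are urban; 4, 6, and 8 are suburban; and 5, 7,
# and 9 are rural.
RUCC_GROUPS = {
    1: "urban", 2: "urban", 3: "urban",
    4: "suburban", 6: "suburban", 8: "suburban",
    5: "rural", 7: "rural", 9: "rural",
}

def classify_county(rucc_code: int) -> str:
    """Return the analysis group for a county's RUCC designation."""
    return RUCC_GROUPS[rucc_code]

print(classify_county(2))  # urban
print(classify_county(6))  # suburban
print(classify_county(9))  # rural
```

Each MEPS respondent would then inherit the group of his or her county of residence.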
Despite this effort, small sample sizes for some analyses limited the reliability of the results, and in those cases, we did not report those analyses. To determine the reliability of the MEPS data, we reviewed related documentation, interviewed agency officials, and identified other studies that used MEPS to address similar research questions to compare the published data with our findings. We determined that the MEPS data were sufficiently reliable for the purposes of our report. To determine the extent to which dental fees varied between and within selected communities, we analyzed dental insurance claims data and dental fees charged by selected health centers in nine states corresponding to the nine U.S. Census Bureau divisions. Using 2010 Census data, we selected two communities in each state, one large based on population and volume of dental claims and one small based on population. Our nine selected community pairs, large and small (with selected geozips), were Phoenix (850) and Flagstaff (860), Arizona; Los Angeles (900) and Fresno (936), California; Miami (331) and Palm Coast (321), Florida; Chicago (606) and Champaign (618), Illinois; Boston (021) and Pittsfield (012), Massachusetts; Minneapolis (554) and Mankato (560), Minnesota; New York (100) and Elmira (148), New York; Nashville (372) and Jackson (382), Tennessee; and Dallas (752) and San Angelo (768), Texas. To identify representative dental fees in each community, we analyzed (1) 2012 dental insurance claims data compiled by FAIR Health, Inc., as of January 2013, and (2) dental fee schedules from selected health centers serving these communities. 
Based on the FAIR Health claims data, we selected 24 of the most commonly billed dental procedures in six categories of the American Dental Association’s (ADA) Current Dental Terminology (CDT) codes: five diagnostic, five preventive, five restorative, three endodontic, three periodontic, and three oral surgery. We consulted with academic dental experts and ADA officials to confirm that the CDT codes we selected were representative of common dental procedures. We also compared our list of common dental procedures to data from the ADA Survey of Dental Fees. We also obtained health center dental fees for these 24 procedures in our 18 selected communities. Based on a list of health centers provided by HHS’s Health Resources and Services Administration (HRSA), we selected one health center (the center that served the most dental patients) in each community. We obtained full-fee and sliding fee schedules from each health center, although some health centers did not provide or bill separately for all 24 procedures. The 18 health centers studied may not be representative of the more than 1,200 health centers supported by HRSA. For each selected dental procedure, we extracted from the FAIR Health data set the midpoint dental fee (the 50th percentile) and the upper-end fee (the 95th percentile). A percentile indicates the percentage of reported fees that were below the stated amount; for example, 95 percent of reported fees fall below the 95th percentile. Within each community, we selected the geozip (an area that shares the first three digits of a postal zip code) with the most dental claims to represent the community; because geozip boundaries do not align with metropolitan statistical area designations, a single geozip represents each large and each small metropolitan area, although large metropolitan areas can include multiple geozips.
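The fee-percentile extraction described above can be sketched as follows, using one common nearest-rank definition of a percentile. The function names are illustrative, and the test data are hypothetical fees, not actual FAIR Health claims.

```python
import math

# Sketch of the percentile extraction described above: for each procedure in
# each geozip, report the midpoint (50th percentile) and upper-end
# (95th percentile) fee from the distribution of billed fees.
def percentile(fees, pct):
    """Nearest-rank percentile: the smallest fee such that at least
    pct percent of reported fees are at or below it."""
    ordered = sorted(fees)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def fee_summary(fees):
    """Return the (midpoint, upper-end) fees for one procedure."""
    return percentile(fees, 50), percentile(fees, 95)
```

With a wide fee distribution, the upper-end fee can easily be double the midpoint, which is the pattern the report observes for 8 of the 24 procedures.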
For example, the Los Angeles/Long Beach metropolitan area includes geozips 900 to 912 (excluding 909). To determine the reliability of the FAIR Health and health center data, we reviewed related documentation, interviewed agency officials and academic experts, and conducted data testing for missing data. We determined that both the FAIR Health and health center data were sufficiently reliable for the purposes of our report. This appendix presents information on local dentist fees and health center fees for 24 common dental procedures in the 18 communities (by specific geozips) we examined (see tables 10 to 27). In addition to the individual named above, Kim Yamane, Assistant Director; George Bogart; Carolyn Fitzgerald; Mollie Hertel; Elizabeth Morrison; and Terry Saiki made key contributions to this report. Oral Health: Efforts Under Way to Improve Children’s Access to Dental Services, but Sustained Attention Needed to Address Ongoing Concerns. GAO-11-96. Washington, D.C.: November 30, 2010. Medicaid: State and Federal Actions Have Been Taken to Improve Children’s Access to Dental Services, but More Can Be Done. GAO-10-112T. Washington, D.C.: October 7, 2009. Medicaid: State and Federal Actions Have Been Taken to Improve Children’s Access to Dental Services, but Gaps Remain. GAO-09-723. Washington, D.C.: September 30, 2009. Medicaid: Extent of Dental Disease in Children Has Not Decreased, and Millions Are Estimated to Have Untreated Tooth Decay. GAO-08-1121. Washington, D.C.: September 23, 2008. Health Resources and Services Administration: Many Underserved Areas Lack a Health Center Site, and the Health Center Program Needs More Oversight. GAO-08-723. Washington, D.C.: August 8, 2008. Medicaid: Concerns Remain about Sufficiency of Data for Oversight of Children’s Dental Services. GAO-07-826T. Washington, D.C.: May 2, 2007. Oral Health: Factors Contributing to Low Use of Dental Services by Low-Income Populations. GAO/HEHS-00-149.
Washington, D.C.: September 11, 2000. Oral Health: Dental Disease Is a Chronic Problem Among Low-Income Populations. GAO/HEHS-00-72. Washington, D.C.: April 12, 2000.

High rates of dental disease remain prevalent across the nation, especially in vulnerable and underserved populations. According to national surveys, 42 percent of adults with tooth or mouth problems did not see a dentist in 2008 because they did not have dental insurance or could not afford the out-of-pocket payments, and in 2011, 4 million children did not obtain needed dental care because their families could not afford it. In 2011, the Institute of Medicine reported that there is strong evidence that dental coverage is positively tied to access to and use of oral health care. For families without dental coverage, federally funded health centers may offer an affordable dental care option. Health centers are required to offer sliding fee schedules with discounts of up to 100 percent for many low-income patients. GAO was asked to examine dental services in the United States. This report describes (1) trends in coverage for, and use of, dental services; (2) trends in payments by individuals and other payers for dental services; and (3) the extent to which dental fees vary between and within selected communities across the nation. To do this work, GAO examined HHS national health survey data and national dental expenditure estimates, dental insurance claims data, and health center dental fees in 18 selected communities (based on census region, population, and dental claims volume). GAO also interviewed HHS officials and academic experts. HHS provided technical comments on a draft of this report, which were incorporated as appropriate. Overall, trends in dental coverage show little change from 1996 to 2010—around 62 percent of individuals had coverage. The percentage of the population with private dental coverage decreased from 53 to 50 percent.
Dental coverage through Medicaid or the State Children's Health Insurance Program (CHIP), which was established in 1997, rose from 9 to 13 percent. The increase was due primarily to an increase in the number of children covered by these federal-state health programs with mandated pediatric dental coverage. Individuals with no dental coverage decreased from 28 to 25 percent, and coverage for 10 to 12 percent of the population was unknown. Use of dental services—the percentage of individuals who had at least one dental visit—also remained relatively unchanged at around 40 percent from 1996 to 2010. Medicaid and CHIP beneficiaries, children in particular, showed increases in the use of dental services (from 28 to 37 percent), but still visited the dentist less often than privately insured children (58 percent in 2010). GAO's analysis showed that average annual dental payments—the total amount paid out of pocket by individuals and by other payers—increased 26 percent, inflation-adjusted, from $520 in 1996 to $653 in 2010. Average annual out-of-pocket payments increased 21 percent, from $242 to $294, for individuals with private insurance and 32 percent, from $392 to $518, for individuals with no dental coverage. Dental fees charged by local dentists and health centers varied widely. For 8 of 24 common procedures GAO examined, reported upper-end fees (the 95th percentile of the range in local dentist fees) were at least double the midpoint fees (the 50th percentile of the range in local dentist fees) in several communities. For example, in Miami, Florida, the upper-end fee of $150 for a periodic oral examination was more than twice the midpoint dental fee of $62. Dental fees also varied between local dentists billing private insurers and health centers serving residents of the same community. In general, most health centers in GAO's review offered a 100 percent discount—resulting in no fee—to the lowest-income patients for many, but not all, dental services.
The Omnibus Budget Reconciliation Act of 1987 (OBRA 87) introduced major reforms in the federal regulation of nursing homes that responded to growing concerns about the quality of care that residents received. Among other things, these reforms revised care requirements that facilities must meet to participate in the Medicare or Medicaid programs, modified the survey process for certifying a home’s compliance with federal standards, and introduced additional sanctions and decertification procedures for homes that fail to meet federal standards. The federal responsibility for overseeing nursing facilities belongs to HCFA, an agency of the Department of Health and Human Services (HHS). Among other tasks, HCFA defines federal requirements for nursing home participation in Medicare and Medicaid and imposes sanctions against homes failing to meet these requirements. The law requires HCFA to contract with state agencies to survey nursing homes participating in Medicare and Medicaid. In California, DHS performs nursing home oversight, and its authority is specifically defined in state and federal laws and regulations. As part of this role, DHS (1) licenses nursing homes to do business in California; (2) certifies to the federal government, by conducting reviews of nursing homes, that the homes are eligible for Medicare and Medicaid payment; and (3) investigates complaints about care provided in the licensed homes. To assess nursing home compliance with federal and state laws and regulations, DHS relies on two types of reviews—the standard survey and the complaint investigation. The standard survey, which must be conducted no less than once every 15 months at each home, entails a team of state surveyors spending several days on site conducting a broad review of care and services with regard to meeting the assessed needs of the residents. The complaint investigation entails conducting a targeted review with regard to a specific complaint filed against a home. 
California state law mandates that a complaint must be investigated within 2 to 10 days, depending on the seriousness of the infraction being alleged. HCFA requires that any complaint involving immediate jeopardy to a resident’s health or safety be investigated within 48 hours. The state and HCFA each has its own enforcement system for classifying deficiencies that determines which remedies, sanctions, or other actions should be taken against a noncompliant home. During standard surveys, California’s DHS typically cites deficiencies using HCFA’s classification and sanctioning scheme; for complaint investigations, it generally uses the state’s classification and penalty scheme, which allows the imposition of penalties and other actions under state enforcement criteria. Table 1 shows HCFA’s classification of deficiencies and their accompanying levels of severity and compliance status. HCFA guidance also classifies deficiencies by their scope, or extent, as follows: (1) isolated, defined as affecting a limited number of residents; (2) pattern, defined as affecting more than a limited number of residents; and (3) widespread, defined as affecting all or almost all residents. HCFA guidance on citing a deficiency’s scope as “widespread” states that “‘the universe’ is the entire facility,” not just those who, by their condition, would have been affected by the deficiency cited. The example provided explains that if a facility was deficient in appropriately treating all of a facility’s tube-fed residents—but the number of tube-fed residents was less than the facility’s total number of residents—surveyors must cite the deficiency’s scope as “pattern” and not widespread. Whether a deficiency is judged by surveyors to be isolated, a pattern, or widespread has implications for enforcement. For example, under HCFA regulations, a home is to be cited for “substandard quality of care” when it has certain deficiencies exceeding a particular severity and scope level. 
Receiving a substandard rating is significant because, depending on a home’s past performance, such a rating can prompt stronger enforcement actions than are typically taken under HCFA policy. The deficiencies that can warrant a substandard rating involve federal requirements related to quality of care, quality of life, and resident behavior and facility practices. Any of these types of deficiencies involving immediate jeopardy to resident health and safety results in a substandard rating. In addition, these types of deficiencies lead to a substandard rating if they are of the following severity and scope combinations: a pattern of or widespread actual harm that is not immediate jeopardy; or a widespread potential for more than minimal harm that is not immediate jeopardy, with no actual harm. The work of our expert nurses indicates that some of California’s nursing home residents who died in 1993 received unacceptable care that, in certain cases, endangered their health and safety. We also found evidence that serious care problems exist today in California nursing homes. Data from standard and complaint surveys indicate that nearly a third of California’s nursing homes experience serious care problems. We examined medical records of residents who died in 1993 from such causes as malnutrition, dehydration, pressure sores, and urinary tract infections with sepsis (the presence of bacteria and toxins in the blood or tissue). Their deaths were alleged to have been caused by unacceptable nursing home care. The 3,113 cases of alleged unacceptable care were distributed across nearly three-fourths of California’s nursing homes in 1993. However, to avoid selecting isolated instances of such deaths, our cases were drawn from about 5 percent of California’s homes that had at least five of the allegedly avoidable deaths. Our review suggests that 34 residents—more than half of the 62 cases reviewed—received unacceptable care. 
Our expert nurses concluded that, in some of these cases, unacceptable care endangered residents’ health and safety. Care problems included dramatic, unplanned weight loss, failure to properly treat pressure sores, and failure to manage pain. The examples in figure 1 illustrate the nature of the care problems we identified. In other cases we reviewed from 1993, the care documented in the medical record was acceptable. For example, when nursing home staff recognized that a resident was having difficulty swallowing food, they changed her diet to pureed food and placed the resident in a restorative feeding program, where she received additional help in eating. Although the resident later refused all food and liquid and eventually died of dehydration, our expert reviewers concluded that the nursing home staff provided acceptable care during the resident’s 4-month stay in the home. The cause of death listed on her death certificate might raise questions about the care she received, but only medical record review could determine whether the care was acceptable. DHS surveyors identified a substantial number of homes with serious care problems through their annual standard surveys of nursing homes and through ad hoc complaint investigations. Through examining the most recent two surveys from homes that had at least two standard surveys conducted between July 1995 and February 1998, and that may have had complaint investigations in 1996 or 1997, we found that surveyors cited 407 homes—nearly a third of the 1,370 homes included in our analysis—for serious violations classified under the federal deficiency categories, the state’s categories, or both. These homes were cited for violations that caused death, seriously jeopardized residents’ health and safety, or were considered by state surveyors to have constituted substandard care. Figure 2 shows the distribution of the nursing homes included in our analysis by the seriousness of the federal and state violations cited. 
Of the homes included in our analysis, 407 had violations that caused death or serious harm, and 449 had violations that caused less serious harm. The four wedges in figure 2 correspond to federal deficiency categories shown in table 1 and include comparable-level deficiencies cited using the state’s separate classification scheme, as follows: “Caused death or serious harm” represents any federal deficiency that surveyors classified as constituting immediate jeopardy or substandard care and California deficiencies of improper care leading to death, imminent danger or probability of death, intentional falsification of medical records, or material omission in medical records. “Caused less serious harm” represents federal violations constituting actual harm but not immediate jeopardy or substandard care and California violations that have a direct or immediate relationship to the health, safety, or security of a resident. “More than minimal deficiencies” represents federal violations that could cause more than minimal harm to residents if not corrected. “Minimal or no deficiencies” represents either no violations or federal violations that could have resulted in minimal harm to residents if not corrected. Figure 3 shows the distribution of types of deficiencies in the category called “caused death or serious harm” and gives examples of each type. The category “improper care leading to death” does not include all residents who died in homes cited for violations related to residents’ care, because the category “life-threatening harm” can also include such violations and associated deaths. We also found examples of poor care that were ranked by state surveyors as causing less serious harm under the federal and state classification systems. For example, the cases described in figure 4 were not classified in the group of “most serious” violations.
Deficiencies classified as “potential for more than minimal harm”—corresponding to the “more than minimal deficiencies” category in figure 2—can also include problems more serious than their classification implies, as figure 5 shows. Homes with deficiencies classified as having “potential for minimal harm”—corresponding to the “minimal or no deficiencies” category in figure 2—are considered by HCFA to be in substantial compliance, as shown in table 1. However, figure 6 shows examples of deficiencies that California surveyors classified in this category in which the harm could be considered by some to be greater than minimal. The deficiencies that state surveyors identified and documented very likely capture part but not the full extent of care problems in California’s homes, for several reasons. Some homes can mask problems because they are able to predict the timing of annual reviews or because medical records sometimes contain inaccurate information that overstates the care provided, given the resident’s observed condition. In addition, state surveyors can miss identifying deficiencies because of limitations on the methods used in the annual review—methods established in HCFA guidance on conducting surveys—to identify potential areas of unacceptable care. The extent of care problems is likely to be masked because of the predictability of homes’ standard surveys. The law requires that a standard survey be unannounced, that it begin no later than 15 months after the last day of the previous standard survey, and that the statewide average interval between standard surveys not exceed 12 months. Because many California homes were reviewed in the same month—sometimes almost the same week—year after year, homes could often predict the timing of their next survey and, if inclined, prepare to cover up problems that may normally exist at other times. 
For example, a home that may routinely operate with too few staff could temporarily augment its staff during the period of the survey in order to mask an otherwise serious deficiency in staffing levels. Advocates and residents’ family members told us they believe that such staffing adjustments are common, given their own observations in homes they visited. At two homes we visited, we saw that the homes’ officials had made advance preparations—such as making a room ready for survey officials—indicating that they knew the approximate date and time of their upcoming oversight review. When we discussed these observations with California DHS officials, they acknowledged that a review of survey scheduling showed that the timing of some homes’ surveys had not varied by more than a week or so for several cycles. DHS officials have since instructed district office managers to schedule surveys in a way that reduces their predictability. The issue of the predictable timing of surveys is long-standing. In the mid-1980s, the Institute of Medicine recommended adjusting the timing of surveys to make them less predictable and maximize the element of surprise. It suggested that standard surveys be conducted between 9 and 15 months after the previous standard survey. In OBRA 87, the Congress established a civil monetary penalty to be levied against an individual who notifies a nursing home about the time or date of an impending survey. In 1995, HCFA issued guidance to states to keep the timing of the standard survey unpredictable by ensuring that all surveys are unannounced. However, the guidance is silent on varying the survey cycle as a way to reduce the predictability of these reviews. Since the guidance was issued, two studies have found that regular timing of surveys is still a problem. 
The National State Auditors Association found that, in some of the nine states it studied, inspections occurred around the same date every year, allowing nursing homes to predict when their survey would occur. Similarly, nursing home advocates in 41 states and the District of Columbia polled by HCFA noted that the predictability of surveys was a continuing problem. One state’s advocate noted that a home’s care, food, and environment change dramatically as the time of the home’s standard survey nears. Another reason quality problems in nursing homes escape detection is the questionable accuracy of some resident medical records. When conducting on-site reviews, surveyors screen residents’ medical records for indicators of improper care; if information in the records is misleading or omitted, surveyors may fail to identify care deficiencies. Studies of nursing home quality cite questionable accuracy of resident medical records as a problem. For example, one study found that nursing home staff often incorrectly record the amount of food consumed by residents, thus calling into question the information maintained on the adequacy of residents’ nutrition. Another study compared recorded restraint use with actual restraint use. In this study, although nursing home records showed that staff had removed residents’ restraints every 2 hours as required, researcher observation revealed that, in fact, 56 percent of the residents had been continuously restrained for 3 hours or longer. In the course of reviewing the 1993 medical records, we also found inaccuracies and otherwise misleading information. The examples in figure 7, abstracted from the 1993 California records we reviewed, illustrate the implausibility of, or suspicious omissions in, information contained in some residents’ records. We found discrepancies in about 29 percent of the 1993 California records we reviewed.
Through medical record reviews as well as direct observation at two homes, we found that the standard surveys at these facilities failed to identify a number of serious care problems. In our visits to two facilities during their annual surveys, we arranged for our team of registered nurses to accompany the state surveyors and conduct concurrent surveys designed specifically to identify quality-of-care problems. Our survey methodology differed from the methodology specified by HCFA guidance and used by state surveyors in three major ways: (1) we selected a stratified, random sample of a much larger number of cases to review, including vulnerable populations such as new admissions and those at risk for pressure sores; (2) we collected uniform information on those cases using a structured protocol for observations, chart review, and staff interviews; and (3) we compared the results from those cases at each facility with data collected under the same sampling method at more than 60 other nursing homes nationwide, and then targeted our case review in areas where we identified a facilitywide pattern that could denote poor care. Using this methodology, we were able to spot cases in which the homes had not intervened appropriately for residents experiencing weight loss, dehydration, pressure sores, and incontinence—cases the state surveyors either missed or identified as affecting fewer residents. At the two homes where our nurses conducted their quality-of-care surveys, the findings of our team and those of DHS surveyors were similar in some respects and different in others. For example, state surveyors cited one of the homes (home A) for a high medication error rate that was not found by our surveyors. However, problems state surveyors missed included unaddressed nutrition and weight loss, failure to prevent pressure sores, and poor management of resident incontinence—cases in which the homes had not intervened appropriately. (See fig. 
8 for examples of such problems in home A.) DHS surveyors classified home A’s violations as posing potential for more than minimal harm to residents and, according to standard practice for deficiencies classified at this level, required the home to produce a corrective action plan. In contrast, we determined, on the basis of the problems shown in figure 8, that this home had a pattern of poor care and classified this home’s care for unaddressed nutrition and weight-loss problems, pressure sore problems, and incontinence problems as conditions demonstrating actual harm. At home B, we noted that the state surveyors had found a considerable number of problems, including some that were similar to those we found. For example, both teams found pressure sore treatment and infection control deficiencies. The state surveyors also found problems we did not identify, including the home’s failure to provide oral hygiene to residents and to appropriately administer an intravenous medication to one resident. However, the state surveyors overlooked quality-of-care problems that we detected and considered serious. Among those missed were problems in the category of “failure to provide appropriate personal and preventive care.” (See fig. 9.) DHS surveyors classified home B’s violations as resulting in actual harm but determined that the harm was isolated rather than systemic. By defining the extent of the deficiencies as isolated, DHS followed its standard practice—for a deficiency cited at this level—of requiring the home to submit a corrective action plan. In contrast, by using a larger sample, we were able to establish a frequency of cases demonstrating a pattern of actual harm. Several factors account for the different assessments of care between the two survey teams. First, in reviewing medical records to identify areas with potential for poor care, our surveyors took random samples of cases from several types of residents, including the most vulnerable residents. 
Second, the number of cases our surveyors drew was large enough to estimate how common the problems were in the homes. Third, the information our surveyors collected from medical record reviews, staff interviews, and data analyses was entered into a structured format and compared with similar information from more than 60 other homes nationwide. This allowed our surveyors to pinpoint areas where care seemed problematic and review those cases thoroughly. HCFA policy establishes the procedures, or protocol, that state surveyors must follow in conducting a home’s standard survey. Selecting cases for review is an activity that occurs early in the standard survey of a home to identify potential instances of poor care. At the beginning of a standard survey, the nursing home administrator must supply surveyors with documents that specify, among other things, a census of residents by medical condition, such as numbers of individuals with pressure sores, indwelling catheters, and physical restraints. The state surveyors use this information to select the majority of cases for particular scrutiny during the survey. They may add to the list of cases after observing residents and talking with nursing home staff. HCFA’s protocol for selecting cases does not call for taking a random sample of sufficient size, however, and relies primarily on the use of professional expertise and judgment, based on numerous criteria that HCFA offers as guidance. While professional judgment is an essential component in identifying poor care, the nonrandom nature of the sample and its insufficient size preclude the state surveyor from easily determining the prevalence of the problems identified. The protocol our surveyors used for sampling allowed them to cast a wider net. Specifically, they took random samples of three groups of residents to target cases in which poor care would be most likely to surface.
The three groups sampled were classified as “new admissions,” “long stays” (residents more than 105 days into their stay), and “sentinel events” (residents whose medical conditions put them at the greatest risk for poor outcomes). By stratifying the sample and taking a random selection of a sufficient number of each group, our surveyors could project the results of the samples to all residents in the home, thus assessing the potential prevalence of their initial review findings. For each resident in the sample, the survey team collected information from observations, chart reviews, and staff interviews assessing 75 elements reflecting quality-of-care outcomes. Our surveyors then profiled these findings—that is, they compared the data from the sampled cases with data collected under the same sampling method at more than 60 nursing homes in other states. Analyzing data collected from the cases sampled, our survey team compared a home’s rate of poor outcomes against the rates determined for the homes in other states. For example, they found that, at the two homes discussed, the rate of pressure sores was 27 percent and 21 percent of each home’s total residents, whereas the comparison homes’ average rate was roughly 8 percent. Being able to compare rates of medical conditions in a nursing home, such as the percentage of residents with pressure sores, allows the surveyor to determine whether the home is an outlier in comparison with other homes. Our surveyors then used this information to review residents’ care regarding specific conditions to determine whether the poor outcome rates were due to unacceptable care or were justifiable because of other factors. HCFA has just begun to implement a requirement for all nursing homes participating in Medicare and Medicaid to transmit electronically certain data they maintain on residents’ health and functional status. 
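The sampling-and-comparison approach described above can be sketched as follows. The strata names follow the report, but the sample sizes, the flagging threshold, and the data structures are assumptions for illustration, not the actual survey protocol.

```python
import random

# Illustrative sketch of the survey methodology described above: draw a random
# sample from each resident stratum, then flag a home whose rate of a poor
# outcome (e.g., pressure sores) is far above the comparison-home average.
STRATA = ("new_admissions", "long_stays", "sentinel_events")

def stratified_sample(residents, per_stratum, seed=0):
    """Randomly sample up to per_stratum residents from each stratum."""
    rng = random.Random(seed)
    sample = {}
    for stratum in STRATA:
        pool = [r for r in residents if r["stratum"] == stratum]
        sample[stratum] = rng.sample(pool, min(per_stratum, len(pool)))
    return sample

def is_outlier(home_rate, comparison_rate, ratio=2.0):
    """Flag a home whose poor-outcome rate is at least ratio times the
    comparison-home average (the 2x threshold is an assumption)."""
    return home_rate >= ratio * comparison_rate
```

For instance, a home with a pressure sore rate of 27 percent against a comparison average of roughly 8 percent would be flagged as an outlier, prompting a targeted review of those residents' care.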
Having this information in computerized form could provide surveyors better access to residents’ outcome data, thus potentially enhancing surveyors’ ability to select cases for review more systematically and quickly. Access to information in this form could also facilitate assessing a home’s performance with regard to residents’ outcomes against an established average or norm. These benefits will depend, however, on ensuring that these data are valid and reliable reflections of residents’ status and care. Once surveyors find deficiencies through nursing home surveys, their next step is to have the homes correct their deficiencies and return to compliance with federal requirements. Despite HCFA’s goal to have nursing homes sustain compliance with federal requirements over time, our work in California showed that 1 in 11 California homes—serving thousands of residents—were cited twice in a row for “actual harm” violations. Relatively few disciplinary actions were taken against such homes because of HCFA’s forgiving stance on enforcement. HCFA’s termination policy is likewise generous—allowing California homes terminated from the program for serious problems to be easily reinstated—even though they often have serious care violations in subsequent surveys. Recognizing these and other weaknesses in the current process, California’s DHS has begun a “focused enforcement” effort and has implemented procedures to strengthen its use of available nursing home enforcement authority for facilities with the poorest past performance records. OBRA 87 requires the HHS Secretary to ensure that the enforcement of federal care requirements for nursing homes is adequate to protect the health, safety, welfare, and rights of residents. 
In the background to its final regulations, HCFA stated that its system of requirements implementing OBRA 87 reforms “was built on the assumption that all requirements must be met and enforced” and that its enforcement actions will encourage “sustained compliance.” In addition, HCFA noted that “our goal is to promote facility compliance by ensuring that all deficient providers are appropriately sanctioned.” However, our data suggest that current enforcement efforts in California are not reaching the stated goal to ensure that all requirements are met and deficient providers are appropriately sanctioned, and also may not fulfill the OBRA 87 promise to protect the health, safety, welfare, and rights of residents. National data indicate this problem is not limited to California. A significant number of homes in our analysis had repeated violations in categories that HCFA classifies as “serious” or “most serious.” Specifically, 122 homes—representing over 17,000 resident beds—were cited in both of their last two surveys for conditions causing actual harm or conditions that put residents in immediate jeopardy or caused death. The repeated deficiencies included, among others, problems with infection control, pressure sore treatment, and bladder continence care. Preliminary analysis of national data indicates that repeating serious deficiencies is more common nationally than in California. One in nine nursing homes in the United States—representing more than 232,000 resident beds—was cited in both of its last two surveys for conditions that caused actual harm or put residents in immediate jeopardy or caused death. Relatively few disciplinary actions have been taken against homes cited for repeated harm violations. Before OBRA 87, the only sanction available to HCFA and the states to impose against such noncompliant homes, short of termination, was to deny federal payments for new admissions. 
Because this sanction afforded HCFA and the states an opportunity to defer the decision to terminate, it was considered an “intermediate” sanction. OBRA 87 provided for additional intermediate sanctions, such as denial of payment for all admissions, civil monetary penalties, and on-site oversight by the state (“state monitoring”). Nevertheless, between July 1995 and May 1998, nearly three-quarters of those 122 homes—cited in at least 2 consecutive years for serious deficiencies—had no federal intermediate sanctions that actually took effect. Our review of federal actions taken against California’s noncompliant homes indicates that HCFA’s policies, as implemented by California’s DHS, have not led to sustained compliance, either for some homes immediately referred for sanctioning or for others given a grace period to correct their deficiencies. In addition, HCFA has reinstated California homes terminated for serious deficiencies that became problem homes soon after reinstatement. HCFA guidance instructs state agencies to immediately refer for federal sanctioning homes that meet HCFA criteria for posing the greatest danger to residents. The immediate referral contrasts with the practice of first granting homes a grace period to correct cited deficiencies. To qualify for immediate referral, homes must be cited for violations in the immediate jeopardy category or be rated as a “poor performer.” HCFA’s definition of poor performer itself is circumscribed such that the definition applies to relatively few homes. A home must have been cited on its current standard survey for substandard quality of care and have been cited in one of its two previous standard surveys for substandard quality of care or immediate jeopardy violations. Homes cited for cases of actual harm to residents—if assessed at the isolated level—do not satisfy HCFA’s criteria for the substandard quality-of-care classification. 
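The immediate-referral criteria just described amount to a simple decision rule. A minimal sketch, assuming surveys are represented as dictionaries with hypothetical boolean fields (the representation is illustrative, not HCFA's):

```python
def poor_performer(current, previous_two):
    """HCFA's definition as described above: substandard quality of
    care on the current standard survey, plus substandard care or
    immediate jeopardy violations on one of the two previous
    standard surveys."""
    return current["substandard"] and any(
        s["substandard"] or s["immediate_jeopardy"] for s in previous_two)

def immediate_referral(current, previous_two):
    """A home is referred immediately for federal sanctioning if its
    current survey found immediate jeopardy violations or the home
    meets the poor-performer definition; otherwise it receives a
    grace period to correct deficiencies."""
    return current["immediate_jeopardy"] or poor_performer(current, previous_two)
```

Note how narrow the rule is: a home cited for actual harm at the isolated level has `substandard` set to False under HCFA's classification, so it never triggers the poor-performer branch and falls through to the grace period.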
Since July 1995, when the federal enforcement scheme established in OBRA 87 took effect, about 25 California homes have been designated as poor performers and 59 homes have been cited for immediate jeopardy deficiencies. HCFA guidance permits the state to broaden the definition of poor performer, but California has chosen not to do so. Even homes immediately referred for sanctioning do not necessarily receive sanctions that take effect. Among California homes HCFA considers to have the most serious deficiencies that immediately jeopardize resident health and safety, only about half had any sanctions that actually took effect. If homes come into substantial compliance before sanctioning is scheduled to take effect, HCFA rescinds the sanction. In principle, sanctions imposed against a home remain in effect until the home corrects the deficiencies cited and until state surveyors find, after an on-site review (called a “revisit”), that the home has resumed substantial compliance status. HCFA’s guidance on revisits allows states to forgo an on-site visit and accept a home’s report of resumed compliance status if the home’s deficiencies are not more serious than the “potential for harm” range and do not constitute substandard care. HCFA officials told us this policy was put into place because of resource constraints. In California, however, this policy has been applied even to some of the immediate referral homes that continue to have deficiencies that put them out of substantial compliance upon revisit. Thus, our review of certain enforcement cases showed that HCFA failed to ensure that homes with a record of posing the greatest danger to residents had, in fact, resumed substantial compliance. 
For example, in the case of one home immediately referred for sanctioning, DHS surveyors made a few on-site reviews, but HCFA twice accepted the home’s self-reported statement of compliance without requesting DHS to revisit and independently verify that the home had fully corrected its deficiencies. Specifically, in an October 1996 survey, DHS cited the home for immediate jeopardy and actual harm violations, including improper pressure sore treatment, medication errors, insufficient nursing staff, and an inadequate infection control program. By early November 1996, however, surveyors had found in an on-site review that the problems had abated but had not fully ceased. A week later, the home reported itself to HCFA as resuming substantial compliance. HCFA accepted this report without further on-site review. About 6 months later (May 1997), in the home’s next standard survey, DHS found violations that warranted designating the home a poor performer. On a revisit to check compliance in July 1997, surveyors found new but less serious deficiencies. In August 1997, however, when the home reported itself in compliance, HCFA accepted the report without further verification. Between October 1996 and August 1997, HCFA imposed several sanctions but lifted them each time it accepted the home’s unverified report of resumed compliance. According to HCFA guidance, noncompliant homes that are not classified in the immediate jeopardy or poor performer categories do not meet HCFA’s criteria for immediate referral for sanctioning, even though residents may have suffered actual harm. Following this guidance, California’s DHS first notifies these homes of the sanctions it will recommend imposing unless the home resumes compliance. DHS revisits the homes where residents have suffered actual harm or worse to ensure that compliance has been achieved. 
In practice, on the basis of HCFA’s guidance, the state will forward notification of the recommended sanctions to HCFA only if the home fails to correct the deficiencies cited within a 30- to 45-day grace period allowed by HCFA. Although California’s DHS regulators have the option of referring the home immediately for disciplinary action, the accepted practice under HCFA’s guidance is to first allow the home to return to compliance status within the specified grace period. HCFA policy permits granting a grace period to this group of noncompliant homes, regardless of their past performance. Between July 1995 and May 1998, California’s DHS gave about 98 percent of noncompliant homes a grace period to correct deficiencies. For nearly the same period (July 1995 to April 1998), the rate of noncompliant homes receiving a grace period nationwide was 99 percent, indicating that the practice of granting a grace period to nearly all noncompliant homes is common across all states. Moreover, data we analyzed on actions taken against California homes cited repeatedly for harming residents suggest that DHS does not take into account a home’s compliance history when determining whether to impose intermediate sanctions. Of the 122 homes in our analysis cited repeatedly for harming residents, 73 percent were not federally sanctioned. In the case of such homes—cited in consecutive surveys for actual harm or immediate jeopardy violations—granting a grace period with no further disciplinary action appears to be a highly questionable practice. Table 2 illustrates a home with the same violations cited 4 years in a row—thus not sustaining compliance from one standard survey to the next—and still receiving a grace period to correct its deficiencies after each survey. Although HCFA has the authority to terminate homes from participation in Medicare and Medicaid if they fail to resume compliance, termination rarely occurs and is not as final as the term implies. 
In the recent past, California’s terminated homes have rarely closed for good. Of the 16 homes terminated in the 1995 to 1998 time period, 14 have been reinstated. Eleven have been reinstated under the same ownership they had before termination. Of the 14 reinstated homes, at least six have been cited since their reinstatement with new deficiencies that harmed residents, such as failure to prevent avoidable accidents, failure to prevent avoidable weight loss, and improper treatment of pressure sores. A home that reapplies for participation is required to have two consecutive on-site reviews—called reasonable assurance surveys—within 6 months to determine whether it is in substantial compliance with federal regulations before its eligibility to bill federal programs can be reinstated. However, HCFA has not always ensured that homes are in substantial compliance before reinstatement. For example, one home terminated on April 15, 1997, had two reasonable assurance surveys on April 25 and May 28, 1997. Although the nursing home was not in substantial compliance at the time of the second survey, HCFA considered the deficiencies minor enough to reinstate the home on June 5, 1997. The consequence of termination—stopping reimbursement for the home’s Medicare and Medicaid beneficiaries—was in effect for no longer than 3 weeks. About 3 months after reinstatement, however, the home was cited for harming residents. DHS surveyors investigating a complaint found immediate jeopardy violations as a result of a dangerously low number of nursing home staff. In addition, surveyors cited the facility for providing substandard care. Residents who could not move independently, some with pressure sores, were left sitting in urine and feces for long periods of time; some residents were not getting proper care for urinary tract infections; and surveyors cited the home’s infection control program as inadequate. 
By 1997, California DHS officials recognized that the state, in combination with HCFA’s regional office, had not dealt effectively with persistently and seriously noncompliant nursing homes using the OBRA 87 enforcement process. The process discouraged immediate application of enforcement actions. It allowed nursing homes to come back into compliance for a short period of time, escaping enforcement action altogether. In many instances, though, homes did not sustain compliance for a significant period of time. Therefore, in July 1998 and with HCFA’s agreement, DHS began a “focused enforcement” process that combines state and federal authority and action, targeting providers with the worst compliance records for special attention. As a start, DHS has identified about 34 homes with the worst compliance histories—generally two in each of its districts. Officials intend to conduct standard surveys of these homes about every 6 months rather than every 9 to 15 months. In addition, DHS intends to conduct more complete on-site reviews of facilities for all complaints received about these homes. DHS and HCFA told us that they do not intend to accept such homes’ self-reports of compliance without a revisit. DHS officials told us that the agency is developing procedures—consistent with HCFA regulations implementing OBRA 87 reforms—to ensure that, where appropriate, the state will immediately recommend and HCFA will impose civil monetary penalties and other strong sanctions to bring such homes into compliance and keep them compliant. For focused enforcement homes unable to sustain compliance, state officials plan to revoke their state licenses and recommend termination from the Medicare and Medicaid programs. In addition, DHS plans to screen the compliance history of facilities by owner—both in California and nationally—before granting new licenses to operate nursing homes in the state. 
State officials told us that they will require all facilities with the same owner to be in substantial compliance before any new licenses are granted. The responsibility to protect nursing home residents, among the most vulnerable members of our society, rests with nursing homes and with HCFA and the states. In a number of cases, this responsibility has not been met in California. We and state surveyors found cases in which residents who needed help were not provided basic care—not helped to eat or drink; not kept dry and clean; not repositioned to prevent pressure sores; not monitored for the development of urinary tract infections; and not given pain medication when needed. When such basic care is not provided, residents may suffer unnecessarily. As serious as the identified care problems are, weaknesses in federal and state oversight of nursing homes raise the possibility that many care problems escape the scrutiny of surveyors. Homes can prepare for surveyors’ annual visits because of the visits’ predictable timing. Homes can also adjust resident records to improve the overall impression of the home’s care. In addition, DHS surveyors may overlook significant findings because the federal survey protocol they follow does not rely on an adequate sample for detecting potential problems and their prevalence. Together, these factors can mask significant care problems from the view of federal and state regulators. Furthermore, HCFA needs to reconsider its enforcement approach toward homes with serious, recurring violations. Federal policies allowing a grace period to correct deficiencies and to accept a home’s report of compliance without an on-site review can be useful policies, given resource constraints, when applied to homes with less serious problems. 
However, even with resource constraints, HCFA and DHS need to ensure that their enforcement efforts are directed to homes with serious and recurring violations and that policies developed for homes with less serious problems are not applied to them. Under current policies and practices, noncompliant homes that DHS identifies as having harmed or put residents in immediate danger have little incentive to sustain compliance, once achieved, because they may face no consequences for their next episode of noncompliance. Our findings regarding homes that repeatedly harmed residents or were reinstated after termination suggest that the goal of sustained compliance has not been met. Failure to bring such homes into compliance limits the ability of federal and state regulators to protect the welfare and safety of residents. In order to better protect the health, safety, welfare, and rights of nursing home residents and ensure that nursing homes sustain compliance with federal requirements, we recommend that the HCFA Administrator revise federal guidance and ensure state agency compliance through taking the following actions:
- Stagger or otherwise vary the scheduling of standard surveys to effectively reduce the predictability of surveyors’ visits; the variation could include segmenting the standard survey into more than one review throughout the 12- to 15-month period, which would provide more opportunities for surveyors to observe problematic homes and initiate broader reviews when warranted.
- Revise federal survey procedures to instruct surveyors to take stratified random samples of resident cases and review sufficient numbers and types of resident cases so that surveyors can better detect problems and assess their prevalence.
- Eliminate the grace period for homes cited for repeated serious violations and impose sanctions promptly, as permitted under existing regulations.
- Require that for problem homes with recurring serious violations, state surveyors substantiate, by means of an on-site review, every report to HCFA of a home’s resumed compliance status.
We sought comments on a draft of this report from HCFA and DHS (whose written comments are reproduced in appendixes II and III), experts on nursing home care, and representatives from the nursing home industry. The reviewers generally agreed that the findings were troubling and that improvements were needed in the federal survey and enforcement process to better protect residents’ health and safety. Reviewers also suggested technical changes, which we included in the report as appropriate. HCFA officials informed us that they are planning to make significant modifications in their survey and enforcement processes, which they believe will address our recommendations. HCFA concurred with the recommendation to eliminate the grace period for homes with repeated serious violations and agreed that having a more scientifically selected and larger case review sample would improve the ability of surveyors to detect poor care in nursing homes. HCFA also agreed to change its revisit policy for homes that are seriously noncompliant. HCFA agreed in principle that quality of care needs to be monitored outside the bounds of an annual, standard survey and acknowledged that certain factors can affect the predictability of surveys. These factors include the time of day and day of week the survey begins as well as the timing of surveys for homes in a given locale. Based on its analysis of certain OSCAR data, however, HCFA disagreed that states are not varying their survey schedules. We believe that basing a conclusion about the predictability of the annual survey primarily on analysis of OSCAR data is problematic, given weaknesses we identified in the classification of surveys entered into the database. Given these questions we raised, HCFA agreed to review the validity of the OSCAR data. 
HCFA also raised concerns—as did DHS—that segmenting the survey into two or more reviews would make it less effective and more expensive. We believe that segmenting the survey could largely eliminate concern about predictability and, by increasing the frequency of surveyors’ visits to homes, could provide more opportunity to observe problematic homes and initiate broader reviews when warranted. These advantages should be evaluated relative to the potential disadvantages that concern HCFA. DHS officials generally agreed with our findings and recommendations. They attributed many of the problems in the current survey and enforcement process to federal policy directives that, they maintain, have weakened states’ ability to oversee quality of care and quality of life in nursing homes. In its comments, DHS has also suggested a number of additional changes it believes would improve the federal survey and enforcement process. These include adding a waiting period before homes terminated from Medicare and Medicaid could be reinstated in the programs, changing HCFA’s definitions of scope of violations and of substandard care to more realistically reflect the seriousness of poor care, changing HCFA’s revisit policy for homes that are not in substantial compliance, developing a peer review of survey and enforcement practices in different regions, improving the database used for enforcement tracking, and more fully funding survey and enforcement activities for the state. Some reviewers questioned whether the scope of our clinical review of 1993 records and concurrent review of nursing homes was sufficient to permit drawing conclusions about the current condition of all California nursing homes. These aspects of our methodology—while important—were not the primary basis for reaching our conclusions. 
The most comprehensive and compelling evidence we analyzed was recent standard survey reports of California’s own surveyors, the statewide database DHS maintains on complaint investigations, and the nationwide database HCFA maintains on nursing home deficiencies. In response to these comments, we modified the report to better clarify our methodology and the primary basis for our findings. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until July 28, 1998. At that time, we will make copies of this report available to interested parties upon request. Please contact me or Kathryn Allen, Associate Director, at (202) 512-7114 if you or your staff have any further questions. This report was prepared by Jack Brennan, Scott Berger, Mary Ann Curran, C. Robert DeRoy, Gloria Eldridge, and Hannah Fein, under the direction of Sheila Avruch. Concerned about the life-threatening potential of the recent allegations, you asked us to determine whether the allegations had any merit and whether the monitoring of California’s nursing homes has been adequate to protect residents. More specifically, we assessed (1) whether, as alleged, residents who died in 1993 from certain causes had received unacceptable care that could have endangered their health and safety, and whether serious care problems currently exist; (2) the adequacy of federal and state efforts in monitoring nursing home care through annual surveys; and (3) the effectiveness of federal and state efforts to enforce sustained compliance with federal nursing home requirements. We reviewed the medical records of a sample of the 3,113 residents alleged to have died avoidable deaths in 1993 in 971 California nursing homes from malnutrition, dehydration, urinary tract infection (UTI), bowel obstruction, or bedsores (pressure sores). 
We met with those making the allegations, and from them we obtained copies of the death certificates of the 3,113 residents. To select our sample, we eliminated residents with UTI who did not also suffer from septicemia (the presence of bacteria and toxins in the blood), because if these conditions are not present, UTI is generally not lethal. We assumed that if care was a problem in a home, more than one resident would have been affected. We therefore excluded death certificates for residents of homes with (1) fewer than five such deaths and (2) for such deaths, a deaths-to-total-beds ratio of less than 5 percent. That left a universe of 546 residents at 72 homes. In addition, we eliminated residents who died in counties having few nursing homes. After these exclusions, our universe became 446 residents at 59 homes, from which we selected a preliminary sample of 75 residents from 15 homes. Fourteen of these homes were freestanding and one was a hospital-based nursing home. Because we selected from residents of homes with five or more such deaths in certain counties, our results cannot be generalized to the universe of all residents in California nursing homes who died of the same causes in 1993. To review the medical records, we used two registered nurses with advanced degrees in gerontological nursing and with expertise in clinical nursing home care and data abstraction. To guide them, another registered nurse on our staff developed a detailed structured data collection instrument. The nurses’ work was reviewed by the registered nurse on our staff, who has experience working in nursing homes and judging whether care met acceptable clinical standards. This second review focused on a critical examination of all cases where the first team of registered nurses identified residents as having unacceptable care, in order to exclude any cases that might be questionable rather than unacceptable. 
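The successive exclusions described above can be sketched as a filtering step. The record fields are hypothetical; the thresholds (five or more such deaths per home, and a deaths-to-total-beds ratio of at least 5 percent) are those stated above. The sketch omits the final county-based exclusion.

```python
from collections import defaultdict

def filter_universe(records):
    """Apply the exclusion criteria described above to a list of
    death-certificate records. Each record is a dict with hypothetical
    fields: home_id, cause, septicemia (bool), and total_beds."""
    # Drop UTI deaths without septicemia, since UTI alone is
    # generally not lethal.
    kept = [r for r in records
            if r["cause"] != "UTI" or r["septicemia"]]

    # Group the remaining deaths by home.
    by_home = defaultdict(list)
    for r in kept:
        by_home[r["home_id"]].append(r)

    # Keep only homes with five or more such deaths AND a
    # deaths-to-total-beds ratio of at least 5 percent.
    universe = []
    for home_id, deaths in by_home.items():
        beds = deaths[0]["total_beds"]
        if len(deaths) >= 5 and len(deaths) / beds >= 0.05:
            universe.extend(deaths)
    return universe
```

Both home-level thresholds reflect the working assumption stated above: if care were a systemic problem in a home, more than one resident would have been affected.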
The registered nurse on our staff also discussed some of the cases with physicians and additional registered nurses specializing in geriatric care to further clarify whether care was acceptable. We excluded all questionable cases from the unacceptable care group. Because of the time needed to thoroughly review each resident’s complete clinical history (some were more than 600 pages), the nurses reviewed 62 of the 75 records initially selected from 1993. To determine the extent of deficiencies identified by state surveyors in California nursing homes since July 1995, and to identify enforcement actions taken in response to the deficiencies, we used two databases. The first, HCFA’s On-Line Survey, Certification, and Reporting (OSCAR) System, contains information about violations of federal requirements that a home has received in its last four surveys. The second, the Automated Certification and Licensing Administrative Information Management System (ACLAIMS) database, is maintained by California’s DHS and contains information on each home’s violations of state requirements. In addition, we used data that HCFA’s San Francisco regional office maintains separately from OSCAR on federal sanctions imposed. In OSCAR, we identified 1,445 California homes that had survey data after July 1, 1995—the date the new OBRA 87 scope and severity system went into effect. If a nursing home at a particular address had more than one provider number, we included in our analysis only one of the provider numbers to represent that home. Of the 1,445 California homes, 1,370 of those homes (95 percent) had at least two surveys entered into the OSCAR database since July 1995. Information in the OSCAR database is constantly being updated. We downloaded OSCAR data on February 26, 1998, to get a fixed database for our analysis of 1,370 homes. We also continued to work with OSCAR on-line as necessary, for example, to download survey reports on particular homes. 
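The selection of the analysis set from OSCAR can be sketched as follows. The record fields are hypothetical stand-ins for the OSCAR data; the July 1, 1995, cutoff, the one-provider-number-per-address rule, and the two-survey requirement are as described above.

```python
from datetime import date

def analysis_set(records, cutoff=date(1995, 7, 1)):
    """Select homes for analysis: one provider number per physical
    address, keeping only homes with at least two surveys on or after
    the cutoff. Each record is a hypothetical dict with fields
    provider_id, address, and survey_dates (a list of dates)."""
    by_address = {}
    for r in records:
        # Keep only one provider number to represent each address.
        by_address.setdefault(r["address"], r)
    return [r for r in by_address.values()
            if sum(d >= cutoff for d in r["survey_dates"]) >= 2]
```

Requiring two post-cutoff surveys is what makes comparisons across consecutive surveys (such as repeated actual-harm citations) possible for every home in the set.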
The nursing homes we analyzed included Medicare and Medicaid dually certified facilities, Medicare-only facilities, Medicaid-only facilities, and both freestanding and hospital-based facilities. To develop information shown in figures 2 and 3, we combined information from both the OSCAR and ACLAIMS databases. We did not conduct a thorough assessment of the validity or reliability of either OSCAR or ACLAIMS. We did determine, however, that OSCAR excludes data that could be useful in obtaining a complete picture of a nursing home’s history of deficiencies. For example, serious violations of state requirements discovered during complaint investigations are not routinely shown as federal deficiencies in OSCAR. Other information, such as the seriousness and extent of identified deficiencies, was missing from OSCAR in some cases. We found instances of missing information in 282 of the 1,370 homes in our analysis. The effect of these omissions from the database, we believe, is an understatement of documented deficiencies in OSCAR. To assess the effectiveness of the survey process, we accompanied California state surveyors on annual standard surveys conducted at two homes. To do this, we arranged for a team of registered nurses to accompany the DHS surveyors and conduct concurrent surveys using a protocol developed under a HCFA research contract designed specifically to identify quality-of-care problems. These nurses work with Andrew M. Kramer, M.D., of the University of Colorado’s Center on Aging Research Section of the Health Sciences Center, who developed the survey protocol for HCFA. Before conducting the concurrent surveys at these homes, we accompanied a state survey team to a third home to gather information on survey procedures. To better understand survey deficiencies, complaints, and enforcement, we reviewed selected records. 
We determined the types of problems being identified by surveyors by obtaining and analyzing annual standard surveys for 18 homes we visited. We also obtained and analyzed information about the number and type of complaints investigated by two district offices. To better understand enforcement efforts, we reviewed selected enforcement files and enforcement data kept by HCFA. We also interviewed responsible officials from HCFA headquarters in Baltimore and HCFA’s San Francisco regional office. We met with officials from California DHS in Sacramento and two district offices; the California Association of Health Facilities; the American Health Care Association; the American Association of Homes and Services for the Aging; the California Association of Homes and Services for the Aging; the California Advocates for Nursing Home Reform; California’s Office of Ombudsman; nursing home administrators and directors of nursing; geriatricians and registered nurses with expertise in nursing home issues; and families of nursing home residents. 
| Pursuant to a congressional request, GAO reviewed allegations that residents in California nursing homes are not receiving acceptable care, focusing on: (1) examining, through a medical record review, whether these allegations had merit and whether serious care problems currently exist; (2) reviewing the adequacy of federal and state efforts in monitoring nursing home care through annual surveys; and (3) assessing the effectiveness of federal and state efforts to enforce sustained compliance with federal nursing home requirements. GAO noted that: (1) despite the federal and state oversight infrastructure currently in place, certain California nursing homes have not been and currently are not sufficiently monitored to guarantee the safety and welfare of their residents; (2) GAO reached this conclusion primarily using data from federal surveys and state complaint investigations conducted by California's Department of Health Services (DHS) on 1,370 California homes, supplemented with more in-depth analysis of certain homes and certain residents' care; (3) GAO found that surveyors can miss problems that affect the safety and health of nursing home residents and that even when such problems are identified, enforcement actions do not ensure that they are corrected and do not recur; (4) with regard to allegations made about avoidable deaths in 1993, GAO's expert nurses' review of the 62 resident cases sampled found that residents in 34 cases received care that was unacceptable and that sometimes endangered their health and safety; (5) in the absence of autopsy information or other additional clinical evidence, GAO cannot be conclusive about the extent to which this unacceptable care may have contributed directly to individual deaths; (6) unacceptable care continues to be a problem in many homes; (7) GAO believes that the extent of serious care problems portrayed in federal and state data is likely to be understated; (8) GAO found that homes could generally predict when 
their annual on-site reviews would occur and, if inclined, could take steps to mask problems otherwise observable during normal operations; (9) GAO found irregularities in the homes' documentation of the care provided to their residents; (10) in visiting homes selected by California DHS officials, GAO found multiple cases in which DHS surveyors did not identify certain serious care problems; (11) surveyors missed these care problems because federal guidance on conducting surveys does not include sampling methods that can enhance the spotting of potential problems and help establish their prevalence; (12) the Health Care Financing Administration's (HCFA) enforcement policies have not been effective in ensuring that the deficiencies are corrected and remain corrected; (13) California's DHS grants all noncompliant homes, with some exceptions, a 30- to 45-day grace period, during which they may correct the deficiencies without penalty; (14) a substantial number of California's homes that have been terminated and later reinstated have soon thereafter been cited again for serious deficiencies; and (15) the problems GAO identified are indicative of systemic survey and enforcement weaknesses. |
In June 2011, we reported that S&T met some of its oversight requirements for T&E of acquisition programs we reviewed, but additional steps were needed to ensure that all requirements were met. Specifically, since DHS issued the T&E directive in May 2009, S&T reviewed or approved T&E documents and plans for programs undergoing testing, and conducted independent assessments for the programs that completed operational testing during this time period. S&T officials told us that they also provided input and reviewed other T&E documentation, such as components’ documents describing the programs’ performance requirements, as required by the T&E directive. DHS senior level officials considered S&T’s T&E assessments and input in deciding whether programs were ready to proceed to the next acquisition phase. However, S&T did not consistently document its review and approval of components’ test agents—a government entity or independent contractor carrying out independent operational testing for a major acquisition—or document its review of other component acquisition documents, such as those establishing programs’ operational requirements, as required by the T&E directive. For example, 8 of the 11 acquisition programs we reviewed had hired test agents, but documentation of S&T approval of these agents existed for only 3 of these 8 programs. We reported that approving test agents is important to ensure that they are independent of the program and that they meet requirements of the T&E directive. S&T officials agreed that they did not have a mechanism in place requiring a consistent method for documenting their review or approval and the extent to which the review or approval criteria were met. 
We reported that without mechanisms in place for documenting its review or approval of acquisition documents and T&E requirements, such as approving test agents, it is difficult for DHS or a third party to review and validate S&T’s decision-making process and ensure that it is overseeing components’ T&E efforts in accordance with acquisition and T&E directives and internal control standards for the federal government. As a result, we recommended that S&T develop a mechanism to document both its approval of operational test agents and component acquisition documentation to ensure that these meet the requirements of the DHS T&E directive. S&T concurred and reported that the agency has since developed internal procedures to ensure that the approval of test agents and component acquisition documents is documented. We also reported in June 2011 that S&T and DHS component officials stated that they face challenges in overseeing T&E across DHS components, which fell into 4 categories: (1) ensuring that a program’s operational requirements—the key performance requirements that must be met for a program to achieve its intended goals—can be effectively tested; (2) working with DHS component program staff who have limited T&E expertise and experience; (3) using existing T&E directives and guidance to oversee complex information technology acquisitions; and (4) ensuring that components allow sufficient time for T&E while remaining within program cost and schedule estimates. Both S&T and DHS, more broadly, have begun initiatives to address some of these challenges, such as establishing a T&E council to disseminate best practices to component program managers, and developing specific guidance for testing and evaluating information technology acquisitions. In addition, as part of S&T’s recent reorganization, the agency has developed a new division specifically geared toward assisting components in developing requirements that can be tested, among other things. 
However, since these efforts have only recently been initiated to address these DHS-wide challenges, it is too soon to determine their effectiveness. Since 2009, S&T has undertaken a series of efforts related to its organizational structure. S&T underwent a new strategic planning process, developed new strategic goals, and conducted a reorganization intended to better achieve its strategic goals. These efforts were implemented after a 2009 National Academy of Public Administration study found that S&T’s organizational structure posed communication challenges across the agency and that the agency lacked a cohesive strategic plan and mechanisms to assess performance in a systematic way, among other things. In August 2010, S&T reorganized to align its structure with its top strategic goals, allow for easier interaction among senior leadership, and reduce the number of personnel directly reporting to the Under Secretary of S&T. Additionally, after the Under Secretary was confirmed in November 2009, S&T instituted a new strategic planning process which helped inform the development of new strategic goals. The new strategic goals announced in August 2010 include: rapidly developing and delivering knowledge, analyses, and innovative solutions that advance the mission of DHS; leveraging its expertise to assist DHS components’ efforts to establish operational requirements, and select and acquire needed technologies; strengthening the Homeland Security Enterprise and First Responders’ capabilities to protect the homeland and respond to disasters; conducting, catalyzing, and surveying scientific discoveries and inventions relevant to existing and emerging homeland security challenges; and fostering a culture of innovation and learning in S&T and across DHS that addresses mission needs with scientific, analytic, and technical rigor. 
According to S&T, the agency has developed a draft strategic plan that provides its overall approach to meeting these strategic goals, which is currently being finalized. Moreover, according to testimony by the Under Secretary of S&T in March 2011, to ensure that individual R&D projects are meeting their goals, S&T has committed to an annual review of its portfolio of basic and applied R&D and all proposed “new start” projects. According to S&T, the review process uses metrics determined by S&T, with input from DHS components, that are aligned with DHS priorities. These metrics consider: the impact on the customer’s mission; the ability to transition these products to the field; whether the investment positions S&T for the future; whether the projects are aligned with customer requirements; whether S&T has the appropriate level of customer interaction; and whether S&T is sufficiently innovative in the way it is approaching its challenges. We are currently reviewing DHS and S&T’s processes for prioritizing, coordinating, and measuring the results of its R&D efforts for the Senate Committee on Homeland Security and Governmental Affairs, and we will report on this issue next year. Our prior work related to R&D at other federal agencies could provide insight for S&T as it moves forward with new structures and processes operating within potential fiscal constraints. During the 1990s, we issued a series of reports on federal efforts to restructure R&D in the wake of changing priorities and efforts to balance the federal budget. More recently, we have issued reports on R&D issues at the Department of Defense (DOD), the Department of Energy (DOE), the Environmental Protection Agency (EPA), and DHS. Although the specific recommendations and issues vary from department to department, there are key findings across this body of work that could potentially help inform S&T’s efforts to meet DHS’s R&D needs, as well as Congressional oversight of these activities. 
Since our assessment of R&D efforts at DHS is currently under way, we have not determined the extent to which these key findings from our prior work are applicable to DHS’s R&D efforts or the extent to which DHS already has similar efforts under way. However, our prior work could provide valuable insights into how DHS could leverage the private sector to help conduct R&D, restructure R&D efforts in response to fiscal constraints, and develop comprehensive strategies to mitigate the risk of duplication and overlap. For example: We reported on federal agencies that have restructured their research and development efforts in response to fiscal constraints. For example, in January 1998, we reported on efforts by federal agencies, such as DOD, the DOE National Laboratories, and NASA, to streamline their R&D activities and infrastructure. We reported that restructuring research, development, testing and evaluation to meet current and future needs required interagency agreements and cross- agency efforts, in addition to ongoing individual efforts. Additionally, we reported on five elements that were useful in the successful restructuring of R&D in corporate and foreign government organizations. For example, we found that successful restructuring of R&D activities included having a core mission that supports overall goals and strategies, clear definitions of those responsible for supporting that mission, and accurate data on total costs of the organization’s activities. In addition, we have reported that comprehensive strategies mitigate risk of duplication and overlap. 
For example, we reported in March 2011 that DOD did not have a comprehensive approach to manage and oversee the breadth of its activities for developing new capabilities in response to urgent warfighter needs, including entities engaged in experimentation and rapid prototyping to accelerate the transition of technologies to the warfighter, and lacked visibility over the full range of its efforts. As a result, we recommended that DOD issue guidance that defined roles, responsibilities, and authorities across the department to lead its efforts. DOD agreed with this recommendation. Within DHS itself, we reported in May 2004 that DHS did not have a strategic plan to guide its R&D efforts. We recommended that DHS complete a strategic R&D plan and ensure that the plan was integrated with homeland security R&D conducted by other federal agencies. We also recommended that DHS develop criteria for distributing annual funding and for making long-term investments in laboratory capabilities, as well as develop guidelines that detailed how DOE’s laboratories would compete for funding with private sector and academic entities. DHS agreed with our recommendations. While S&T developed a 5-year R&D plan in 2008 to guide its efforts and is currently finalizing a new strategic plan to align its own R&D investments and goals, DHS has not yet completed a strategic plan to align all R&D efforts across the department, as we previously recommended. Our work on DOE National Laboratories provides additional insights related to oversight of R&D efforts that could be useful for DHS S&T. In 1995, we reported that DOE’s national laboratories did not have clearly defined missions focused on accomplishing DOE’s changing objectives and national priorities. DOE, at that time, managed the national laboratories on a program-by-program basis, which inhibited cooperation across programs and hindered DOE’s ability to use the laboratories to meet departmental missions. 
We recommended, among other things, that DOE develop a strategy that maximized the laboratories’ resources. In responding, DOE said that it had undertaken a new strategic planning process which resulted in a strategic plan. Though DOE developed a strategic plan intended to integrate its missions and programs, in 1998 we reported that the laboratories did not function as an integrated national research and development system and recommended that DOE develop a comprehensive strategy to be used to assess success in meeting objectives, monitor progress, and report on that progress. DOE acknowledged that it needed to better focus the laboratories’ missions and tie them to the annual budget process, but that it would take time to accomplish. More recently, we reported in June 2009 that DOE could not determine the effectiveness of its laboratories’ technology transfer efforts because it had not yet defined its overarching strategic goals for technology transfer and lacked reliable performance data. Instead, individual DOE programs such as the National Nuclear Security Administration and DOE’s Office of Science articulated their own goals for technology transfer at the national laboratories. We recommended, among other things, that DOE articulate department-wide priorities and develop clear goals, objectives, and performance measures. DOE generally agreed with our findings. Lastly, our work on Environmental Protection Agency (EPA) laboratory facilities also offers insights into the importance of planning and coordination in managing R&D. Specifically, we reported in July 2011 that EPA has yet to fully address the findings of numerous past studies that have examined EPA’s science activities. These past evaluations noted the need for EPA to improve long-term planning, priority setting, and coordination of laboratory activities, establish leadership for agency-wide scientific oversight and decision making, and better manage the laboratories’ workforce and infrastructure. 
We recommended, among other things, that EPA develop a coordinated planning process for its scientific activities and appoint a top-level official with authority over all the laboratories, improve physical and real property planning decisions, and develop a workforce planning process for all laboratories that reflects current and future needs of laboratory facilities. EPA generally agreed with our findings and recommendations. Chairman Lungren, Ranking Member Clarke, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Chris Currie, Assistant Director; Emily Gunn; and Margaret McKenna. Key contributors for the previous work that this testimony is based on are listed within each individual product. Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011. Environmental Protection Agency: To Better Fulfill Its Mission, EPA Needs a More Coordinated Approach to Managing Its Laboratories. GAO-11-347. Washington, D.C.: July 25, 2011. DHS Science and Technology: Additional Steps Needed to Ensure Test and Evaluation Requirements Are Met. GAO-11-596. Washington, D.C.: June 15, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: Mar. 2011. Homeland Security: Improvements in Managing Research and Development Could Help Reduce Inefficiencies and Costs. GAO-11-464T. Washington, D.C.: Mar. 15, 2011. Warfighter Support: DOD’s Urgent Needs Processes Need a More Comprehensive Approach and Evaluation for Potential Consolidation. GAO-11-273. 
Washington, D.C.: Mar. 1, 2011. Technology Transfer: Clearer Priorities and Greater Use of Innovative Approaches Could Increase the Effectiveness of Technology Transfer at Department of Energy Laboratories. GAO-09-548. Washington, D.C.: June 16, 2009. Homeland Security: DHS Needs a Strategy to Use DOE’s Laboratories for Research on Nuclear, Biological, and Chemical Detection and Response Technologies. GAO-04-653. Washington, D.C.: May 24, 2004. Department of Energy: Uncertain Progress in Implementing National Laboratory Reforms. GAO/RCED-98-197. Washington, D.C.: Sept. 10, 1998. Best Practices: Elements Critical to Successfully Reducing Unneeded RDT&E Infrastructure. GAO/NSIAD/RCED-98-23. Washington, D.C.: Jan. 8, 1998. Department of Energy: National Laboratories Need Clearer Missions and Better Management. GAO/RCED-95-10. Washington, D.C.: Jan. 27, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses our prior work examining the Department of Homeland Security's (DHS) Science and Technology Directorate (S&T) and Research and Development (R&D) efforts. The Homeland Security Act of 2002 created DHS and, within it, established S&T with the responsibility for conducting national research, development, test and evaluation (T&E) of technology and systems for, among other things, detecting, preventing, protecting against, and responding to terrorist attacks. 
Since its creation in 2003, DHS, through both S&T and its components, has spent billions of dollars researching and developing technologies used to support a wide range of missions including securing the border, detecting nuclear devices, and screening airline passengers and baggage for explosives, among others. S&T has a wide-ranging mission, which includes conducting basic and applied research of technologies, and overseeing the testing and evaluation of component acquisitions and technologies to ensure that they meet DHS acquisition requirements before implementation in the field. In recent years, we have reported that DHS has experienced challenges in managing its multibillion-dollar technology development and acquisition efforts, including implementing technologies that did not meet intended requirements and were not appropriately tested and evaluated. These problems highlight the important role that S&T plays in overseeing DHS testing and evaluation. S&T has reorganized to better achieve its goals and provide better assistance to DHS components in developing technologies. In addition to the challenge of implementing its varied mission, S&T is also managing a decline in available R&D resources. S&T's fiscal year 2011 appropriation decreased 20 percent from fiscal year 2010 and, while its fiscal year 2012 appropriation has not yet been enacted, both the House and Senate marks for the agency are lower than what was appropriated in fiscal year 2011. As a result, S&T has had to adjust resources and re-prioritize its efforts. In the past, we have reported on issues related to the transformation and reorganization of R&D efforts in the federal government, particularly related to shifting of priorities and managing a reduction in resources. In addition, we identified DHS R&D as an area for potential costs savings in our March 2011 report regarding opportunities to reduce potential duplication in government programs, save tax dollars, and enhance revenue. 
Specifically, we reported that DHS could take further actions to improve its management of R&D and reduce costs by ensuring that testing efforts are completed before making acquisition decisions and cost-benefit analyses are conducted to reduce R&D inefficiencies and costs. The testimony today focuses on the key findings from our prior work related to S&T's test and evaluation efforts, S&T's recent reorganization efforts, and key findings from our past work related to federal R&D. Specifically, this statement will address: (1) the extent to which S&T oversees T&E of major DHS acquisitions and what challenges, if any, S&T officials report facing in overseeing T&E across DHS; and (2) S&T's recent reorganization efforts and how key findings from our prior work on R&D in the federal government can inform how S&T moves forward. This statement is based on reports and testimonies we issued from March 1995 to July 2011 related to DHS's efforts to manage, test, and deploy various technology programs; transformation of federal R&D; and selected updates conducted from July 2011 to the present related to S&T's reorganization efforts. In June 2011, we reported that S&T met some of its oversight requirements for T&E of acquisition programs we reviewed, but additional steps were needed to ensure that all requirements were met. Specifically, since DHS issued the T&E directive in May 2009, S&T reviewed or approved T&E documents and plans for programs undergoing testing, and conducted independent assessments for the programs that completed operational testing during this time period. S&T officials told us that they also provided input and reviewed other T&E documentation, such as components' documents describing the programs' performance requirements, as required by the T&E directive. DHS senior level officials considered S&T's T&E assessments and input in deciding whether programs were ready to proceed to the next acquisition phase. 
However, S&T did not consistently document its review and approval of components' test agents--a government entity or independent contractor carrying out independent operational testing for a major acquisition--or document its review of other component acquisition documents, such as those establishing programs' operational requirements, as required by the T&E directive. We also reported in June 2011 that S&T and DHS component officials stated that they face challenges in overseeing T&E across DHS components which fell into 4 categories: (1) ensuring that a program's operational requirements--the key performance requirements that must be met for a program to achieve its intended goals--can be effectively tested; (2) working with DHS component program staff who have limited T&E expertise and experience; (3) using existing T&E directives and guidance to oversee complex information technology acquisitions; and (4) ensuring that components allow sufficient time for T&E while remaining within program cost and schedule estimates. Since 2009, S&T has undertaken a series of efforts related to its organizational structure. S&T underwent a new strategic planning process, developed new strategic goals, and conducted a reorganization intended to better achieve its strategic goals. These efforts were implemented after a 2009 National Academy of Public Administration study found that S&T's organizational structure posed communication challenges across the agency and that the agency lacked a cohesive strategic plan and mechanisms to assess performance in a systematic way, among other things. In August 2010, S&T reorganized to align its structure with its top strategic goals, allow for easier interaction among senior leadership, and reduce the number of personnel directly reporting to the Under Secretary of S&T. |
According to CBP, the ease and speed with which a cross-border violator can travel to the border, cross the border, and leave the location of the crossing, are critical factors in determining whether an area of the border is vulnerable. We identified state roads close to the border that appeared to be unmanned and unmonitored, allowing us to simulate the cross-border movement of radioactive materials or other contraband from Canada into the United States. We also located several ports of entry that had posted daytime hours and which, although monitored, were unmanned overnight. Investigators observed that surveillance equipment was in operation but that the only observable preventive measure to stop a cross-border violator from entering the United States was a barrier across the road that could be driven around. CBP provided us with records that confirmed our observations, indicating that on one occasion a cross-border violator drove around this type of barrier to illegally enter the United States. The violator was later caught by state law enforcement officers and arrested by the U.S. Border Patrol. We found state roads close to the U.S.–Canada border in several states. Many of the roads we found appeared to be unmanned and unmonitored, allowing us to simulate the cross-border movement of radioactive materials or other contraband from Canada into the United States. On October 31, 2006, our investigators positioned themselves on opposite sides of the U.S.–Canada border in an unmanned location. Our investigators selected this location because roads on either side of the border would allow them to quickly and easily exchange simulated contraband. After receiving a signal via cell phone, the investigator in Canada left his vehicle and walked approximately 25 feet to the border carrying a red duffel bag. While investigators on the U.S. 
side took photographs and made a digital video recording, the individual with the duffel bag proceeded the remaining 50 feet, transferred the duffel bag to the investigators on the U.S. side, and returned to his vehicle on the Canadian side (see fig. 1). The setup and exchange lasted approximately 10 minutes, during which time the investigators were in view of residents on both the Canadian and U.S. sides of the border. According to CBP records of this incident, an alert citizen notified the U.S. Border Patrol about the suspicious activities of our investigators. The U.S. Border Patrol subsequently attempted to search for a vehicle matching the description of the rental vehicle our investigators used. However, the U.S. Border Patrol was not able to locate the investigators with the duffel bag, even though they had parked nearby to observe traffic passing through the port of entry. Investigators identified over a half dozen locations in this area where state roads ended at the U.S.–Canada border. Although investigators took pictures of the border area, they did not attempt to cross the border because of private property concerns. There was no visible U.S. Border Patrol response to our activities and no visible electronic monitoring equipment. CBP told us that the activities of our investigators would not be grounds for a formal investigation. Still, according to CBP records, criminals are aware of vulnerabilities in this area and have taken advantage of the access provided by roads close to the border. For example, appendix I details an incident on January 25, 2007, in which an alert citizen notified CBP about suspicious activities on the citizen’s property, leading to the arrest of several cross-border violators. On November 15, 2006, our investigators visited an area in this state where state roads ended at the U.S.–Canada border. 
One of our investigators simulated the cross-border movement of radioactive materials or other contraband by crossing the border north into Canada and then returning to the United States (see fig. 2). There did not appear to be any monitoring or intrusion alarm system in place at this location, and there was no U.S. Border Patrol response to our border crossing. On December 5, 2006, our investigators traveled along a road parallel to the U.S.–Canada border. This road is so close to the border that jumping over a ditch on the southern side of the road allows an individual to stand in the United States. While driving the length of this road on the Canadian side, our investigators noticed cameras placed at strategic locations on the U.S. side of the border. They also observed U.S. Border Patrol vehicles parked at different locations along the border. At a location that appeared to be unmanned and unmonitored, one investigator left the vehicle carrying a red duffel bag. He crossed the ditch and walked into the United States for several hundred feet before returning to the vehicle. Our investigators stayed in this location for about 15 minutes, but there was no observed response from law enforcement. At two other locations, investigators crossed into the United States to find out whether their presence would be detected. In all cases, there was no observed response from law enforcement. We identified several ports of entry with posted daytime hours in a state on the northern border. During the daytime these ports of entry are staffed by CBP officers. During the night, CBP told us that it relies on surveillance systems to monitor, respond to, and attempt to interdict illegal border crossing activity. On November 14, 2006, at about 11:00 p.m., our investigators arrived on the U.S. side of one port of entry that had closed for the night. 
Investigators observed that surveillance equipment was in operation but that the only visible preventive measure to stop an individual from entering the United States was a barrier across the road that could be driven around. Investigators stayed at the port of entry for approximately 12 minutes to see whether the U.S. Border Patrol would respond. During this time, the investigators walked around the port of entry area and took photographs. When the U.S. Border Patrol did not arrive at the port of entry, our investigators returned south, only to have a U.S. Border Patrol agent pull them over 3 miles south of the port of entry. When questioned by the U.S. Border Patrol agent, our investigators indicated that they were federal investigators testing security procedures at the U.S. border. The agent did not ask for identification from our investigators and glanced only briefly at the badge and commission book the driver offered for inspection. In addition, he did not attempt to search the vehicle, ask what agency our investigators worked for, or record their names. According to DHS, the agent acted in a manner consistent with operational protocol because he was satisfied with the credentials presented to him and did not have probable cause to search the vehicle. CBP provided us with records concerning this incident. According to the records, the agent was dispatched because of the suspicious activities of our investigators in front of the port of entry camera. The records indicated that after this incident, CBP staff researched the incident fully to determine whether our investigators posed a threat. By performing an Internet search on the name of the investigator who rented the vehicle, CBP linked the investigators to GAO. CBP also provided us with records that confirmed our observations about the barrier at this port of entry, indicating that on one occasion a cross-border violator drove around this type of barrier to illegally enter the United States. 
The violator was later caught by state law enforcement officers and arrested by the U.S. Border Patrol. Safety considerations prevented our investigators from performing the same assessment work on the U.S.–Mexico border as performed on the northern border. In contrast to our observations on the northern border, our investigators observed a large law enforcement and Army National Guard presence near a state road on the southern border, including unmanned aerial vehicles. However, our limited security assessment also identified potential security vulnerabilities on federally managed lands adjacent to the U.S.–Mexico border. These areas did not appear to be monitored or have a noticeable law enforcement presence during the time our investigators visited the sites. Although CBP is ultimately responsible for protecting these areas, officials told us that certain legal, environmental, and cultural considerations limit options for enforcement. On October 17, 2006, two of our investigators left a main U.S. route about a quarter mile from a U.S.–Mexico port of entry. Traveling on a dirt road that parallels the border, our investigators used a GPS system to get as close to the border as possible. Our investigators passed U.S. Border Patrol agents and U.S. Army National Guard units. In addition, our investigators spotted unmanned aerial vehicles and a helicopter flying parallel to the border. At the point where the dirt road ran closest to the U.S.–Mexico border, our investigators spotted additional U.S. Border Patrol vehicles parked in a covered position. About three-fourths of a mile from these vehicles, our investigators pulled off the road. One investigator exited the vehicle and proceeded on foot through several gulches and gullies toward the Mexican border. His intent was to find out whether he would be questioned by law enforcement agents about his activities. He returned to the vehicle after 15 minutes, at which time our investigators returned to the main road. 
Our investigators did not observe any public traffic on this road for the 1 hour that they were in the area, but none of the law enforcement units attempted to stop our investigators and find out what they were doing. According to CBP, because our investigators did not approach from the direction of Mexico, there would be no expectation for law enforcement units to question these activities. (See fig. 3.) Investigators identified potential security vulnerabilities on federally managed land adjacent to the U.S.–Mexico border. These areas did not appear to be monitored or have a manned CBP presence during the time our investigators visited the sites. Investigators learned that a memorandum of understanding exists between DHS (of which CBP is a component), Interior, and USDA regarding the protection of federal lands adjacent to U.S. borders. Although CBP is ultimately responsible for protecting these areas, officials told us that certain legal, environmental, and cultural considerations limit options for enforcement—for example, environmental restrictions and tribal sovereignty rights. On January 9, 2007, our investigators entered federally managed land adjacent to the U.S.–Mexico border. The investigators had identified a road running parallel to the border in this area. Our investigators were informed by an employee of a visitor center that because the U.S. government was building a fence, the road was closed to the public. However, our investigators proceeded to the road and found that it was not physically closed. While driving west along this road, our investigators did not observe any surveillance cameras or law enforcement vehicles. A 4-foot-high fence (appropriate to prevent the movement of a vehicle rather than a person) stood at the location of the border. Our investigators pulled over to the side of the road at one location. 
To determine whether he would activate any intrusion alarm systems, one investigator stepped over the fence, entered Mexico, and returned to the United States. The investigators remained in the location for approximately 15 minutes but there was no observed law enforcement response to their activities. On January 23, 2007, our investigators arrived on federally managed lands adjacent to the U.S.–Mexico border. In this area, the Rio Grande River forms the southern border between the United States and Mexico. After driving off-road in a 4x4 vehicle to the banks of the Rio Grande, our investigators observed, in two locations, evidence that frequent border crossings took place. In one location, the investigators observed well-worn footpaths and tire tracks on the Mexican side of the river. At another location, a boat ramp on the U.S. side of the Rio Grande was mirrored by a boat ramp on the Mexican side. Access to the boat ramp on the Mexican side of the border had well-worn footpaths and vehicle tracks (see fig. 4). An individual who worked in this area told our investigators that at several times during the year, the water is so low that the river can easily be crossed on foot. Our investigators were in this area for 1 hour and 30 minutes and observed no surveillance equipment, intrusion alarm systems, or law enforcement presence. Our investigators were not challenged regarding their activities. According to CBP officials, in some locations on federally managed lands, social and cultural issues lead the U.S. Border Patrol to defer to local police in providing protection. This sensitivity to social and cultural issues appears to be confirmed by the provisions of the memorandum of understanding between DHS, Interior, and USDA. On February 23, 2007, we met with CBP officials to discuss the results of our investigation. CBP officials clarified their approach to law enforcement in unmanned and unmonitored areas at the northern and southern U.S. 
borders, including an explanation of jurisdictional issues on federally managed lands. CBP indicated that resource restrictions prevent U.S. Border Patrol agents from investigating all instances of suspicious activity. They added that the northern border presents more of a challenge than the southern border and that many antiquated ports of entry exist. Our visits to the northern border show that CBP faces significant challenges in effectively monitoring the border and preventing undetected entry into the United States. Our work shows that a determined cross-border violator would likely be able to bring radioactive materials or other contraband undetected into the United States by crossing the U.S.–Canada border at any of the locations we investigated. CBP records indicate that it does successfully stop many individuals from crossing the border illegally, but our own observations and experiences (along with CBP’s acknowledgment of existing challenges) lead us to conclude that more human capital and technological capabilities are needed to effectively protect the northern border. Our observations on the southern border showed a significant disparity between the large law enforcement presence on state lands in one state and what seemed to be a lack of law enforcement presence on federally managed lands. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This appendix details four cases where Customs and Border Protection (CBP) apprehended individuals who were engaged in suspicious activities on the northern and southern borders. According to CBP, U.S. Border Patrol agents followed proper protocols in responding to these incidents.
We are summarizing these case studies—which CBP provided to us—to further illustrate challenges the U.S. Border Patrol faces. At about 3:20 a.m. on June 24, 2006, electronic surveillance equipment observed a vehicle arrive at the port of entry gate from the direction of Canada. The suspect got out of the vehicle and, after inspecting the area around the gate, returned to the vehicle and drove around the gate into the United States. U.S. Border Patrol agents were notified, along with state law enforcement. The state officer identified and stopped the vehicle while the U.S. Border Patrol agents were en route. U.S. Border Patrol agents arrived and arrested the suspect. The suspect was identified as a citizen of Albania and admitted to driving around the port of entry gate. The suspect had applied for asylum in the United States and been denied in 2001, at which point he had moved to Canada. Attempts to return the suspect to Canada failed, as he had no legal status in Canada. The suspect was held in jail pending removal proceedings. At about 6:00 p.m. on January 25, 2007, the U.S. Border Patrol was notified of suspicious activity on the U.S.–Canada border. U.S. residents on the border had observed a vehicle dropping off several individuals near their home. A U.S. Border Patrol agent proceeded to the area where residents had observed the suspicious activity. Once there, the agent followed footprints in the snow and discovered two suspects hiding among a stand of pine trees. The suspects were Colombian nationals, one male and one female. They indicated that a man was going to pick them up on the Canadian side of the border, and that a friend had driven them to the agreed-upon location on the U.S. side. Cell phone numbers retrieved from the suspect’s phone linked him to phone numbers belonging to a known alien smuggler in the area. The suspects said they intended to seek political asylum in Canada. They were sent to a detention facility after their arrest.
On February 10, 2007, at about 2:00 a.m., U.S. Border Patrol surveillance equipment detected six suspects entering the United States from Canada. The suspects were walking south along railroad tracks. After a short foot chase, U.S. Border Patrol agents apprehended all six suspects—two individuals who were believed to be smugglers and a family of four. All the suspects were citizens of South Korea. According to interviews with the suspects, after the family arrived in Canada they were approached by an individual who said he could take them to the United States. He brought the family to a desolate area and introduced them to a male and a female, who they were to follow across the border. The individual then instructed the family to leave their luggage in the car and said that he would pick all six of them up in the United States. The wife and two children in the family were released for humanitarian reasons after apprehension, and the male was placed in detention. On May 3, 2007, at about 1:20 a.m., an alert citizen reported a possible illegal alien near the U.S.–Mexico border. The responding U.S. Border Patrol agent located the individual, who claimed to be a citizen of Uruguay. He said that he had used a variety of transportation means, including airplanes and buses, to travel from Uruguay to a Mexican city just south of the U.S. border. The individual claimed to have crossed the border by foot along with four other individuals. He then walked for 4 days through the desert. When he became dehydrated, he sought help at a nearby U.S. town. As a result, he was spotted by the alert citizen who notified the U.S. Border Patrol. The individual was scheduled to be removed from the country but requested a hearing before an immigration judge. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The possibility that terrorists and criminals might exploit border vulnerabilities and enter the United States poses a serious security risk, especially if they were to bring radioactive material or other contraband with them. Although Customs and Border Protection (CBP) has taken steps to secure the 170 ports of entry on the northern and southern U.S. borders, Congress is concerned that unmanned and unmonitored areas between these ports of entry may be vulnerable. In unmanned locations, CBP relies on surveillance cameras, unmanned aerial drones, and other technology to monitor for illegal border activity. In unmonitored locations, CBP does not have this equipment in place and must rely on alert citizens or other information sources to meet its obligation to protect the border. Today's testimony will address what GAO investigators found during a limited security assessment of seven border areas that were unmanned, unmonitored, or both--four at the U.S.-Canada border and three at the U.S.-Mexico border. In three of the four locations on the U.S.-Canada border, investigators carried a duffel bag across the border to simulate the cross-border movement of radioactive materials or other contraband. Safety considerations prevented GAO investigators from attempting to cross north into the United States from a starting point in Mexico. On the U.S.-Canada border, GAO found state roads close to the border that CBP did not appear to man or monitor. In some of these locations, the proximity of the road to the border allowed investigators to cross without being challenged by law enforcement, successfully simulating the cross-border movement of radioactive materials or other contraband into the United States from Canada. In one location on the northern border, the U.S.
Border Patrol was alerted to GAO activities through the tip of an alert citizen. However, the responding U.S. Border Patrol agents were not able to locate GAO investigators. Also on the northern border, GAO investigators located several ports of entry that had posted daytime hours and were unmanned overnight. On the southern border, investigators observed a large law enforcement and Army National Guard presence on a state road, including unmanned aerial vehicles. Also, GAO identified federally managed lands that were adjacent to the U.S.-Mexico border. These areas did not appear to be monitored or did not have an observable law enforcement presence, which contrasted sharply with GAO observations on the state road. Although CBP is ultimately responsible for protecting federal lands adjacent to the border, CBP officials told GAO that certain legal, environmental, and cultural considerations limit options for enforcement--for example, environmental restrictions and tribal sovereignty rights.
Over the last 15 years, the federal government’s increasing demand for IT has led to a dramatic rise in operational costs to develop, implement, and maintain systems and services. Annually, the federal government spends more than $80 billion on IT. While the use of IT has the potential to greatly improve service for federal employees and American taxpayers, it has also led to federal agencies’ reliance on custom IT systems that can—and have—become risky, costly, and unproductive mistakes. As part of a comprehensive effort to increase the operational efficiency of federal IT systems and deliver greater value to taxpayers, federal agencies are being required by OMB to shift their IT services to a cloud computing option when feasible. According to NIST, cloud computing is a means “for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.” NIST also states that an application should possess five essential characteristics to be considered cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Essentially, cloud computing applications are network-based and scalable on demand. According to OMB, cloud computing offers these benefits to federal agencies:

Economical: cloud computing is a pay-as-you-go approach to IT, in which a low initial investment is required to begin, and additional investment is needed only as system use increases.

Flexible: IT departments that anticipate fluctuations in user demand no longer need to scramble for additional hardware and software. With cloud computing, they can add or subtract capacity quickly and easily.

Fast: cloud computing eliminates long procurement and certification processes, while providing a near-limitless selection of services.
According to NIST, cloud computing offers three service models:

Infrastructure as a service—the service provider delivers and manages the basic computing infrastructure of servers, software, storage, and network equipment on which a platform (i.e., operating system and programming tools and services) to develop and execute applications can be developed by the consumer.

Platform as a service—the service provider delivers and manages the underlying infrastructure (i.e., servers, software, storage, and network equipment), as well as the platform (i.e., operating system, and programming tools and services) on which the consumer can create applications using programming tools supported by the service provider or other sources.

Software as a service—the service provider delivers one or more applications and the computational resources and underlying infrastructure to run them for use on demand as a turnkey service.

As can be seen in figure 1, each service model offers unique functionality, with consumer control of the environment decreasing from infrastructure to platform to software. NIST has also defined four deployment models for providing cloud services: private, community, public, and hybrid. In a private cloud, the service is set up specifically for one organization, although there may be multiple customers within that organization and the cloud may exist on or off the customer’s premises. In a community cloud, the service is set up for organizations with similar requirements. The cloud may be managed by the organizations or a third party and may exist on or off the organization’s premises. A public cloud is available to the general public and is owned and operated by the service provider. A hybrid cloud is a composite of two or more of the above deployment models (private, community, or public) that are bound together by standardized or proprietary technology that enables data and application portability.
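The division of responsibility across the three service models can be sketched as a small data structure. This encoding (the layer names, the "provider"/"consumer" labels, and the helper function) is our own illustration of the NIST definitions summarized above, not anything defined by NIST or this report:

```python
# Illustrative sketch: who manages each layer of the stack under the three
# NIST service models. Model and layer names follow the definitions above;
# the dictionary encoding is an assumption of this example.

LAYERS = ["infrastructure", "platform", "application"]

SERVICE_MODELS = {
    # IaaS: provider manages only the base infrastructure.
    "IaaS": {"infrastructure": "provider", "platform": "consumer", "application": "consumer"},
    # PaaS: provider also manages the platform layer.
    "PaaS": {"infrastructure": "provider", "platform": "provider", "application": "consumer"},
    # SaaS: provider manages the full stack as a turnkey service.
    "SaaS": {"infrastructure": "provider", "platform": "provider", "application": "provider"},
}

def consumer_controlled_layers(model: str) -> list[str]:
    """Return the layers the consumer still controls under a given model."""
    return [layer for layer in LAYERS if SERVICE_MODELS[model][layer] == "consumer"]

for model in SERVICE_MODELS:
    # Consumer control shrinks from IaaS (most) to SaaS (none),
    # matching the decreasing-control progression described in figure 1.
    print(model, consumer_controlled_layers(model))
```

The point of the sketch is the monotonic handoff: each successive model moves one more layer from consumer to provider, which is the "decreasing consumer control" relationship the report attributes to figure 1.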
According to federal guidance, these deployment models determine the number of consumers and the nature of other consumers’ data that may be present in a cloud environment. A public cloud should not allow a consumer to know or control other consumers of a cloud service provider’s environment. However, a private cloud can allow for ultimate control in selecting who has access to a cloud environment. Community clouds and hybrid clouds allow for a mixed degree of control and knowledge of other consumers. Additionally, the cost for cloud services typically increases as control over other consumers and knowledge of these consumers increase. According to OMB, the federal government needs to shift from building custom computer systems to adopting cloud technologies and shared services, which will improve the government’s operational efficiencies and result in substantial cost savings. To help agencies achieve these benefits, OMB required agencies to immediately shift to a “Cloud First” policy and increase their use of available cloud-based and shared services whenever a secure, reliable, and cost-effective cloud service exists. In order to accelerate the adoption of cloud computing services across the government, in December 2010, OMB made cloud computing an integral part of its 25 Point Implementation Plan to Reform Federal Information Technology Management. The plan specified six major goals: apply “light technology” and shared services; strengthen program management; align the acquisition process with the technology cycle; align the budget process with the technology cycle; streamline governance and improve accountability; and increase engagement with industry. To achieve these goals, the plan outlined 25 action items for agencies, such as completing plans to consolidate 800 data centers by 2015 and developing a governmentwide strategy to hasten the adoption of cloud computing services.
To accelerate the shift, OMB required agencies to identify, plan, and fully migrate three services to a cloud-based solution by June 2012. In February 2011, OMB issued the Federal Cloud Computing Strategy, as called for in its 25-Point Plan. The strategy provided definitions of cloud computing services; benefits of cloud services, such as accelerating data center consolidations; a decision framework for migrating services to a cloud environment; case studies to support agencies’ migration to cloud computing services; and roles and responsibilities for federal agencies. For example, the strategy states that NIST’s role is to lead and collaborate with federal, state, and local government agency chief information officers, private sector experts, and international bodies to identify standards and guidance and prioritize the adoption of cloud computing services. In a December 2011 memo, OMB established the Federal Risk and Authorization Management Program (FedRAMP), a governmentwide program to provide joint authorizations and continuous security monitoring services for cloud computing services for all federal agencies. Among other things, the memo required the General Services Administration (GSA) to issue a concept of operations, which was completed in February 2012. The concept of operations states that FedRAMP is to: ensure that cloud computing services have adequate information security; eliminate duplication of effort and reduce risk management costs; and enable rapid and cost-effective procurement of information systems/services for federal agencies. GSA initiated FedRAMP operations, which the agency referred to as initial operational capabilities, in June 2012. We have previously reported on federal agencies’ efforts to implement cloud computing services, and on progress oversight agencies have made to help federal agencies in those efforts.
These include: In May 2010, we reported on the efforts of multiple agencies to ensure the security of governmentwide cloud computing services. We noted that, while OMB, GSA, and NIST had initiated efforts to ensure secure cloud computing services, OMB had not yet finished a cloud computing strategy; GSA had begun a procurement for expanding cloud computing services, but had not yet developed specific plans for establishing a shared information security assessment and authorization process; and NIST had not yet issued cloud-specific security guidance. We recommended that OMB establish milestones to complete a strategy for federal cloud computing and ensure it addressed information security challenges. These include having a process to assess vendor compliance with government information security requirements and division of information security responsibilities between the customer and vendor. OMB agreed with our recommendations and subsequently published a strategy in February 2011 that addressed the importance of information security when using cloud computing, but it did not fully address several key challenges confronting agencies, such as the appropriate use of attestation standards for control assessments of cloud computing service providers, and division of information security-related responsibilities between customer and provider. We also recommended that GSA consider security in its procurement for cloud services, including consideration of a shared assessment and authorization process. GSA generally agreed with our recommendations and has since developed its FedRAMP program, an assessment and authorization process for systems shared among federal agencies. Finally, we recommended that NIST issue guidance specific to cloud-based computing security. NIST agreed with our recommendations and has since issued multiple publications that address such guidance. 
In April 2012, we reported that more needed to be done to implement OMB’s 25-Point Plan and measure its results. Among other things, we reported that, of the 10 key action items that we reviewed, 3 had been completed and 7 had been partially completed by December 2011. In particular, OMB and agencies’ cloud-related efforts only partially addressed requirements. Specifically, agencies’ plans were missing key elements, such as a discussion of needed resources, a migration schedule, and plans for retiring legacy systems. As a result, we recommended, among other things, that the Secretaries of Homeland Security and Veterans Affairs, and the Attorney General direct their respective CIOs to complete elements missing from the agencies’ plans for migrating services to a cloud computing environment. Officials from each of the agencies generally agreed with our recommendations and have taken steps to implement them. In July 2012, we reported that the seven federal agencies we reviewed had made progress in meeting OMB’s requirement to implement three cloud computing services by June 2012. Specifically, the seven agencies had implemented 21 cloud computing services and spent a total of $307 million for cloud computing in fiscal year 2012, about 1 percent of their total IT budgets. However, two agencies reported that they did not have plans to meet OMB’s deadline to implement three services by June 2012, but would do so by calendar year’s end. Agencies also shared seven common challenges that they experienced in moving services to cloud computing. The seven challenges included:

Meeting federal security requirements: Cloud service vendors may not be familiar with security requirements that are unique to government agencies, such as continuous monitoring and maintaining an inventory of systems.

Obtaining guidance: Existing federal guidance for using cloud services may be insufficient or incomplete. Agencies cited a number of areas where additional guidance is needed, such as purchasing commodity IT and assessing Federal Information Security Management Act security levels.

Acquiring knowledge and expertise: Agencies may not have the necessary tools or resources, such as expertise among staff, to implement cloud services.

Certifying and accrediting vendors: Agencies may not have a mechanism for certifying that vendors meet standards for security, in part because FedRAMP had not yet been made operational (i.e., reached initial operating capabilities).

Ensuring data portability and interoperability: To preserve their ability to change vendors in the future, agencies may attempt to avoid platforms or technologies that “lock” customers into a particular product.

Overcoming cultural barriers: Agency culture may act as an obstacle to implementing cloud services.

Procuring services on a consumption (on-demand) basis: Because of the on-demand, scalable nature of cloud services, it can be difficult to define specific quantities and costs. These uncertainties make contracting and budgeting difficult due to the fluctuating costs associated with scalable and incremental cloud service procurements.

While each of the seven agencies had submitted plans to OMB for implementing their cloud services, a majority of the plans were missing required elements. Agencies have also identified opportunities for future cloud service implementations, such as moving storage and help desk services to a cloud environment. We made recommendations to seven agencies to develop planning information, such as estimated costs and legacy IT systems’ retirement plans for existing and planned services. The agencies generally agreed with our recommendations and have taken actions to implement them. OMB’s Cloud First policy requires federal agencies to implement cloud computing services whenever a secure, reliable, and cost-effective cloud option exists.
In July 2012, we found that all of the seven agencies had, among other things, identified at least three services to implement in a cloud environment, and all but two had implemented three cloud computing services. Since then, the agencies have added more cloud computing services. In total, the number of cloud computing services implemented increased from 21 to 101, an increase of 80 services. Table 1 lists the number of cloud computing services implemented by each agency in 2012 and 2014. While the number of cloud computing services increased by 80 since 2012, the number implemented by each agency during that time varied. For example, since 2012, HHS had implemented 33 such services, while SBA, State, and Treasury had implemented 3, 11, and 2, respectively. (A brief description of the 101 cloud computing services implemented by each agency is included in app. II.) These agencies also increased the amount they budgeted and reported spending on cloud computing services. Specifically, the seven agencies, collectively, reported their spending increased by $222 million, from $307 million to $529 million. Table 2 shows the amount each agency (1) reported spending on cloud computing services in fiscal year 2012 and (2) planned to spend in fiscal year 2014. Although collectively the amount budgeted in 2014 by the agencies ($529 million) is an overall increase of 72 percent, the amounts and percentages varied significantly across the agencies. For example, USDA increased its planned cloud spending by $71 million (a 394 percent change over 2012), while Treasury budgeted $37 million more (a 22 percent increase over 2012). Further, the agencies also increased the collective percentage of their IT budgets allocated to cloud computing services. Specifically, as shown in table 3, the agencies collectively doubled the percentage of their IT budgets from 1 to 2 percent during the fiscal year 2012–14 period. However, on an individual agency basis, the percentage increase varied.
For example, GSA increased the percentage of its IT budget allocated to cloud computing from 2 to 5 percent while DHS increased its allocation from 1 to 2 percent. Even though the agencies collectively and individually increased the percentage of their IT budgets allocated to cloud services, our analysis showed that the agencies are still devoting a large portion of their IT budgets to non-cloud computing expenditures. Specifically, as shown in table 3, the agencies in 2014 were collectively budgeting 2 percent of their IT budgets to cloud services, while the remaining 98 percent were dedicated to non-cloud expenditures. Officials from OMB and the agencies attributed the agencies’ varying degrees of progress, in part, to the following: in implementing its Cloud First policy, OMB has granted the agencies discretion in determining whether and which services are to be migrated. Specifically, officials from OMB’s Office of E-Government & Information Technology told us that they had initially established goals for the agencies but subsequently granted them the latitude to annually assess all their investments and identify those investments that were appropriate for their agency to migrate. As a result, some agencies have been more aggressive than others in moving services to the cloud. In addition, the agencies’ relatively low percentage of budget allocated to cloud spending is due in large part to the fact that the agencies have not assessed a majority of their investments, although OMB’s guidance calls for agencies to assess all their IT services for migration to the cloud irrespective of where each investment is in its life cycle. Specifically, as shown in table 4, the agencies collectively had not assessed about 67 percent of their 2,000 investments.
Table 4 provides further detail on the number of investments in fiscal year 2014 that were chosen for cloud computing services; the number of investments that were evaluated for cloud computing services, but an alternative was chosen; and the number not evaluated for cloud computing services. A key reason cited by the agency officials for why most of their investments had not been evaluated for cloud services was that they were largely legacy investments in operations and maintenance; the agencies had only planned to consider cloud options when the investments were to be modernized or replaced at the end of their life cycle. Agency officials added that it was a challenge to assess and ultimately replace such legacy systems because agency personnel were often reluctant to cede direct control of mission-critical IT resources. While we recognize the cultural challenge to moving investments to the cloud (which is discussed in more detail later in this report), OMB guidance nonetheless calls for agencies to continually assess all investments irrespective of where they are in their life cycle. Agency officials were aware of the OMB guidance and said they had plans to assess their unevaluated investments in the near future. Nevertheless, the agencies for the most part were not able to provide us with specific dates for when assessments of these investments were to be performed. Establishing such milestones is an important management tool for ensuring policy outcomes—including those envisioned by OMB’s cloud policy—are achieved in an efficient and effective manner. Until the agencies assess their IT investments that have yet to be evaluated for suitability for migration to the cloud, they will not know which services are likely candidates for migration to cloud computing services, and therefore will not gain the operational efficiencies and cost savings associated with using such services.
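The collective spending figures reported earlier (an increase from $307 million to $529 million, described as a $222 million, or 72 percent, increase) can be cross-checked with simple arithmetic. This sketch is our own illustration; the only inputs taken from the report are the two dollar amounts:

```python
# Cross-check of the seven agencies' collective cloud spending figures:
# fiscal year 2012 actual vs. fiscal year 2014 planned, in $ millions.

def growth(before: float, after: float) -> tuple[float, int]:
    """Return (absolute increase, percent increase over 'before', rounded)."""
    increase = after - before
    return increase, round(100 * increase / before)

increase, pct = growth(307, 529)
print(increase, pct)  # matches the $222 million and 72 percent cited in the report
```

The same helper applied to each agency's row in table 2 would reproduce the per-agency percentages (for example, the 394 percent change cited for USDA), given that table's underlying dollar amounts.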
Agencies reported that they had cost savings from implementing 22 out of 101 cloud computing services through fiscal year 2013. Specifically, they collectively saved about $96 million by implementing these 22 services. Table 5 lists the total number of cloud computing services reported by each agency, the number of cloud services with cost savings, and the total savings. These savings included both one-time savings and life-cycle savings. For example, GSA had a one-time cost savings of $2.6 million by migrating to a cloud customer service solution, which was a less expensive alternative than upgrading its existing system. DHS had a cumulative savings of $1.2 million through fiscal year 2013 for its collaboration platform that is used to build applications and manage documents, which was implemented in fiscal year 2011. According to agency officials, two major factors explain why the remaining 79 services did not achieve savings. First, a motivation for changing to some of the cloud-based services was not to reduce spending, but to improve service. Second, in selected cases, the cloud computing service opened up a new service or provided a higher quality of service; while this provided useful benefits to the agency, the associated costs negated any savings. In addition to these cost savings, agency officials identified the following benefits from migrating systems and services to a cloud computing service:

Decreased time to deploy: A cloud computing service can be deployed more quickly, and migrating a system to a cloud allows an agency to run it for a short period of time and then shut it down, without having to develop a unique infrastructure for it.

Increased flexibility: Cloud computing services are useful for systems that have varying use throughout a year, as the cloud service can be easily scaled up when there is high demand.
Reduced IT infrastructure: Implementation of cloud computing services reduces the amount of IT infrastructure required onsite, in particular, data center resources. Officials from the agencies we reviewed cited five challenges they have experienced in implementing cloud computing services. Of the five, two challenges—meeting federal security requirements and overcoming cultural barriers—were previously identified and discussed in our 2012 report. According to the officials, meeting federal security requirements was a continuing challenge because the requirements for new services are a moving target (i.e., the requirements are regularly being updated to address new threats, vulnerabilities, and technologies, and vendors may not be able to meet them). For example, NIST recently made revisions to its cloud security requirements and agencies are still in the process of getting familiar with them. As a result, agencies fear making mistakes that could negatively impact the security of systems and data. With regard to overcoming cultural barriers, DHS officials said that shifting to a new business model from a legacy business model requires cultural change, which continues to be a challenge at the department. In addition, GSA officials said that, as they move the management of servers and software off site, a continuing challenge is getting agency staff to adapt to an operational environment where they do not have direct control and access to agency IT resources. In addition to the two challenges repeated from 2012, officials reported these three new challenges. Meeting new network infrastructure requirements. Current network infrastructure, topology (network configuration), or bandwidth (data transmission rate) is often insufficient to meet new infrastructure needs when agencies transition to cloud computing services. 
For example, officials at State said that legacy systems with a particular infrastructure designed to meet certain federal requirements will need to be reengineered to work in a multi-tiered cloud environment. USDA officials stated that they would need to consider redesigning their network topology to accommodate new cloud service bandwidth requirements and traffic streams. Having appropriate expertise for acquisition processes. Migrating legacy systems to cloud computing services requires knowledgeable acquisition staff and appropriate processes. For example, HHS officials stated that while the department has the capability to purchase cloud services, it has found post-award management to be a challenge. These officials added that, to respond to this challenge, HHS is working with its personnel as well as other stakeholders, such as GSA, to develop best practices for cloud post-award management and related acquisition activities. In addition, DHS officials said that efforts to transition from legacy systems to cloud computing services require streamlining the department’s IT services supply chain, which entails evaluating the component processes and allowing time to fully implement this transformation. Funding for implementation. Funding for the initial implementation of a cloud service can be a significant cost to agencies. For example, officials at State said that the cost of migrating an application to a cloud service poses a challenge in the current budget environment, where IT budgets are declining. In addition, GSA officials stated that initial implementation requires additional funding that has not been made available. While these challenges are formidable, OMB and GSA have provided guidance and services to help agencies address many of them. 
For example, GSA developed FedRAMP, a program that creates standardized processes for security authorizations and allows agencies to leverage those authorizations on a governmentwide basis, streamlining the certification and accreditation process. GSA also provides continuous monitoring services for cloud computing services for all federal agencies. In addition, OMB’s Federal Cloud Computing Strategy addresses how agencies can overcome redesign and implementation challenges. In particular, the strategy states that agencies should ensure that their network infrastructure can support the demand for higher bandwidth before migrating to a cloud service. The strategy also directs agencies to assess readiness for migration to a cloud service by determining the suitability of the existing legacy application and data to either migrate to the cloud service (i.e., rehost an application in a cloud environment) or be replaced by a cloud service (i.e., retire the legacy system and replace it with a commercial equivalent). Further, GSA provides services to assist agencies with procuring and acquiring cloud services. Specifically, GSA established contracts that agencies can use to obtain commodity services such as cloud infrastructure as a service and cloud e-mail; these contracts—which were established in October 2010 and September 2012, respectively—are intended to reduce the burden on agencies for the most common IT services. GSA also created working groups to support commodity service migration by developing technical requirements for shared services, reducing the analytical burden on individual government agencies. Regarding funding, the strategy recommends that agencies reevaluate their technology sourcing strategy to include consideration and application of cloud computing services as part of their budget process. Officials stated that they consult the Federal Cloud Computing Strategy to guide them in their efforts to move services to the cloud. 
For example, SBA officials said that they consult this guidance as they plan for and prepare to move services to the cloud. In addition, State officials said that they use the strategy as part of their process for considering cloud services. Since we last reported on these seven agencies, they have made varying degrees of progress in implementing cloud computing services, and in doing so, have saved money and realized other benefits. While the collective and individual agency gains in implementing such services are commendable, the seven agencies are still investing only a small fraction of their IT budgets in cloud computing. The agencies’ modest level of cloud investment is attributable in part to the large number of legacy investments—nearly two-thirds of all investments—that have yet to be considered for cloud migration. This is due in part to the agencies’ practice of not assessing these investments until they are to be replaced or modernized, which is inconsistent with OMB’s direction. Nonetheless, the large number of agency investments to be assessed provides ample opportunities for additional progress and substantial cost savings. An important step to realizing this progress and savings is ensuring these investments are assessed, which includes establishing milestones for when the assessments are to be performed. Until this is done and the investments are assessed, the agencies cannot know whether they are achieving the maximum benefits, including improved operational efficiencies and minimized costs, associated with using such services. Agencies continue to face formidable challenges as they move their IT services to the cloud. Two of the challenges—namely ensuring IT security and overcoming agency culture—have persisted since we last reported. OMB and GSA have issued guidance and established initiatives to address the challenges, which agencies can use to help mitigate any associated negative impacts. 
To help ensure continued progress in the implementation of cloud computing services, we recommend that the Secretaries of Agriculture, Health and Human Services, Homeland Security, State, and the Treasury; and the Administrators of the General Services Administration and Small Business Administration direct their respective Chief Information Officers to take the following actions: Ensure that all IT investments are assessed for suitability for migration to a cloud computing service. As part of this, establish evaluation dates for those investments identified in this report that have not been assessed for migration to the cloud. In commenting on a draft of this report, six agencies—DHS, GSA, HHS, SBA, State, and USDA—agreed with our recommendations, and one agency (Treasury) had no comments. The specific comments from each agency are as follows: DHS, in its written comments—which are reprinted in appendix III—stated that it concurred with our recommendations. The department also provided technical comments, which we have incorporated in the report as appropriate. In its written comments, GSA stated it agreed with our findings and recommendations and will take appropriate action. GSA’s comments are reprinted in appendix IV. HHS, in comments provided via e-mail from its Audit Liaison within the Office of the Assistant Secretary for Legislation, stated it concurred with our recommendations. The department also provided technical comments, which we have incorporated in the report as appropriate. SBA, in comments provided via e-mail from its Program Manager within the Office of Congressional and Legislative Affairs, stated that it concurred with our report. It also commented that, of the 29 investments that SBA did not evaluate for cloud computing (identified in table 4), only 17 could be evaluated for cloud alternatives. SBA said the other 12 investments cannot be considered for a cloud alternative, but provided no documentation to support this statement. 
USDA, in comments provided via an e-mail from its GAO Agency Liaison within the Office of the Chief Information Officer, stated that it agreed with our recommendations and is committed to implementing OMB’s Cloud First policy. State, in its written comments (which are reprinted in appendix V), noted that it had already addressed our recommendations. Specifically, the department said it developed and communicated guidance to its IT investment owners on how to implement OMB’s Cloud First policy and that all investments are currently undergoing cloud computing alternatives analyses, with the goal of having these assessments completed by the end of calendar year 2014. State also provided technical comments, which we have incorporated in the report as appropriate. Treasury, in its written comments, stated that the department had no comments on the report and appreciated our efforts in developing it. Treasury’s written comments are reprinted in appendix VI. We are sending copies of this report to interested congressional committees; the Secretaries of Agriculture, Health and Human Services, Homeland Security, State, and the Treasury; the Administrators of the General Services Administration and Small Business Administration; the Director of the Office of Management and Budget; and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
Our engagement objectives were to (1) assess selected agencies’ progress in using cloud computing services, (2) determine the extent to which selected agencies have experienced cost savings when such services have been deployed, and (3) identify any challenges selected agencies are facing as they use cloud computing. To address our first objective, we selected the same seven agencies that were selected for our 2012 review so that we could compare the progress they have made. The seven agencies selected were the Departments of Agriculture (USDA), Health and Human Services (HHS), Homeland Security (DHS), State, and the Treasury (Treasury); and the General Services Administration (GSA) and Small Business Administration (SBA). We analyzed budget and related documentation from the selected agencies, including data on the number of and funding for current cloud computing services and compared this information with our previous findings. In addition, we interviewed officials responsible for cloud services to corroborate progress. Further, we reviewed agency data on the extent to which they assessed new and ongoing investments for potential cloud computing services. We also interviewed OMB officials to understand cloud computing guidance for federal agencies. Based on the procedures described, we concluded that the data presented are sufficiently reliable for our purposes. To address our second objective, we analyzed agency data on cost savings and avoidances through fiscal year 2013 for those cloud services the agencies had implemented. We also interviewed agency officials to obtain information on other benefits the agencies had gained from adopting cloud-based services. To determine the reliability of the data on cost savings and avoidances, we analyzed agency documentation and interviewed appropriate officials to corroborate the cost savings and cost avoidances. 
We determined that the data were sufficiently reliable for the purpose of this report, which was to identify the extent to which agencies had experienced cost savings for implemented cloud services. To address the third objective, we interviewed officials from each of the selected agencies and asked them to identify challenges associated with their implementation of cloud services. We then conducted a content analysis of the information we received in order to identify and categorize common challenges, and totaled the number of times each challenge was cited by agency officials. To do so, three team analysts independently reviewed and drafted a series of challenge statements based on each agency’s records. They then worked together to resolve any discrepancies, choosing to report on challenges that were identified by two or more agencies; these common challenges are presented in the report. We also compared these challenges with the challenges that agencies reported in our 2012 review and interviewed agency officials to corroborate the challenges identified. We conducted this performance audit from December 2013 through August 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Brief Description of Cloud Computing Services Implemented by Seven Agencies Reviewed (as of July 2014) This appendix lists the cloud computing services implemented by the seven agencies we reviewed; namely, the Departments of Agriculture (USDA), Health and Human Services (HHS), Homeland Security (DHS), State, and the Treasury (Treasury); and the General Services Administration (GSA) and Small Business Administration (SBA). It also includes, as reported by the agencies, a description of each service to be provided, the service model, the deployment model, and whether the cloud service is approved/compliant with the Federal Risk and Authorization Management Program (FedRAMP). E-mail as a Service: Provide the infrastructure for the department’s enterprisewide e-mail and calendar functions. Enable DHS users to have virtual access to department desktop operating systems and applications anywhere in the world. Support DHS’s secure information technology (IT) development, test, and preproduction environment that mirrors the production environment. It also is to provide access to a suite of application life-cycle management and automated testing services. Ensure that DHS’s public-facing websites are always available. DHS adopted this service to protect against denial of service attacks, help manage surge requirements, and reduce hosting costs. Provide a secure platform that is to be used by DHS to build applications and manage documents. It also is to provide file management, user collaboration, and workflow routing capabilities. Enable DHS with rapid provisioning in a secure virtual operating environment and furnish hosting for applications and services, including operating systems, network, and storage. Provide components with access to a website that allows for the consolidation of projects. 
Allow department users to manage customer relationships and enable the user to make informed decisions and follow up. Provide DHS with decision-making information and promote virtual consolidation by integrating business intelligence dashboards. Support all of the department’s public websites and provide software accessible through the public cloud. Provide individuals in the United States with a capability to check their employment eligibility status before formally seeking employment. FedRAMP is a governmentwide program initiated by the Office of Management and Budget to provide joint authorizations and continuous security monitoring services for cloud computing services. FedRAMP is intended to (1) ensure that cloud computing services have adequate information security; (2) eliminate duplication of effort and reduce risk management costs; and (3) enable rapid and cost-effective procurement of information systems/services for federal agencies. DHS reported one of its services as not being FedRAMP approved/compliant and said it did not need to be because the service required a higher level of security—which DHS referred to as a Federal Information Security Management Act high baseline—than that provided by FedRAMP. Provide a tool to be used for budget planning and formulation, and performance reporting. The tool also is to provide the means to generate reports to respond to ad hoc queries from Congress and other stakeholders about the use of foreign assistance resources. Allow program managers of the Nonproliferation and Disarmament Fund access to agency data from any staff work location. Provide domestic and overseas agency staff with direct access to information in over 50 databases. The service is also to provide additional functionality including an integrated electronic catalog with other online libraries. 
Support a collaboration and knowledge management environment for the department, which includes hosting for servers and applications, and an environment for application development and product testing. Provide a grants management system for the department which supports the full life cycle of the federal assistance process. Enable the hosting of public websites for the Department of State, including its embassies. Provide a web-based travel management service for the department. Enable unclassified documents to be available worldwide regarding the history of the department, diplomacy, and foreign relations. Provide application services for the department, including application integration. Support the department's centralized electronic forms program. Provide support for a key departmental financial management system’s (the Joint Financial Management System) continuity of operations component. Support the Executive Secretariat's continuity of operations planning. Provide departmental personnel with the capability to remotely access e-mail worldwide. Provide the department's Bureau of Diplomatic Security with an enterprise service operations center. Provide financial management services to the department via a web-enabled, integrated accounting, budgeting, procurement, and reporting commercial off-the-shelf system. Enable web-based services to internal and external users, such as taxpayers, Internal Revenue Service employees, and other government agencies, as part of administering the federal tax code. Support an integrated system for managing the Bureau of Engraving and Printing's day-to-day business and manufacturing operations. Support the department’s internal and external websites. It offers a full suite of web solutions including hosting, design, development, and deployment of websites. 
Enable the Bureau of Engraving and Printing to manage bureau assets during their entire life cycle, from acquisition through decommissioning. Provide a cloud-based geospatial service that supports data and the geographic information system. It creates web map/feature services, asset exchange, versioning, templates, and cost controls for external publication of products and services. Provide an internal cloud-based geospatial shared service that supports data and the geographic information system. It improves cost controls by modernizing and/or creating functions of discovery and reuse, uniform web map/feature services, asset exchange, versioning, templates, group workflows and collaboration, and enables an integrated development environment. Maintain a government-owned, web-based application that is designed to help agencies in the management and control of their initiatives, portfolios, and investment priorities, as well as in the preparation and submission of budget data to OMB. Support an electronic training system for the department that provides online administration of curriculum by trainers, individualized training support, on-demand classroom registration, customized content, collaborative tools, and integrated back-end systems. Allow the department to track, process, and redact materials requested under the Freedom of Information Act. Microsoft Office 365: Provide IT communication services used by USDA organizations. It is to provide integrated communications tools for USDA’s approximately 120,000 employees and business partners. Provide customers with platform services to support the development and transition of business applications into standardized enterprise data center service offerings. 
Provide operating platforms to securely host customer applications. As part of this, the National IT Center is to utilize advanced server virtualization technologies, strict standards, and economies of scale to enable rapid delivery of cost-effective, fully managed operating platforms with expanded inheritable security controls. Support a single system for tracking and managing correspondence across USDA, including Secretarial and agency correspondence. It is to include modern customer relationship management features and mobile device access. Provide delivery benefits for all content types. It also is to provide increased availability and other performance benefits as well as a scalable on-demand network. In addition to the contact named above, individuals making contributions to this report included Gary Mountjoy (assistant director), Gerard Aflague, Scott Borre, Nancy Glover, and Lori Martinez. Cloud computing is a relatively new process for acquiring and delivering computing services via information technology (IT) networks. Specifically, it is a means for enabling on-demand access to shared and scalable pools of computing resources with the goal of minimizing management effort and service provider interaction. 
To encourage federal agencies to pursue the potential efficiencies associated with cloud computing, the Office of Management and Budget (OMB) issued a “Cloud First” policy in 2011 that required agency Chief Information Officers to implement a cloud-based service whenever there was a secure, reliable, and cost-effective option. GAO was asked to assess agencies' progress in implementing cloud services. GAO's objectives included assessing selected agencies' progress in using such services and determining the extent to which the agencies have experienced cost savings. GAO selected for review the seven agencies that it reported on in 2012 in order to compare their progress since then in implementing cloud services; the agencies were selected using the size of their IT budgets and experience in using cloud services. GAO also analyzed agency cost savings and related documentation and interviewed agency and OMB officials. Each of the seven agencies reviewed has implemented additional cloud computing services since GAO last reported on their progress in 2012. For example, since then, the total number of cloud computing services implemented by the agencies increased by 80 services, from 21 to 101. The agencies also added to the amount they reported spending on cloud services by $222 million, from $307 million to $529 million. Further, the agencies increased the percentage of their information technology (IT) budgets allocated to cloud services; however, as shown in the table, the overall increase was just 1 percent. The agencies' relatively small increase in cloud spending as a percentage of their overall IT budgets is attributed, in part, to the fact that these agencies collectively had not considered cloud computing services for about 67 percent of their investments. 
With regard to why these investments had not been assessed, the agencies said it was in large part due to these being legacy investments in operations and maintenance; the agencies had only planned to consider cloud options for these investments when they were to be modernized or replaced. This is inconsistent with Office of Management and Budget policy, which calls for cloud solutions to be considered first whenever a secure, reliable, and cost-effective option exists, regardless of where the investment is in its life cycle. Until the agencies fully assess all their IT investments, they will not be able to achieve the resulting benefits of operational efficiencies and cost savings. The agencies collectively reported cost savings of about $96 million from the implementation of 22 of the 101 cloud services. These savings included both one-time and multiyear savings. For example, the General Services Administration saved $2.6 million by migrating to a cloud customer service solution, and Homeland Security saved $1.2 million from fiscal years 2011 through 2013 by implementing a cloud-based collaboration service. Agency officials cited two major reasons why the other services they had implemented did not save money. First, a motivation for changing to some of the cloud-based services was not to reduce spending, but to improve service. Second, in selected cases, the cloud computing service opened up a new service or provided a higher quality of service; while this provided useful benefits to the agency, the associated costs negated any savings. GAO is recommending, among other things, that the seven agencies assess the IT investments identified in this report that have yet to be evaluated for suitability for cloud computing services. Of the seven agencies, six agreed with GAO's recommendations, and one had no comments. 
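As a rough cross-check, the summary figures above are internally consistent. The following minimal Python sketch recomputes the reported changes; all input values are taken directly from the report, while the variable names and the per-service average are illustrative additions, not part of GAO's methodology:

```python
# Figures reported for the seven agencies in the 2012 and 2014 reviews.
services_2012, services_2014 = 21, 101        # implemented cloud services
spend_2012_m, spend_2014_m = 307, 529         # reported cloud spending, $ millions
savings_m, services_with_savings = 96, 22     # savings through FY2013, services with savings

service_increase = services_2014 - services_2012          # 80 additional services
spending_increase_m = spend_2014_m - spend_2012_m         # $222 million more spending
avg_saving_m = round(savings_m / services_with_savings, 1)  # ~$4.4 million per saving service

print(service_increase, spending_increase_m, avg_saving_m)
```

The first two results match the report's stated increases of 80 services and $222 million; the average saving per service is a derived figure offered only for context.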
Since 2012, the Coast Guard has been legislatively required to submit annually to certain congressional committees, alongside its budget proposal, a CIP that includes, among other things, projected funding for capital assets in such areas as acquisition, construction, and improvements needed for the upcoming 5 fiscal years. Specifically, this 5-year CIP is intended to provide insight into the proposed budget for the upcoming fiscal year and the following 4 years. The 5-year CIP reports the assets’ cost and schedule per the acquisition program baseline; however, we found that it does not consistently reflect current total cost estimates or the effects of tradeoffs that are made as part of the annual budget cycle. For example, in 2014 we reported that in the Fiscal Year 2014 CIP, the Coast Guard proposed decreasing the number of Fast Response Cutters procured per year to two, as opposed to the three to six previously planned, without altering the total cost estimate in the CIP. Figure 1 highlights the differences using historical estimates depicted in the Coast Guard’s Fiscal Year 2013 CIP, which projects acquisition funding from fiscal years 2013 through 2017, as compared to its requested and appropriated funds during this same time period. Moreover, the 5-year CIP does not prioritize acquisition programs in its out-year projections, which, in part, has led to the Coast Guard’s acquisition funding projections frequently exceeding both the requested and appropriated funding amounts. Furthermore, because this document does not display tradeoffs or priorities, it limits the Coast Guard’s ability to manage the affordability of its acquisition portfolio, including accurately forecasting its total cost projections. 
Furthering the affordability concern, the Offshore Patrol Cutter procurement, for which planned acquisition costs are estimated at $12.1 billion through final delivery in 2034—making it the most expensive Coast Guard acquisition program in its recapitalization effort—will create additional strain on the Coast Guard’s acquisition budget. According to the Commandant of the Coast Guard, the Offshore Patrol Cutter is its top priority. As such, the Coast Guard will prioritize its budget requests for the Offshore Patrol Cutter before other assets, potentially limiting funds requested for other acquisition programs. Figure 2 provides the Coast Guard’s acquisition funding projections from its fiscal year 2017 CIP, for fiscal years 2017 through 2021. As depicted in figure 2, for fiscal years 2017 through 2021, the Coast Guard’s projected acquisition funding levels for its major programs exceed its average budget request of roughly $1.1 billion from 2013 to 2017. Beginning around 2019, these projected acquisition funding levels exceed the average appropriated funding amount of roughly $1.3 billion that the Coast Guard has received from 2013 to 2017, which is greater than the Coast Guard’s average annual requests. This disconnect highlights that the 5-year CIP does not account for the reality of the constrained budget environment the Coast Guard faces. From our analysis of this CIP, we concluded that in order for the Coast Guard to acquire many of its needed assets over the next 5 years, it will need significantly more in appropriated funds than it typically requests. Beginning in September 2018, the Offshore Patrol Cutter will absorb roughly one-half to about two-thirds of the Coast Guard’s annual acquisition funding requests until 2032 if funding requests remain at about the levels of the past 4 years. 
Any remaining Coast Guard acquisition programs will have to compete for acquisition funds not requested for the Offshore Patrol Cutter. For instance, the Coast Guard must also recapitalize other assets such as the polar icebreakers—to alleviate an expected capability gap—and refurbish other legacy vessels, such as its fleet of river buoy tenders, as these assets continue to age beyond their expected service lives and, in some cases, have been removed from service without a replacement. Over the last year, in public hearings before Congress, senior Coast Guard officials have stated a need for over $2 billion per year for acquisitions. However, in the President’s Budget, the Coast Guard requested $1.1 billion for fiscal year 2017 and $1.2 billion for fiscal year 2018. As we previously reported, in an effort to address the funding constraints it has faced annually, the Coast Guard has been in a reactive mode, delaying and reducing its capabilities through the annual budget process by moving planned acquisitions into future years, and it does not have a plan that realistically sets forth affordable priorities. The Coast Guard currently has no method in place to capture the effects of these deferred acquisitions on its future portfolio, which will result in significant capability gaps if funding does not materialize and will create a “bow wave” of near-term unfunded requirements, negatively affecting future acquisition efforts. In 2014, we recommended that the Coast Guard develop a 20-year fleet modernization plan that would identify all acquisitions necessary for maintaining at least its current level of service and the fiscal resources necessary to build these assets. DHS concurred with this recommendation, and the Coast Guard is in the process of developing this document to guide and manage the affordability of its acquisition portfolio. 
Such an analysis would facilitate a full understanding of the affordability challenges facing the Coast Guard while it builds the Offshore Patrol Cutter, among other major acquisitions. Coast Guard officials report an ongoing effort to produce a 20-year plan—which the Coast Guard refers to as a 20-year CIP—but have not articulated a timeframe for when this plan will be completed or what information it will include. As we stated in our 2014 report, in line with the Office of Management and Budget’s capital planning guidance referenced by the Coast Guard’s Major Systems Acquisition Manual, we would expect the 20-year CIP to include, among other things: an analysis of the portfolio of assets already owned by the agency; the performance gap and the capability necessary to bridge the old and new assets; and a justification for new acquisitions proposed for funding. As we have noted in our past work, a long-term plan that also includes acquisition implications, such as sustainment costs, and support infrastructure and personnel needs, would enable tradeoffs to be identified and addressed in advance, leading to better informed choices and making debate possible before irreversible commitments are made to individual programs. Without this type of plan, decision makers do not have the information they need to better understand and address the Coast Guard’s long-term outlook. The Coast Guard initiated the acquisition of a new fleet of heavy polar icebreakers in 2013, but now faces potential schedule and cost risks in implementing an accelerated acquisition approach. In June 2016, we reported that the Coast Guard’s heavy icebreaking fleet had been operating at a reduced capacity after one of its ships, the Polar Sea, suffered a catastrophic engine failure in 2010, rendering it inactive. As a result, the Coast Guard reports that it has not been able to provide year-round access to both the Arctic and Antarctic regions. 
Specifically, from 2010 to 2013, the Coast Guard was unable to fulfill the National Science Foundation’s request for the annual resupply of its McMurdo Station research center in Antarctica as both of its heavy polar icebreakers were inactive due to maintenance needs. The Coast Guard resumed this annual mission in 2014 following the reactivation of its other heavy icebreaker, the Polar Star, which is shown in figure 3. In order to provide continued access to the Arctic and Antarctic regions, the Coast Guard initiated a program in 2013 to acquire a fleet of three new heavy polar icebreakers. The Coast Guard is currently planning for the first new heavy polar icebreaker to be delivered in fiscal year 2023, which has been accelerated from a previous estimate of 2026. The accelerated schedule was implemented at the direction of the last Administration, and confirmed by the current Administration. To meet its goal of delivering the first icebreaker in fiscal year 2023, the Coast Guard has partnered with the Navy to leverage the Navy’s shipbuilding expertise. These agencies established an integrated program office, which was formalized in January 2017, to collaborate on developing and implementing an acquisition approach. The Coast Guard has made progress in advancing through the acquisition process for the new heavy polar icebreaker by completing certain efforts, such as establishing requirements and engaging the shipbuilding industry, but the accelerated schedule it is pursuing poses potential risk. Specifically, there is a risk that the acquisition planning documents required to receive DHS approval to begin development efforts—and which are necessary under DHS acquisition policy for the anticipated contract award in fiscal year 2019—might not be completed on schedule. 
The Coast Guard acknowledged this in its 2017 annual program review and stated that should the acquisition planning documents not be completed and approved by the end of fiscal year 2017, the program may be unable to meet its schedule for entering the obtain phase in early fiscal year 2018. Should this happen, officials reported they may be unable to release the request for proposals for detailed design and construction—a key step in the acquisition process— as scheduled in mid-fiscal year 2018, which could delay the contract award scheduled in fiscal year 2019 and extend the proposed delivery date. Further, the Navy and Coast Guard have established a preliminary cost estimate of $1.15 billion for the lead heavy polar icebreaker, though they are working to reduce this estimate. For example, Coast Guard officials stated that they have identified $97 million in potential savings, which is based partially on reduced power requirements, since modern icebreaker designs are more efficient than the Coast Guard’s existing heavy icebreaker. To meet its accelerated schedule, the program will need to be fully funded in fiscal year 2019. In fiscal year 2017, Congress appropriated a total of $150 million to the Navy for the polar icebreaker’s advanced procurement and the explanatory statement of the DHS Appropriations Act, 2017 reflected $25 million for the Coast Guard acquisition of a polar icebreaker. Another potential challenge is that the Coast Guard may be executing the polar icebreaker acquisition with Navy funding. For example, $150 million in polar icebreaker funding was provided to the Navy. While this approach alleviates some of the affordability issues within the Coast Guard’s budget, it is unclear exactly what roles the Navy and Coast Guard will have if this funding arrangement continues. For instance, if the Navy receives the funding then it would be responsible for contracting for the icebreakers, but the program would follow DHS’s acquisition guidance. 
This would be an unusual relationship and it is unclear how potential conflicts would be resolved. This is an issue we will pursue in our ongoing work on the acquisition of the polar icebreaker. As noted, the Coast Guard currently has only one operational heavy icebreaker, the Polar Star. We reported in June 2016 that, following its reactivation in 2013, the Polar Star’s end of service life is projected to be between fiscal years 2020 and 2023. As the new heavy polar icebreaker is not expected to be delivered until at least 2023, there could be a gap in the Coast Guard’s heavy icebreaking capability. To ensure that the Coast Guard retains a heavy icebreaking capability until a new heavy icebreaker is operational, the Coast Guard completed a study in January 2017 to determine the cost of reactivating Polar Sea and extending the life of the Polar Star for 7 to 10 years as potential “bridging” strategies. Table 1 shows the results of the study, reported in January 2017. The Coast Guard is not currently planning to pursue any of these four options identified in the January 2017 study as they were deemed too expensive, among other reasons. Instead, Coast Guard officials stated they are planning to conduct a limited service life extension of the Polar Star to address key components and keep it operational until fiscal year 2025, when a second new heavy polar icebreaker is expected to be delivered. According to officials, the Coast Guard is currently conducting an assessment of the Polar Star to determine what systems would need to be overhauled and replaced to meet this goal. An official cost estimate for this effort has not been completed yet, but the fiscal year 2017 CIP includes a total of roughly $75 million towards this effort in fiscal years 2019 through 2021. However, the $75 million estimate may be unrealistic based on the assumptions the Coast Guard used, such as continuing to use parts from the Polar Sea as has been done in previous maintenance events. 
As a result of the finite parts available from the Polar Sea, the Coast Guard may have to acquire new parts for the Polar Star that could increase the $75 million estimate. In conclusion, as the Coast Guard continues its recapitalization effort, it is important that it plans for the affordability of its future portfolio so that it can minimize the capability gaps that can occur when legacy assets reach the end of their service lives before new assets become operational. We have made several recommendations in recent years intended to help the Coast Guard plan for these future acquisitions and the difficult tradeoff decisions that it will likely face. If the Coast Guard fully implements these recommendations, it could provide decision makers with critical knowledge needed to prioritize its constrained acquisition funding. Without these efforts, the Coast Guard will continue, as it has in recent years, to plan its future acquisitions through the annual budgeting process, a process that has led to delayed and reduced capabilities. A thorough plan regarding the affordability of its future acquisitions would provide timely information to decision makers on how to spend scarce taxpayer dollars in support of a modern, capable Coast Guard fleet. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions. If you or your staff have any questions about this statement, please contact Marie A. Mak, (202) 512-4841 or makm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Jennifer Grover, Director; Richard A. Cederholm, Assistant Director; Dawn Hoff, Assistant Director; Peter W. Anderson; Jason Berman; Erin Butkowski; John Crawford; Laurier Fish; Camille Henley; Hugh Paquette; and Roxanna T. Sun. 
GAO has made several recommendations in recent years related to the Coast Guard’s efforts to conduct long-term planning. Table 2 contains a selected list of the recommendations, whether DHS or the Coast Guard concurred, and the status of their implementation.

In order to meet its missions of maritime safety, security, and environmental stewardship, the Coast Guard, a component within the Department of Homeland Security (DHS), employs a variety of assets, several of which are approaching the end of their intended service lives. As part of its efforts to modernize its surface and air assets (known as recapitalization), the Coast Guard has begun acquiring new vessels and air assets. Concerns surrounding the affordability of this effort remain as the Coast Guard continues to pursue new acquisitions such as the polar icebreaker, while also acquiring the Offshore Patrol Cutter—which is estimated to cost $12.1 billion through 2032. This statement addresses (1) the extent to which the Coast Guard develops planning tools to guide its acquisition portfolio, and (2) potential risks the Coast Guard faces in its polar icebreaker acquisition. This statement is based on GAO's extensive body of published and ongoing work examining the Coast Guard's acquisition efforts over several years. In June 2014, GAO found that the Coast Guard lacked long-term planning to guide the affordability of its acquisition portfolio and recommended the development of a 20-year fleet modernization plan to identify all acquisitions necessary for maintaining at least its current level of service and the fiscal resources necessary to build and modernize its planned surface and aviation assets. Coast Guard officials stated that they are developing a 20-year Capital Investment Plan (CIP), but the timeframe for completion is unknown. The Coast Guard does, however, submit a 5-year CIP annually to Congress that projects acquisition funding needs for the upcoming 5 years. 
GAO found the CIPs do not match budget realities in that tradeoffs are not included. In the 20-year CIP, GAO would expect to see all acquisitions needed to maintain current service levels and the fiscal resources to build the identified assets as well as tradeoffs in light of funding constraints. As GAO reported in June 2016, the Coast Guard's heavy icebreaker fleet was operating at a reduced capacity with only one heavy polar icebreaker in service, resulting in limited access to both the Arctic and Antarctic regions year-round. The Coast Guard's only active heavy icebreaker, the Polar Star , is approaching the end of its expected service life, and the Coast Guard plans to implement a limited service life extension to keep it operational until the new icebreaker is available. An official cost estimate has not been completed, but the Coast Guard estimates this extension will cost roughly $75 million. Consequently, the Coast Guard expedited its acquisition of new heavy icebreakers with delivery of the first polar icebreaker scheduled in 2023. This delivery schedule poses potential risk as the required acquisition documents may not be completed in time to award the contract in 2019, as currently scheduled. Further, in order to meet this accelerated schedule, the first polar icebreaker would need to be fully funded in fiscal year 2019 with a preliminary cost estimate of $1.15 billion, alongside the Offshore Patrol Cutter acquisition. The Coast Guard has not articulated how it will prioritize its acquisition needs given its Offshore Patrol Cutter is expected to absorb half to two-thirds of its annual acquisition funding requests—based on recent funding history—starting in 2018. GAO is not making recommendations in this statement but has made them to the Coast Guard and DHS in the past regarding recapitalization, including that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions and the fiscal resources needed to acquire them. 
DHS agreed with this recommendation.
Intergovernmental grants are a significant part of both federal and state budgets. From the first annual cash grant under the Hatch Act of 1887, the number of grant programs rose to more than 600 in 1995 with outlays of $225 billion, or about 15 percent of total federal spending. Most federal grant programs are small and serve narrow purposes, while a few large programs—such as Medicaid and the Highway Planning and Construction Program—dominate the grant-in-aid system. Of the 633 grants we reviewed, 87 programs—or 14 percent—accounted for 95 percent of total grant funding. In 1995, federal grants accounted for about 23 percent of total state spending. Here too there is variation in the federal share of state spending across categories used by the Census Bureau. Grants accounted for about 60 percent of public welfare and 64 percent of housing and community development spending. The federal share was much smaller in other categories, about 8 percent overall. In theory, grants are to serve purposes beyond returning resources to taxpayers in the form of state services. Grants also can serve as a tool to encourage states to spend federal funds for nationally important activities for which they otherwise would have spent less. The amount of additional spending is affected by the degree to which federal grant funds actually supplement state funds. Public finance literature uses the term substitution to characterize situations in which states use federal grant dollars to reduce their own spending for the aided program either initially or over time. To illustrate how substitution works, if states use federal funds to replace state spending on a dollar-for-dollar basis, then federally aided state services would remain at pre-grant levels—in which case the fiscal impact of the additional federal dollar on the intended program is zero. 
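The substitution arithmetic described above can be sketched in a few lines. This is a purely illustrative calculation: the hypothetical state spending of $5, the $1 grant, and the substitution rates are example values, not figures for any actual program.

```python
# Hypothetical illustration of grant substitution; the amounts and the
# substitution rates below are example values, not program data.

def spending_after_grant(state_spending, grant, substitution_rate):
    """Return (total program spending, state funds freed for other uses) when
    the state withdraws `substitution_rate` of its own money per grant dollar."""
    freed = substitution_rate * grant        # state money shifted to other priorities
    total = state_spending - freed + grant   # the grant itself is spent on the program
    return total, freed

# Dollar-for-dollar substitution: the grant's net impact on the program is zero.
total, freed = spending_after_grant(5.00, 1.00, 1.0)   # total stays at 5.00

# Partial substitution of 60 cents per grant dollar: total program spending
# rises by only 40 cents, and 60 cents is freed for other state priorities.
total, freed = spending_after_grant(5.00, 1.00, 0.60)  # total ~5.40, freed ~0.60
```

The sketch makes the key point explicit: whatever portion of its own spending the state withdraws is, in effect, converted into unrestricted funds.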
In practice, substitution effects are not this extreme; total state spending rises upon receiving federal grant funds—but by less than the full amount of the grant because states reduce their own spending for the area. In effect, substitution allows a portion of federal grant funds to be spent on other state priorities. Figure 1 illustrates how substitution would work for a hypothetical state spending $5 on an activity and receiving $1 in federal grant funds for that activity. As previously noted, federal grant dollars are rarely used dollar-for-dollar to supplement state spending on aided activities. However, we show this case in the top part of the figure as a contrast to the expenditure level that might result when substitution occurs. We show substitution at 60 cents to correspond to the approximate midpoint of the range of estimates we reviewed. The figure indicates that with substitution, although the federal grant dollar is spent on the aided program, the state can reduce its own spending by about 60 cents so that total spending increases by 40 cents. The state can then reallocate the 60 cents that has been freed up. In this regard, the figure shows the two options cited by economists—spending on other state priorities or tax relief. For example, states could spend their freed up funds for other public goods they value, such as education, transportation, or corrections. Or, states could reduce or maintain existing tax rates or slow the rate of increase. As noted, there are a variety of approaches available to distribute grants among the states. Public finance experts suggest that grants can be targeted to states with relatively greater programmatic needs and fewer fiscal resources. In program areas where states share financing responsibilities with the federal government, service levels depend to an important extent on state fiscal capacities. 
For example, after adjusting for differences in state service costs, the fiscal burden on Mississippi of providing a given level of public services is greater than the burden on Connecticut because Mississippi has about three-fourths the tax base of Connecticut. Where the federal government has sought a minimum or more comparable level of a service for all potential beneficiaries— regardless of where they live—grants can help reduce disparities between the capacities of wealthier and poorer communities to provide that service. Our past studies of individual grant programs have led us to conclude that grants can be designed to reduce differences between states’ fiscal resources and programmatic needs by designing formulas that allocate funds according to measures of states’ program needs, fiscal capacities, and service costs. In those studies, we also commented on the importance of using data that accurately capture differences in these factors across states. The objectives examined in this report are those most often put forward by public finance experts: (1) encouraging states to spend more for public goods that appear underfunded from a national perspective, and (2) offsetting the differences between states’ programmatic needs in federally aided functions and their fiscal resources. Grants have played other roles in intergovernmental relations as well. For example, grants have been used to provide states with (1) funding to offset the costs of meeting federal regulatory standards or administering federal regulatory programs, (2) counter-cyclical assistance in times of economic downturns, (3) general purpose fiscal assistance (e.g., general revenue sharing), and (4) performance incentives to improve or enhance existing programs. Appendix I contains a more detailed discussion of the various roles grants have played. 
To examine substitution in the grant system, we synthesized the body of econometric literature which statistically isolated the fiscal impact of federal grant funds and estimated their impact on total spending. We used this approach because conventional auditing methods were not sufficient to answer the questions about substitution. Such methods do not control for state spending that would have occurred without a federal grant and cannot sort out the effects of other factors, such as population and state income growth, that also influence state spending. Thus, an audit of a federal grant program might demonstrate that all federal funds were spent on the authorized activities. However, because the audit could not observe the level of state spending that would have occurred without the grant, it could not detect substitution in the form of reductions from that unobserved level. To examine targeting in the grant system, we developed a statistical model to determine the extent to which federal aid in the aggregate is allocated to offset differences between state programmatic needs and fiscal resources (see appendix IV for a description of this model). We did not analyze targeting for individual grant programs because there is less consensus on—and few readily available and suitable proxies for—measures of individual grant program needs and costs. For this reason, our prior work on federal grant targeting has proceeded using a program-by-program approach, with each case study requiring substantial work to identify and validate suitable proxies for state programmatic needs and costs. For this report, our model used state population as the primary measure of state programmatic need. The model also controlled for a variety of other state need indicators, such as measures of poverty, housing age, highway mileage, and service costs. 
Controlling for programmatic needs and costs enabled us to isolate more accurately the statistical effect of state fiscal capacities on federal grant allocations. In addition to this statistical analysis, our examination of substitution and targeting included (1) a comprehensive review of over 120 journal articles, reports, and econometric studies on substitution, targeting, and grant design factors related to both, (2) a synthesis of 50 econometric studies of federal grants, culminating in the development of point and range estimates of fiscal impact overall as well as for different time periods and grant designs, (3) a review of 23 GAO reports on options to achieve greater targeting in specific formula grant programs, and (4) an analysis of grants for design features associated with substitution and targeting. Our analysis of the design features associated with fiscal substitution was for both all 633 grants and separately for the largest 87 programs representing 95 percent of grant funds. Our analysis of the design features associated with targeting was for the 149 formula grants that represented 85 percent of grant funds. We excluded project grants, which are awarded on a discretionary basis, because of the difficulty of generalizing about targeting based on individual grant decisions. Because grant implementation issues were outside the scope of our analysis, this report cannot be used to draw conclusions about how well a jurisdiction uses grant funds or who benefits. In addition to design features, two important determinants of a grant’s fiscal impact are states’ priorities, which may differ from the federal government’s, and program management, which may differ across states. To illustrate, a grant for computer education programs might feature few of the design features to limit substitution, but shared goals and objectives could result in states using the grant funds they receive to increase substantially total spending on computer education. 
Or, a grant for health services to low-income children could lack equity factors that target funds to states with higher concentrations of such children. Notwithstanding the lack of targeting, however, a state could still spend more of each federal assistance dollar it receives to serve its low-income children than another state receiving more grant funds. Just as the presence of suitable design features does not guarantee that funds will be allocated efficiently or equitably, so the absence of such features alone does not prove that funds are allocated inefficiently or inequitably. We asked well-known public finance experts as well as experts on state and local government to review a draft of this report and incorporated their suggestions where appropriate. Appendix II contains a more detailed description of our scope and methodology. The economic literature we reviewed suggested that three types of grant design features affect the likelihood that states will use federal funds to supplement, rather than replace, their own spending. These features work by (1) restricting the use of funds to specified purposes, (2) requiring recipients to contribute their own funds to obtain grant funds, and (3) not restricting federal matching of state funds. The first type of feature concerns the extent to which grant purposes are restricted. Categorical grants, which fund narrow-purpose activities, such as nutrition for the elderly, are the most restricted. Block grants, which fund broader categories of activities, such as community development, are less restricted. General purpose grants, such as revenue sharing, require only that the funds be spent for government purposes. Generally speaking, experts agree that conditions attached to aid can encourage states to use federal funds as a supplement if the conditions are binding. Conditions are more likely to be binding if states are not already spending their own funds for that purpose. 
For example, a state with no computer education program in its schools would be more likely to spend a federal computer education grant on its intended purpose than a state that had already invested its own funds in such a program. If the state that had already invested funds in computer education was satisfied with pre-grant spending levels, it would be more likely to substitute the federal grant funds for its own and shift state funds to other priorities. The second type of feature concerns requirements that states contribute their own funds in order to receive federal matching funds. Economic theory suggests that grants requiring matching result in less substitution than those that do not because, by lowering the effective price of aided programs relative to other state spending priorities, they encourage states to invest more of their own funds. Matching grants typically contain either a single rate (e.g., 50 percent) or a range of rates (e.g., 50 percent to 80 percent) at which the federal government will match state spending on an aided program. Experts agree that federal matching rates should correspond to the share of benefits that accrue to non-state residents. Public finance economists have argued that federal shares of less than 50 percent are appropriate, recognizing that in-state residents generally receive the predominant share of the benefits from most federally aided programs, such as education or transportation. Another feature, maintenance-of-effort, requires states to maintain existing levels of state spending on an aided program as a condition of receiving federal funds. By requiring states to maintain a given level of spending from their own funds in addition to the federal grant funds they receive, maintenance-of-effort can prevent substitution in those programs where there is no federal matching requirement or where state spending exceeds the minimum required state match. 
As we have noted elsewhere, designing effective maintenance-of-effort provisions can be difficult because it requires balancing federal interests against states’ desire for flexibility in planning and implementing grant programs. Experts suggest that maintenance-of-effort provisions should keep pace with both inflation and program growth so that state spending efforts are truly maintained over time. But maintenance-of-effort requirements can penalize states that take the initiative to start programs without federal aid by locking them into prior spending levels when federal grant funds become available. In contrast, states without prior spending programs are implicitly rewarded for their lack of initiative because they would be required to maintain a lower base of spending in exchange for the federal grant. As a result, the prospect of such requirements could defer state program innovation until federal funds become available. The third type of feature concerns the extent to which federal funding for a program is limited. Grants are considered “open-ended” when there is no limit on federal matching, and “closed-ended” when total federal matching funds are capped. The influence of federal matching is essentially the same for both types of grants until a state obtains the maximum federal contribution for a closed-ended grant. After this point, closed-ended grants no longer match additional state spending on aided activities and lose their price incentive. Therefore, a state spending beyond the amount needed to obtain maximum federal funding is doing so without any price inducement. From this, economists have concluded that the state would likely have spent some of its own funds without the federal matching incentive, and federal funds have substituted for some of the state’s own resources. Although we discuss the influence of grant design in terms of isolated features, in practice they work in combination. 
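The price mechanics of matching, and the way a closed-ended cap removes the price incentive, reduce to simple arithmetic. The sketch below is illustrative only; the 50 percent rate, dollar amounts, and cap are hypothetical values chosen to mirror the discussion above.

```python
# Sketch of how a federal matching rate changes the state's effective "price"
# of program spending, and how a closed-ended cap removes that incentive.
# All rates and amounts are hypothetical.

def state_price_per_program_dollar(match_rate):
    """With a federal match rate m, the state pays (1 - m) of each program dollar."""
    return 1.0 - match_rate

def federal_contribution(state_dollars, match_rate, cap=None):
    """Federal dollars drawn by state spending; a cap makes the grant closed-ended."""
    federal = state_dollars * match_rate / (1.0 - match_rate)
    return min(federal, cap) if cap is not None else federal

# At a 50 percent match, each state dollar draws one federal dollar,
# so the state's price per program dollar is 50 cents.
assert state_price_per_program_dollar(0.50) == 0.50
assert federal_contribution(100.0, 0.50) == 100.0

# Once a closed-ended cap is reached, additional state dollars draw no federal
# match, and the state again pays full price at the margin.
assert federal_contribution(100.0, 0.50, cap=60.0) == 60.0
```

This is why economists infer substitution when a state spends beyond a closed-ended cap: those marginal dollars are spent at full price, without any federal inducement.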
For example, when total federal spending for a grant is capped, maintenance-of-effort provisions that track inflation and program growth can increase the likelihood that federal dollars will supplement rather than replace state spending even after the cap is reached. Similarly, a grant which allows a wide range of uses within a broadly defined federal objective may still contain matching and/or maintenance-of-effort features to reduce the likelihood of substitution. Appendix III contains a more detailed discussion of the grant design features that influence state spending. Apart from design features, other factors, such as the amount of state spending relative to federal spending and state programmatic preferences, can also influence the impact of a federal grant on total spending. For example, a non-matching categorical grant without maintenance-of-effort is more likely to supplement state spending in areas where state governments have invested few, if any, of their own funds. Conversely, when a state is already spending an amount from its own resources that exceeds the amount of federal aid for a program, categorical restrictions are less likely to be effective because the state could spend all the grant funds on the intended program, but reduce spending from its own funds by the same amount. In this case, the categorical grant has the same effect on total program spending as an unrestricted grant, and the state will use the resources released in accordance with its own spending priorities, which will not necessarily be the same as the federal government’s. There is a substantial body of econometric research on the impact of federal grants on state spending spanning the period from the late 1950s to recent years. 
Our review and synthesis of this body of work found that beginning around 1978, a consensus view emerged that each additional federal grant dollar contributed to increased total spending on aided functions, but, because of substitution, total spending increased by less than a dollar. Of the studies we reviewed, three-fourths of the estimates from studies published since 1978 indicated some substitution, i.e., that $1 of federal grant aid did not increase total spending in a state by $1. Estimates from these studies suggest that a median of nearly 60 cents of every federal dollar is used to replace state and local funds that otherwise would have been spent on the aided activity. That is, for every dollar of additional federal aid, states have withdrawn about 60 cents of their own spending. Omitting extreme high and low estimates, the middle 50 percent (mid-range) of these estimates was between 11 and 74 cents. Table 1 contains a summary of the grant impact estimates we reviewed. As shown in table 1, the econometric studies we reviewed support the view that certain grant design features promote relatively more total spending on aided activities. Matching programs generally involved less substitution than non-matching programs. Our synthesis suggests that 85 cents of every additional matching dollar represented new spending, implying that states have withdrawn 15 cents of their own resources. For non-matching programs, 42 cents of every additional federal dollar resulted in new spending, implying that states have withdrawn 58 cents of their own resources. Open-ended programs were associated with the smallest amount of substitution and may even have stimulated additional state spending over and above the amount of federal aid they received. Every additional federal dollar for open-ended matching programs resulted in $1.38 of new spending, suggesting that states have contributed 38 cents of their own resources to such programs. 
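The synthesis estimates above can be restated as simple arithmetic. The sketch below only restates the cited medians (85 cents, 42 cents, and $1.38 of new spending per grant dollar); it is not a model of the underlying studies.

```python
# Restating the median synthesis estimates as arithmetic: each federal grant
# dollar splits into new program spending and withdrawn state spending
# (a negative withdrawal means the grant stimulated extra state spending).

def split_grant_dollar(new_spending_per_dollar, grant=1.0):
    new = new_spending_per_dollar * grant
    withdrawn = grant - new
    return new, withdrawn

# Median estimates cited in the text, by grant design:
for label, estimate in [("matching", 0.85),
                        ("non-matching", 0.42),
                        ("open-ended matching", 1.38)]:
    new, withdrawn = split_grant_dollar(estimate)
    print(f"{label}: {new:.2f} new spending, {withdrawn:.2f} withdrawn")
```

For open-ended matching grants, the negative withdrawal corresponds to the roughly 38 cents of the state's own resources contributed per federal dollar.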
Given that the estimates for open-ended programs ranged from 71 cents (substitutive) to $1.74 (stimulative), caution should be used in drawing the conclusion that such programs generally have stimulated additional state spending. However, as the first column in the table shows, the expenditure impacts of open-ended programs generally exceeded those of closed-ended programs, which resulted in a median of 54 cents of new spending (ranging from 37 cents to $1.04). The studies we reviewed examined the impact of grants on total spending as well as on categories of spending for service areas, such as education, health, highways, and welfare. Our analysis did not provide support for any differences in the expenditure impact of grants across those service areas. Nevertheless, studies from the 1980s and 1990s in the areas of education, highways, and sewage suggested that states have withdrawn some of their own funds in response to federal grants. For example: Education: Craig and Inman’s (1982) study of the impact of an additional dollar of federal education grants to state and local governments on total education spending from federal, state, and local sources found substitution effects ranging from more than a dollar for unrestricted federal grants (total spending decreased $1.06 for each $1.00 in grants) and non-matching, categorical grants (total spending decreased $1.30), to 14 cents for grants with maintenance-of-effort provisions (total spending increased 86 cents). The study found matching categorical grants actually had fiscal impacts on total education spending that were larger than $1.00 (total education spending increased $1.05). Highways: Meyers (1987) and Stotsky (1991) studied the impact of an additional dollar of closed-ended matching grant funds for highway construction on state spending and found substitution rates of 63 and 95 cents, respectively. That is, for every $1 in federal aid, states used between 63 and 95 cents to fund other priorities. 
Meyers also tested whether the 63 cents of federal funds that was not spent on highway construction was used for tax relief. He rejected that hypothesis, finding instead that states most likely used the funds for other non-aided transportation priorities, such as maintenance. Sewage systems: Jondrow and Levy’s (1984) study of the impact of an additional dollar of Environmental Protection Agency sewage system construction grants on local spending on sewage systems found that local governments substituted 67 cents for their own spending on sewage treatment plants and sewer lines. The authors also estimated the impact of federal grants on sewer lines alone and found complete substitution. The authors concluded that this occurred because, unlike treatment plants which generate benefits for surrounding localities, sewer lines have purely local benefits and would be fully funded even without a federal grant. Therefore, the federal grants simply displaced, rather than supplemented, local spending. Because most of the research we reviewed studied periods when resources and spending were increasing, caution should be used in drawing conclusions about how states would respond to reductions in federal grant spending. Evidence of substitution does not necessarily mean that states would replace cuts in federal grant programs with funds from their own sources. However, states may be more likely to replace cuts in federal funds used to fund ongoing state operations and priorities. From a federal perspective, this state replacement might be viewed as a positive event. But from a state perspective, because federal funds have been woven into the structure of state budgets, replacing cuts in federal funds would require cutting funds for other state programs, raising taxes, or both. Few have studied state responses to federal aid reductions, and those that have provide a mixed picture. 
Using a case study approach, Nathan (1987) found that state governments replaced funding for some federal programs cut during the 1980s, particularly those that were not highly redistributive, had active constituencies, or were primarily managed by state rather than federal agencies. Our prior work on the effect of reductions in federal grants during the 1980s generally supported Nathan’s conclusion that states replaced some of the federal cuts. We reported that states used three strategies to mitigate federal funding reductions that occurred in most block grant programs during the early 1980s. These involved states (1) taking advantage of available funds from the categorical programs that preceded the block grants, (2) transferring funds among block grants, and (3) increasing the use of state funds. However, in a more recent econometric study of local responses to federal cutbacks during the 1980s, Stine (1994) found that local governments did not raise local revenues to replace permanent losses in federal aid. Currently, the grant system comprises 633 conditional grants, of which 617 are narrow-purpose categorical grants and 16 are broader-purpose block grants. The federal government has not provided any unconditional grants since the General Revenue Sharing program, which ended in 1986. Table 2 summarizes the design features of all 633 grants. However, because 95 percent of the funds are associated with the 87 largest grant programs, we also summarized the design features of the 87 largest grants in table 3. Of the 87 largest grant programs, 15 were block (26 percent of funds) and 72 were narrow-purpose, categorical (74 percent of funds). To some extent, then, the federal grant system is designed around narrow federal purposes, suggesting fewer opportunities for substitution.
However, if states are already spending more of their own funds than the federal government provides for these block and categorical programs, the purposes for which the federal aid is to be spent are less likely to be binding, and the potential for substitution is higher. With regard to other design features we reviewed, few federal grants contain the combination of design features that would encourage states to maintain their spending levels and reduce the extent of substitution. About half the 87 largest grants, representing 30 percent of the funds for those programs, did not require state matching. Of the grants containing matching provisions, almost all had federal shares in excess of 50 percent. This stands in contrast to expert views that federal shares should generally be less than 50 percent to correspond with the benefits non-state residents receive. In sum, 97 percent of the largest grants—corresponding to 99 percent of total grant funds—had federal shares between 50 and 100 percent. Furthermore, 89 percent of the largest grant programs—representing 48 percent of the funds for those programs ($98.4 billion)—were closed-ended. Excluding the largest open-ended program—Medicaid— from this total, 85 percent of the remaining grant funds were for closed-ended programs. Closed-ended programs may result in substitution when state spending exceeds the amount necessary to obtain federal matching funds. At this level of spending, unless strong maintenance-of-effort provisions are attached, the federal match loses its price incentive, and can become—in effect—general purpose income to states. According to a number of studies we reviewed, state spending for most closed-ended grant programs was well beyond the amount needed to obtain the maximum level of federal funds. 
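The way a closed-ended cap erodes the match's price incentive can be illustrated with a short sketch; the 80 percent federal share and the $10 million cap below are hypothetical figures chosen only to show the mechanics, and do not describe any particular grant program.

```python
# Sketch: a state's marginal cost of one more dollar of aided spending
# under an open-ended versus a closed-ended matching grant.
# The 80% federal share and $10M cap are hypothetical illustrations.

def marginal_state_price(federal_share, program_spending, federal_cap=None):
    """Return the state's cost of the next dollar of program spending."""
    federal_payment = federal_share * program_spending
    if federal_cap is not None and federal_payment >= federal_cap:
        # Cap reached: the match no longer applies at the margin, so the
        # grant behaves like general-purpose income to the state.
        return 1.0
    return 1.0 - federal_share  # match applies: the marginal price is reduced

# Open-ended 80% match: each extra program dollar costs the state about 20 cents.
print(f"{marginal_state_price(0.80, 50e6):.2f}")
# Same match, but closed-ended with a $10M federal cap; at $50M of program
# spending the cap is exhausted, so the marginal price is a full dollar.
print(f"{marginal_state_price(0.80, 50e6, federal_cap=10e6):.2f}")
```

This is why state spending beyond the cap, absent a strong maintenance-of-effort provision, faces no federal price incentive at all.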
Because this additional state spending has occurred without the incentive provided by federal matching rates exceeding 50 percent, the studies concluded that such generous federal matching rates may be unnecessary to induce existing levels of state spending in those areas. Finally, 16 of the largest 87 programs—representing 58 percent of the funds for those programs—had maintenance-of-effort provisions that would encourage states to maintain a defined contribution to those programs. A well-designed maintenance-of-effort provision can deter substitution in a grant program, particularly in those programs with no matching requirement or where state spending already exceeds the amount needed to meet federal matching requirements. To determine if federal maintenance-of-effort provisions were designed to keep pace with program growth, we looked at the top eight closed-ended programs with maintenance-of-effort provisions. We found that none of the maintenance-of-effort provisions sampled were designed to keep pace with inflation or case-load growth. For example, the maintenance-of-effort requirement for the Special Programs for the Aging grant stipulates that states need only spend an amount equal to the average of the 3 previous fiscal years in order to avoid reduced federal funding. States could maintain spending at this historical average and still substitute. Substitution could occur if states use new or increased federal funds to finance case-load growth or inflation they otherwise would have had to finance. Tables 4 and 5 summarize by budget function the design features of all 633 grants and the largest 87 grants, respectively. Given large and chronic federal budget deficits, some might argue that high rates of fiscal substitution are inappropriate because the federal government should not be collecting taxes on behalf of states only to return the funds in the form of unrestricted aid. 
Others might argue that this substitution serves the purpose of providing budgetary relief to the states. They might also prefer that the fiscal relief be allocated to more fiscally stressed states. These are policy questions that only the Congress can decide. If policymakers seek to target aid to fiscally stressed states, the question arises as to whether such aid is allocated to those states with relatively greater programmatic needs and fewer fiscal resources. We examined whether existing federal grant allocations can be justified on the grounds that they provide budgetary relief to fiscally stressed states. We found that, controlling for differences in programmatic needs, grant allocations to states were not significantly higher for states with relatively fewer fiscal resources. Specifically, the variable we used to measure fiscal capacity—total taxable resources—was not a statistically significant factor in targeting funds to lower-capacity states, controlling for differences in state (1) program needs, such as poverty, population under age 18, and highway miles, and (2) service costs. In effect, this means that the current grant system does not help lower-capacity states provide levels of aided services comparable to higher-capacity states. To illustrate the lack of a relationship between fiscal capacity and grant allocations, we ranked the states according to an index of their per capita federal grants, adjusted for costs, and calculated averages for five groups of 10 states each (quintiles). For example, a state with an average per capita grant would have an index value of 1.0. We found that state quintiles that ranked the lowest (0.85) and the highest (1.85) according to their grant allocations had similar average fiscal capacities. We were unable to estimate accurately the effect of the individual need variables in our model on grant targeting.
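The index-and-quintile calculation described above can be sketched as follows; the state names and per capita amounts are hypothetical (and, for brevity, the sketch uses 10 states in groups of 2 rather than 50 states in groups of 10).

```python
# Sketch: per capita grant index and quintile grouping, with hypothetical
# (state, cost-adjusted grants per capita) data.
states = [("A", 310.0), ("B", 540.0), ("C", 420.0), ("D", 275.0),
          ("E", 600.0), ("F", 385.0), ("G", 455.0), ("H", 330.0),
          ("I", 510.0), ("J", 405.0)]

national_avg = sum(g for _, g in states) / len(states)

# Index: a value of 1.0 means a state receives the national-average
# per capita grant.
indexed = sorted(((name, g / national_avg) for name, g in states),
                 key=lambda item: item[1])

# Split the ranked states into five equal groups (quintiles) and average
# the index within each group.
group_size = len(indexed) // 5
quintile_means = [
    sum(idx for _, idx in indexed[i:i + group_size]) / group_size
    for i in range(0, len(indexed), group_size)
]
print([round(q, 2) for q in quintile_means])
```

Comparing the average fiscal capacity of the lowest and highest quintiles, as in the text, is then a matter of averaging a capacity measure over the same groups.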
While three of the need variables were statistically significant, the results should not be used to draw conclusions about their relative importance. Reliability questions arose because—in contrast to fiscal capacity—there was no single or aggregate measure that accurately represented the program goals and objectives of all the grants we analyzed. Used in combination, however, the need variables provided a valid control to isolate the effect of needs from fiscal capacity on grant allocations. Even so, our prior work on a wide range of individual grant programs suggests that need factors, in addition to costs and fiscal capacity factors, have not played an important role in allocating funds. For example: The Community Development Block Grant program (CDBG) is intended principally to serve low and moderate-income communities and those with relatively greater community development needs. The CDBG formula uses poverty, age-of-housing, and community population growth rate statistical factors to allocate funds to meet those needs. However, while Greenwich, Connecticut, and Camden, New Jersey, are comparable with respect to the age of their housing stock, Greenwich was allocated CDBG funds of $0.69 per person in poverty in 1995—over five times more than Camden’s $0.13. Greenwich, with per capita income of $46,070, could more easily afford to fund its own community development needs than Camden, with per capita income of $7,276—about half the national average. Funding shares for the four largest highway grant programs are determined by a complex, 13-step set of calculations, which provides funds for highway construction or maintenance needs, but subsequently adjusts the total funds designated for all four programs so that states receive their historical share of total funds. While individual calculations are made for three of the four separate programs, the funding for these programs is interdependent since a state’s total share of funding for all four programs is fixed. 
This results in some states receiving more funds than would be provided if only need factors had been used. The Older Americans Act grant formula distributes funds according to the number of people over 60 years of age, but does not take into account the fact that states with higher concentrations of elderly poor, minorities, and individuals over 85 years of age have higher disability rates. The Ryan White Comprehensive AIDS Resources Emergency Act of 1990 double counts the number of cases residing in eligible metropolitan areas. Although recent legislative changes have reduced the double-counting, the needs indicators still favor more urbanized states. As a result, the oldest eligible metropolitan areas receive more generous funding, and newly emerging areas with more recent growth in AIDS cases receive less funding. The Maternal and Child Health Block Grant directed more aid to states with lower concentrations of low-birthweight babies than to those with higher concentrations. Similarly, more aid was directed to some states with lower health care costs than to those with higher costs. Most of the formula grants we reviewed did not use a combination of the three grant formula factors we have reported can improve targeting of federal aid. Nearly 95 percent of the 149 grant formulas we reviewed, representing 99 percent of formula grant funds, used a measure of need. However, only 15 percent of grant formulas, representing 61 percent of funds (7 percent excluding cash welfare and Medicaid), used both need and fiscal capacity factors. Finally, only 2 percent, representing less than 2 percent of funds, used a combination of need, fiscal capacity, and cost factors. 
As we noted earlier, where the federal government seeks a minimum or more comparable level of services for all potential beneficiaries—regardless of where they live—the inclusion of a fiscal capacity factor helps to reduce the disparities between the abilities of wealthier and poorer communities to provide such service levels. Cost factors help ensure that states facing higher service costs are compensated for these differences, which contributes to comparability in aided service levels. The lack of targeting factors was not concentrated in any one budget function we reviewed. However, grants that have historically comprised the social safety net were more likely to include data elements that reflect fiscal capacity as well as need. About 24 percent of grants, representing 75 percent of funds (8 percent excluding cash welfare and Medicaid), in the education, income security, and health functions used need and fiscal capacity factors. Only 3 percent of grants in those functions (less than 2 percent of funds) also used a cost factor. In comparison, grants for other budget functions were less likely to use a combination of targeting factors. Notably, no grants in the natural resources, transportation, administration of justice, agriculture, community and regional development, veterans, or energy budget functions used fiscal capacity or cost factors in their formulas. Table 6 summarizes how the three targeting factors were combined in the 149 formula grants we reviewed, both in total and by budget function. The fact that a combination of the three targeting factors did not appear in most grant formulas, and fiscal capacity did not play a significant role in explaining the variation in grant funding to states, raises the logical question as to what factors did influence grant allocations. In this regard, the most significant as well as reliable explanatory variable in the grant targeting model was one that indicated whether or not a state was very small. 
This variable was a proxy for states that benefit most from formula hold harmless provisions and guaranteed funding floors, which have the effect of providing a minimum grant to each state regardless of its size. The results indicated that a very small state with average needs and fiscal capacity would receive per capita grant funds 20 percent higher than a larger state with the same needs and fiscal capacity. Finally, despite our finding that many grant formulas contained need factors and some contained fiscal capacity and/or cost factors, the measures used to allocate funds were often poor proxies for the three factors. For example, 28 of the 149 grant formulas we reviewed used a state’s share of the U.S. population as a proxy for need. Generally, population is a poor proxy for program needs because it allocates funds to states in proportion to their total populations, which do not necessarily correspond to the number of people who actually need a particular program’s services. Also, per capita personal income is a frequently used but poor proxy for fiscal capacity because it does not comprehensively measure state income. Specifically, it fails to capture income produced, but not received, in a state. Appendix V provides a more detailed discussion of the targeting problems that result when poor proxies of need, fiscal capacity, or cost are used. Our analysis suggests that most grants are designed neither to reduce substitution nor to target funding to states with relatively greater programmatic needs and fewer fiscal resources. This is an indication that the federal government may be getting less fiscal impact than it could from the dollars it spends. Our literature synthesis implied that each additional federal grant dollar results in about 40 cents of added spending on the aided activity.
This means that the fiscal impact of the remaining 60 cents is to free up state funds that otherwise would have been spent on that activity for other state programs or tax relief. Grants are not the only type of federal subsidy tool in which design issues have undermined fiscal impact. Our prior work has shown that programs implemented through subsidies, such as loans and tax expenditures as well as grants, sometimes fall short of expectations because federal funds are transmitted through a network of third parties who have their own spending priorities or who would have undertaken subsidized activities anyway. Given the complex and evolving relationship between the federal and state governments and their shared responsibilities for most domestic programs, it is understandable that observers will have different views of substitution. Some might see the substitution we identified as reasonable, given differences in state and federal priorities and a desire to provide states with managerial flexibility. As economists have shown, some substitution is to be expected whenever a grant is received—whether the funds go to an individual, an organization, or a state government. From the perspective of a recipient, the funds are simply additional income, to be used according to the recipient’s own preferences, within the limitations imposed by the grant. This is why a grant’s design together with the degree of state commitment to federal priorities determine the ultimate fiscal impact of federal grant dollars. Also, in our federal system the balance of domestic responsibilities may be shifting toward the states. Thus, providing states with a measure of fiscal relief, albeit indirectly, could be considered a legitimate role for the federal grant system. Others might argue that if the provision of fiscal relief is to be the primary goal of the federal grant system, then this relief should be allocated in a manner that allows for adequate oversight and control by the Congress. 
If fiscal relief is accepted as a policy goal, there are a variety of alternatives available to the Congress to allocate this relief. The alternative we examined would target the relief to states with greater programmatic needs and fewer fiscal resources. Our analysis showed that existing grant formulas do not allocate federal aid to states in a targeted manner. This may have occurred because grant formulas or eligibility rules were constructed too broadly, grant floors and ceilings allocated funds too widely, or the circumstances that created a need for the program may have changed. Notwithstanding the importance policymakers may place on providing states with fiscal relief, the question remains as to whether the federal government can afford this approach and still accomplish objectives of national importance in an era of increasingly scarce federal resources. The issues we have raised concerning grants are part of a larger problem of how to improve government performance concurrent with downsizing. A focus on cost-effectiveness will be especially important as agencies implement the Government Performance and Results Act of 1993, thus turning the federal government’s focus to outcome-based measures of grant performance. As a consequence, it will be increasingly important to design grant programs so that the federal dollars needed to produce desired outcomes reach their intended targets. Moreover, substitution raises questions about the federal role in the federal system. In many cases, the federal government created grant programs because of the view that states were not funding certain services to a degree consistent with national, rather than purely local, policy objectives. However, the difference in priorities that provides the rationale for such grants also makes it more likely that states will attempt to use grant dollars to replace their own funds, thus converting specific-purpose aid to general fiscal relief. 
While the federal government may still wish to pursue national objectives in these areas, it should be recognized that, because of substitution, such objectives may be costly to achieve. The potential for substitution may increase when the federal government chooses to finance areas in which state spending is already significant. Historically, initial federal involvement in funding state spending in an area may have occurred when little or no state funds were being committed, thus prompting states to commit resources for the first time. But as states’ commitment to funding those areas has grown over time, or the federal government has chosen to enter an area where state spending has traditionally been large, the potential for substitution may have grown as well. There are many factors that must be reconciled in considering the budgetary implications of grant design. Taking one path, the Congress could consider redesigning grants to reduce substitution and increase targeting. For example, to reduce substitution and increase the likelihood that federal grant funds lead to greater total spending on aided programs, greater use of state matching, with reduced federal shares, and maintenance-of-effort provisions that track inflation and program growth can be considered. However, as previously noted, policymakers would need to consider the potential losses in state spending flexibility that could occur as a result of adding spending restrictions. Also, if formula grants were redesigned to include a combination of targeting factors, a larger share of federal aid could be allocated to those states and communities with relatively greater programmatic needs and fewer fiscal resources. We recently reported that greater targeting of grant formulas offers a strategy to bring down federal outlays by concentrating reductions on jurisdictions with relatively fewer needs and greater fiscal capacity to absorb cuts. 
Taking a different path, the Congress could use information about the relative performance of grant programs to consider which programs may have outlived their usefulness. The Congress may decide that the benefits of particular programs are not being achieved in a cost-effective manner due to substitution and a lack of targeting. Accordingly, the Congress may decide that such programs no longer represent the best use of scarce federal resources. Targeted reductions based on the relative performance of federal programs can help promote a government whose responsibilities are better matched to the resources available. Such reductions could be used either to cut the deficit or invest in other federal programs that the Congress judges to be more cost-effective. However, because the evidence on whether states would replace reductions in federal grant funds is inconclusive, and because replacing federal funds would mean reductions to other state programs or increases in state taxes, the Congress would need to consider the costs and benefits of individual programs carefully in selecting which programs to reduce or eliminate. As arranged with the Committee, we are sending copies of this report to the Director of the Office of Management and Budget, cognizant congressional committees, and other interested parties. We will also make copies available to others upon request. The major contributors to this report are listed in appendix VI. If you have any questions, please call me at (202) 512-9573. Federal grants have historically served as vehicles through which the federal government attempted to achieve a variety of national goals by providing funding to other levels of government to carry out specific federal policies. In particular, economists have cited the role federal grants play in encouraging state and local governments to provide more of the public goods and services deemed beneficial from a national—rather than a purely state—perspective. 
From the perspective of economic theory, federal grants can play an important role in stimulating spending in areas where public benefits or costs cross jurisdictional lines. The problems addressed by the grant system in these types of situations are termed positive and negative externalities, respectively. When a jurisdiction does not receive—that is, consume—all the benefit from a public good it produces because some of the benefit accrues to non-residents, the jurisdiction has little incentive to produce the good in sufficient supply to meet society’s total demand. According to this logic, taxpayers from a sparsely populated state would likely be unwilling to spend their scarce tax dollars to construct and maintain highways in their state large enough to support private and commercial traffic from other states. If other states followed the same thinking, the highway system would be inadequate from a national standpoint because state taxpayers do not share the benefits that accrue to non-residents traveling through their states. Because individual states are unlikely to supply the quantity and quality of interstate highways demanded by interstate travelers, federal grants to states for the construction and maintenance of highways can be used to induce the states to fulfill this need. Economists also argue that federal grants can play a role in distributing income to communities with higher social service needs and smaller tax bases. Some states have higher concentrations of poor people or other service populations and smaller tax bases with which to pay for their own service needs. Accordingly, significant disparities can arise either in the level of services states provide or in the tax burdens states incur to provide a given level of services. Some experts suggest that such fiscal disparities across states argue for a federal role in helping states with greater needs. 
Federal grants can satisfy this objective by allocating aid to states through formulas that provide relatively greater funding to states with higher needs and lower fiscal capacities, such as occurs with Medicaid. Or, according to the logic of the General Revenue Sharing program, they can provide broad funding designed primarily to reduce disparities in fiscal capacities across communities. Another goal for federal grants is supporting state spending on goods that are deemed meritorious from a national perspective and should therefore be available to all. Unlike redistributive grants, grants for merit goods tend to be for specific categories of goods, such as the arts, gifted and talented educational programs, or assisted housing. Federal grants have played a variety of roles beyond those most frequently cited by economists. Increasingly, grants have become a vehicle for implementing the federal government’s regulatory agenda at the state and local level. By attaching conditions to aid, the federal government has sought to achieve a variety of goals, such as reduced discrimination, increased highway safety, reduced energy consumption, and reduced pollution. Economists have also argued that federal grants, such as unemployment insurance, can play a role in stabilizing economic swings that occur at the state and local levels during recessions, when demand for public services rises as revenues decline. The public administration perspective has shifted in recent years to include a more business-like approach to intergovernmental aid. For example, some have argued that grant awards should be provided in a competitive manner based in part on whether a recipient achieves performance goals. Finally, states have been increasingly vocal about the need for federal grants with fewer restrictions on how funds are to be spent so that states can address the unique needs of their citizens and provide quality and cost-effective services.
This report examines the extent to which the federal grant system succeeds in two fiscal objectives often cited by public finance experts. First, do grants succeed in encouraging states to use federal dollars to supplement rather than replace their own spending on nationally important activities? The use of federal grant dollars to replace a state’s own spending is frequently referred to as substitution. Second, do grants succeed in reducing differences—or mismatches—between states’ fiscal resources and programmatic needs? This appendix details the scope and methodology we used to answer these questions. To address substitution, we (1) synthesized the published economic and political science literature regarding the influence of federal grants on state spending, (2) identified dimensions of grant design that influence the extent of substitution, and (3) evaluated the quantitative estimates of the fiscal impact of federal grant spending reported in the literature. To identify the universe of grant programs and catalog their design features and other characteristics necessary for our analysis, we used information from the Catalog of Federal Domestic Assistance, reports by the Advisory Commission on Intergovernmental Relations, the United States Code Annotated, and the United States Code of Federal Regulations. The 633 grants we identified represented the total of grants available to state governments in fiscal year 1994. One part of our analysis focused on the theory underlying the influence of grants on state spending decisions. We began with five summary reviews of the literature, and, because the last of these was published in 1985, we also searched computerized indexes for more recent studies. 
From this body of work, we identified three dimensions of grant design that influence the impact of federal grants on state spending: whether a grant was unrestricted or restricted to a specific purpose; whether or not a state contribution was required, either in the form of matching federal payments or maintaining the level of fiscal effort that existed prior to the grant; and whether or not there were ceilings on the total the federal government would pay out on matching grants. We also identified articles that provided information on grant impact for different service areas, such as education, health and hospitals, highways, social services, and welfare. We collected this information to determine whether grants for different service areas had different impacts, apart from the impacts associated with different grant designs. Next, we identified articles containing quantitative estimates of the impact of federal grants on state spending and assembled the information in a database. Each observation in the database was an estimate from a study, with some studies providing multiple estimates. For each observation, we recorded key information from the study (e.g., author, date, sample type, model used, grant impact estimates, statistical significance of the estimates, potential biases, and estimated price or income elasticities). When studies provided information about the grant design features or functional categories of spending, we also recorded that information, including (1) grant form (categorical, block, unrestricted, or all), (2) matching or non-matching, (3) open-ended or closed-ended, (4) the presence/absence of maintenance-of-effort (MOE) provisions, and (5) grant service area (all, welfare, highway, education, health/hospital, or social services). 
Using this database, we compared the reported estimates of grant impact for (1) studies completed during different time periods, (2) studies using different sample types, (3) grants with different designs, and (4) grants for different service areas. First we calculated the mean, the median, and the 25th and 75th percentile observations (the mid-range) of all the estimates in our database. Then we extracted subsets of the database that contained the grant design features we were assessing. For example, to summarize the estimated expenditure impact of grants characterized as “matching,” we extracted all records for which the “matching” field contained a “yes” and calculated the same descriptive statistics. We compared the results for matching grants to non-matching grants, open-ended to closed-ended, etc. Table II.1 summarizes the results for the different time periods, grant design features, and sample types we analyzed. [Table II.1: Summary of Econometric Estimates of the Impact of an Additional Dollar of Federal Grants on Total Spending for Aided Activities. The table reports, for each time period, grant design feature, and sample type, the impact of $1 in federal grants on total spending and the substitution or increase implied by the estimate; the detailed figures are not reproduced here.] Several notes accompany the table. The time period is the year a study containing an estimate was published; the periods of state and local spending examined in the studies ranged from 1942 to 1990, but centered on the 1960s to the 1970s. One row reports results across all time periods. Only two estimates were characterized as pertaining to unrestricted grants and only one as pertaining to grants with maintenance-of-effort; therefore, we did not include those results as separate subsets. The sample types were one year of data across all states, aggregate state data across multiple time periods, and data across all states for more than one time period. Similar to the aggregate results in the table, estimates of federal grant impact by service area were generally higher in earlier periods of study and lower in more recent years. 
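The subset-and-summarize procedure described above can be illustrated with a minimal Python sketch. The records, field names, and impact values below are hypothetical stand-ins, not the study's actual database schema or estimates.

```python
from statistics import mean, median, quantiles

# Hypothetical estimate records; the fields are illustrative stand-ins
# for the study database's "impact" and design-feature columns.
estimates = [
    {"impact": 0.42, "matching": True,  "open_ended": True},
    {"impact": 0.85, "matching": True,  "open_ended": False},
    {"impact": 0.33, "matching": False, "open_ended": False},
    {"impact": 0.58, "matching": True,  "open_ended": True},
    {"impact": 0.15, "matching": False, "open_ended": False},
]

def summarize(records):
    """Mean, median, and mid-range (25th to 75th percentile) of impacts."""
    values = sorted(r["impact"] for r in records)
    q1, q2, q3 = quantiles(values, n=4)  # quartile cut points
    return {"mean": round(mean(values), 3),
            "median": round(median(values), 3),
            "mid_range": (round(q1, 3), round(q3, 3))}

# Summary for all estimates, then for the "matching" subset only,
# mirroring the extract-and-compare approach described above.
all_stats = summarize(estimates)
matching_stats = summarize([r for r in estimates if r["matching"]])
```

Each additional design feature (open-ended, MOE, and so on) would be handled the same way: filter the records on that field, then recompute the descriptive statistics for comparison.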
Because our analysis did not provide support for any differences in the expenditure impact of grants across different service areas, apart from the impacts associated with the other features we examined, we did not report those results. To assess whether grants contained the design features associated with substitution, we developed a second database of the 633 grants available to states in fiscal year 1995. We obtained the data from a 1995 Advisory Commission on Intergovernmental Relations (ACIR) study of the federal grant system, entitled Characteristics of Federal Grant-in-Aid Programs. This study provided summary information on the matching rates and the open-ended versus closed-ended status of individual grant programs. ACIR also provided us with additional unpublished support schedules identifying grants that contained MOE provisions. ACIR’s data did not include spending information for each grant. Therefore, we obtained fiscal year 1994 estimated obligations for each grant from the electronic version of the 1994 CFDA database. We sorted and tallied all 633 grants as well as the largest 87 grants, representing 95 percent of grant funds, and their obligations according to whether they (1) were matching, (2) were closed-ended, and (3) had MOE provisions. For matching grants, we also tallied those with federal shares greater than 50 percent. We compared these counts and sums to the total for the database or for the largest 87 grants. MOE provisions are more effective when they are designed to maintain state fiscal effort at a level that keeps pace with inflation and program population growth. To determine whether MOE provisions in grants are designed this way, we searched the CFDA database for grants that contained MOE provisions. Of the 28 programs we found, we examined only closed-ended programs because the matching rates that drive state contributions for open-ended programs would override the influence of an MOE provision. 
We ranked the closed-ended programs by their funding and selected for review the eight largest, constituting 92 percent of the funding for those programs. To ensure that MOE provisions for the eight grants we reviewed were up-to-date, we cross-referenced the public laws and their amendments to the relevant sections of the United States Code Annotated and/or the Code of Federal Regulations. We then analyzed the MOE provisions to determine what they entailed and whether they accounted for inflation or program population growth. To address targeting, we reviewed an extensive body of GAO case studies of formula grant programs and conducted our own aggregate analysis. For one part of the aggregate analysis we used a multivariate regression model to quantify the extent of targeting in the overall grant system. This model and its results are presented in appendix IV. For the other part of the aggregate analysis, we created a database of the 149 formula grants compiled from the 1994 CFDA. This database included information on whether a grant contained any of the three grant design features GAO has reported can target grants to jurisdictions with relatively greater disparities between fiscal resources and programmatic needs. These are fiscal capacity, cost differentials, and indicators of program needs. To clarify certain CFDA data or obtain missing information, we interviewed agency officials and searched relevant portions of the U.S. Code. We sorted and tallied the database according to the three targeting factors for all the grants and within 12 budget functions, and we calculated the share of formula grant programs containing the individual factors and the factors in combination. This part of our analysis was limited to the universe of 149 formula grants, representing 85 percent of federal grant funds to states in fiscal year 1994. Project grants—comprising most other federal grant spending—also could be examined from a targeting perspective. 
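The selection step, ranking programs by funding and keeping the largest until a coverage threshold is reached, can be sketched as follows. The program names, funding amounts, and the `select_largest` helper are all hypothetical; they illustrate the mechanics, not the actual programs reviewed.

```python
# Hypothetical program funding amounts (e.g., millions of dollars).
programs = {"Program A": 910, "Program B": 430, "Program C": 250,
            "Program D": 90, "Program E": 40, "Program F": 30}

def select_largest(funding, coverage=0.92):
    """Return the largest programs whose cumulative funding reaches
    the given share of total funding (an illustrative helper)."""
    total = sum(funding.values())
    selected, cumulative = [], 0.0
    for name, amount in sorted(funding.items(), key=lambda kv: -kv[1]):
        if cumulative / total >= coverage:
            break
        selected.append(name)
        cumulative += amount
    return selected

top = select_largest(programs)
```

With these invented amounts, the four largest programs cover the 92-percent threshold, so the two smallest are excluded from review.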
However, that analysis would have required us to determine whether agency funding decisions reflected differences in competing grant applicants’ fiscal capacities, program needs, and service costs. Moreover, funding decisions for project grants apply only to individual project applications, thereby limiting our ability to generalize from such decisions. In contrast, formula grants allocate funds according to a prescribed formula and are of a continuing nature. Therefore, our analysis of formula grant targeting could be limited to a relatively straightforward analysis of grant allocation formulas for the three targeting features we identified. Because your question concerned grants that funded programs, we excluded grants that exclusively funded administrative and/or planning activities. Further, we eliminated grants paid to states in lieu of real estate taxes owed on federal property located in a grantee’s jurisdiction because targeting factors are not relevant criteria for allocating such grant funds. We performed this review in accordance with generally accepted government auditing standards. We conducted our review from June 1995 through June 1996. In this report we discussed three grant design features that are related to substitution. This appendix discusses these features from an economic theory perspective. First, we provide an overview of the grant spending impacts that are predicted from the framework of the general consumer demand model. Thereafter, we review how the individual features work in theory either to stimulate state spending or increase substitution. Over the past 30 years, economists have adapted general consumer demand theory to model how a government’s expenditure patterns are likely to change in response to a grant. In that theory consumers are assumed to maximize their individual welfare subject to their preferences for the goods and services available to them, the prices they must pay for the goods, and the resources they have to spend. 
Thus, for grants, the model depicts a government that may “purchase” (1) goods aided by a grant, (2) all other public or private goods, or (3) some combination of the two. The quantity of goods the government can purchase is constrained by a budget consisting of its own revenues plus additional revenue from federal grants. The model demonstrates how the government would purchase as much of the aided and non-aided goods it could afford, within its budget constraint in accordance with the taxpayers’ collective preferences. How much more of an aided good a government purchases using its additional grant income depends on two factors: (1) taxpayers’ preferences for the aided good relative to other goods the government could purchase with the additional resources and (2) the incentives to purchase aided rather than non-aided goods that are built into the grant. According to economic theory, there are three types of incentives that can be used to encourage grant recipients to increase total spending on aided goods. As shown in figure III.1, the incentives work by restricting the use of funds to specified purposes (categorical or block grants), requiring recipients to contribute their own funds to obtain grant funds, and/or providing unrestricted federal matching of state funds. The theory also states that the effectiveness of these incentives depends on the budget priorities of state taxpayers. For example, if a community does not share federal priorities for spending on pollution control, the federal government may have to build into the grant more restrictions or incentives than if federal and community priorities were better aligned. Among the various types of federal grants, unrestricted grants provide the most discretion to recipient governments. 
Unrestricted grants—also known as unconditional or general-purpose grants—are pure income transfers from the federal government to recipients that do not stipulate what grant funds must be spent on or require any contributions from recipients’ own funds. Such grants provide the most discretion to recipient governments. The General Revenue Sharing program of the 1970s and 1980s is an example of an unrestricted grant. The program provided funds that could be used for virtually any governmental purpose. In theory, unrestricted grants are intended to help overcome geographical inequalities in fiscal well-being, rather than stimulate public spending for specific purposes. To achieve this objective, an unrestricted grant would provide more funds to jurisdictions with relatively low tax bases and high needs for public services and fewer funds to more fiscally sound jurisdictions. In contrast, conditional grants limit recipient discretion through restrictions designed around program goals, some of which are broader than others. Both categorical grants and block grants are considered conditional. However, while categorical grants feature narrowly-prescribed objectives, block grants authorize funds to be used for a wide range of activities within broadly-defined functional areas. Economic theory holds that conditional grants encourage more total spending on grant activities than unrestricted grants, and that unrestricted aid is more likely to be used for tax relief. To understand why this is so, consider the different spending responses of recipients to a gift certificate from a sporting goods store compared to an equivalent amount of cash. A gift certificate that exceeds the amount recipients normally would spend on sporting goods will tend to boost their total spending on sporting goods. With cash, they are likely to spend each additional dollar of income according to their preferences for all goods. 
Spending on sporting goods could be a small share of each additional dollar, such as 5 cents. In reality, communities receive federal grant dollars, not gift certificates, and these dollars are fungible with other community resources. For this reason, economists have concluded that grant recipients rarely are wholly constrained by the legal conditions attached to a grant. Rather, there will likely be an element of substitution in every grant as recipients find ways to replace their own funds with federal funds, freeing up local resources for other purposes. Overall, economic theory recognizes that $1 in conditional grants will not necessarily result in an additional dollar of state spending on the grant activity. Substitution also occurs when a community may have planned to spend more of its own resources on a particular purpose, even without a grant. In such cases, a conditional grant simply increases the budget available to the community and becomes, in effect, added income similar to the income provided through an unconditional grant. In this situation a community can substitute some or all of its conditional grant funds for other purposes, including tax relief. To extend the gift certificate analogy, the holder may have been planning to buy sports equipment before receiving the certificate. Because the gift certificate can replace the cash the holder was planning to spend on sporting goods, the holder has, in effect, received a grant of additional income that can be used for purposes unrelated to sports. A sports enthusiast may add the certificate to what she was planning to spend on sporting goods; someone else with less enthusiasm for sports may use the gift certificate to replace all of his planned spending. Some federal grants include matching provisions that require states to share the cost of providing the aided service with the federal government. 
For example, a matching grant may require states to spend 50 cents from their own revenue sources for each dollar of federal funds provided. Thus, 50 cents in state spending on a matching program yields $1.50 in program funds. Non-matching grants, in contrast, provide funds to recipients without any requirement for state cost-sharing. According to economists, matching grants encourage more state spending on aided goods than non-matching grants, other factors being equal. Both matching and non-matching grants provide additional income to recipient governments. Because grant funds are partially fungible, this income, like any other type of income, permits recipients to consume more of both aided as well as non-aided activities according to their preferences. However, matching grants, in addition to providing additional income, also lower the “price” to the recipient government of the aided good relative to the other goods it could purchase with the funds. For example, with federal matching of 75 percent of total spending, a state could spend 25 cents on an aided good and obtain 75 cents in federal funds, for a total maximum increase in spending of $1. Without matching, another dollar of spending on an aided good still costs a dollar. Therefore, the same federal subsidy of 75 cents yields a maximum of only 75 cents of total additional spending. How effective a matching grant will be in increasing a recipient’s spending depends on the recipient’s preferences for aided versus non-aided activities (including tax relief). If a recipient wants more of an aided activity, such as a computer education program, the price effect may produce a strong spending response. For activities the recipient desires less, the price effect may be weaker. In the extreme, if the recipient does not want more of an aided activity, the price effect will be negligible. 
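The matching arithmetic above can be written out as a small sketch. The function names are illustrative; the logic simply restates that when the federal government pays a share m of total program spending, a state dollar buys 1/(1 - m) dollars of the aided good, so the good's "price" to the state is 1 - m.

```python
def total_spending(state_dollars, federal_share):
    """Total program spending generated when the federal government
    pays `federal_share` of total spending (illustrative sketch)."""
    return state_dollars / (1.0 - federal_share)

def state_price_per_dollar(federal_share):
    """Cost to the state of one dollar of the aided good."""
    return 1.0 - federal_share

# With a 75-percent federal share, 25 cents of state spending yields
# $1.00 of total spending, and the aided good costs the state only
# 25 cents on the dollar.  With the 50-cents-per-federal-dollar example
# above, the federal share of total spending is 2/3, so 50 cents of
# state spending yields $1.50 in program funds.
```
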
The use of maintenance-of-effort provisions can help make up for the lack of a price effect in non-matching grants by requiring states to continue a designated spending level from their own sources in order to receive the federal assistance. Because states must maintain a prescribed level of spending, their ability to substitute federal funds for their own is limited. Over time, however, increases in the population served by the program, inflation, and other determinants may cause federal spending for the program to rise. Therefore, to retain its effectiveness as an incentive for states to contribute their own funds, a maintenance-of-effort provision should contain adjustment mechanisms so that required state contributions keep pace with such trends. For most federal matching grants, the federal share of total spending is limited to a fixed amount or ceiling. Such grants are considered “closed-ended.” Thus, any state spending beyond the amount needed to obtain the maximum of federal funds occurs without any incentive in the form of a price reduction resulting from the federal match. Closed-ended grants may also contain maintenance-of-effort provisions, which require state or local governments to maintain a prescribed level of expenditures from their own sources on the aided function. In theory, maintenance-of-effort provisions have an impact similar to a matching requirement since the recipient must continue to spend from its own resources on the aided function at a required level to receive additional federal aid. For a few federal matching grants, the federal share of program spending is unlimited—or “open-ended.” Open-ended grants consist primarily of a few large entitlement programs, such as Medicaid and Foster Care. The federal government has limited control over the amount of spending on open-ended grant programs, mainly through variations in the strictness of the grant eligibility requirements. 
According to economists, a closed-ended matching grant will be as stimulative as an open-ended matching grant as long as state spending on the aided activity remains below the level needed to obtain the maximum federal contribution. In this case, a closed-ended grant has the same stimulative income and price effects as described for a matching grant. However, the fiscal impact of a closed-ended grant will be different when state spending on the grant activity is above the federal grant ceiling. In this situation, the price reduction created by federal matching is eliminated for the additional spending beyond the limit of the federal contribution. Therefore, the grant has only an income effect, and grant funds simply add to the total resources of the community with an effect equivalent to an unconditional grant. The community can substitute part or all of the grant funds for its own spending and has full discretion over the use of the freed-up resources. As previously described, effective maintenance-of-effort provisions, which track inflation and program growth, can make up for the loss of the price incentive for closed-ended matching grants when spending is beyond the federal limit. As part of our targeting analysis, we sought to determine if current federal grant formulas allocate funds in a manner that targets states with greater mismatches between programmatic needs and fiscal resources. To do this, we developed a grant targeting model, modified the model to reflect the influence of funding floors and hold-harmless formula provisions, and tested the model using a statistical technique known as multiple regression. The regression analysis enabled us to estimate the influence of state fiscal capacity, apart from the influence of the other independent variables, on per capita federal grant allocations to the 50 states. 
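The closed-ended logic above reduces to a simple marginal-price rule, which can be sketched as follows. The function and parameter names are hypothetical; the point is that the match cuts the state's marginal price only while total spending is below the level at which the federal contribution hits its ceiling.

```python
def marginal_state_price(total_spending, ceiling_spending, federal_share):
    """Marginal cost to the state of one more dollar of the aided good
    under a closed-ended matching grant (illustrative sketch).

    Below `ceiling_spending` (the total spending level at which the
    federal contribution reaches its cap), the match reduces the
    state's price to (1 - federal_share); above it, the price effect
    disappears and another dollar of the aided good costs a dollar.
    """
    if total_spending < ceiling_spending:
        return 1.0 - federal_share
    return 1.0
```

For example, with a 50-percent match and a ceiling reached at $1 million of total spending, the state's marginal price is 50 cents below the ceiling and a full dollar above it, which is why spending beyond the ceiling behaves like spending under an unconditional grant.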
We found that, after controlling for indicators of program needs, such as poverty, population under age 18, and highway miles, and for service cost differentials, fiscal capacity did not play a statistically significant role in allocating aid to states. In fact, the most significant variable in the model was a proxy for the presence of funding floors and hold-harmless provisions in grant formulas. The remainder of this appendix discusses, in technical detail, (1) the theory that provided the basis for our analysis and the specification of a grant allocation model suitable for estimation using multiple regression, (2) the data we used to estimate the grant targeting model, and (3) the results of our analysis. In theory, targeted grants should correct for differences in the fiscal conditions of state governments so that taxpayers in less wealthy states can provide services comparable to those in wealthier states at comparable tax rates. Under the theory of grant targeting, a state’s fiscal condition can be described in terms of expenditure needs compared to revenues. Technically, this is defined as the gap between the revenues that can be raised from local sources with an average tax burden on local residents (i.e., fiscal capacity) and the expenditures required to finance an average level of public services (i.e., needs). States with positive gaps are regarded as being in better fiscal condition to provide services than those with negative gaps. States with average fiscal capacities and average service needs are in the middle. In a theoretical redistribution scheme, states with positive gaps would transfer resources to those with negative gaps through an unconditional grant or transfer of funds. In practice, grants are allocated from a general fund at the federal level and distributed to eligible states for particular purposes according to a formula. The design of a grant targeting formula will depend on the type and degree of equity desired. 
There are two types of equity policymakers can consider—beneficiary equity and taxpayer equity. To achieve beneficiary equity, grant funds would need to be allocated in proportion to each state’s potential program needs and adjusted for differences in service costs. Achieving taxpayer equity requires considering fiscal capacity in addition to the needs and cost factors used to achieve beneficiary equity. Beneficiary and taxpayer equity cannot be achieved simultaneously. Maximizing beneficiary equity provides equal federal funding per beneficiary, resulting in unequal taxpayer burdens across states. Maximizing taxpayer equity equalizes state taxpayer burdens, resulting in unequal federal funding per beneficiary. Another equity goal falls between achieving either full taxpayer or full beneficiary equity, whereby differences in state taxpayer burdens are reduced but not totally eliminated by allowing some differences in funding per beneficiary across states. In prior work we referred to this goal as “balanced equity.” The model in figure IV.1—which we refer to as the grant targeting model—incorporates the need, cost, and fiscal capacity factors, consistent with achieving balanced equity:

G/C = b0 + b1(Need1) + ... + bn(Needn) + bf(FC/C)

where G = per capita grant allocation; Need = program need indicators, such as poverty rates, population of school age children, unemployment rates, etc.; b = coefficients representing the relative influence of each need indicator and the fiscal capacity indicator on the grant allocation; FC = per capita fiscal capacity; and C = cost of public services subsidized by federal grants. According to the grant targeting model, the dependent variable is per capita grant allocations to states, adjusted for costs (G/C). The independent variables are a variety of state program need indicators (Need) and state per capita fiscal capacity, also adjusted for costs (FC/C). 
The hypothesis implied by the model is that the dependent variable, G/C, would be a positive function of need; i.e., states with greater needs should receive larger per capita grants. In contrast, the model implies that the dependent variable would be a negative function of fiscal capacity; i.e., states with greater resources to provide program services on their own would receive smaller per capita grants. Our objective for estimating the grant targeting model was to determine the extent to which the fiscal capacity variable explained the variation in the allocation of federal funds to states, controlling for a variety of plausible indicators of state program needs and cost differentials. Therefore, we tested the hypothesis that the fiscal capacity variable would have the predicted negative sign and be statistically significant. We included the need indicators primarily as control variables that would enable us to more accurately assess the impact and significance of the fiscal capacity variable. Our ability to accurately estimate the impact of the model’s need factors on aggregate grant allocations was limited. In contrast to the fiscal capacity variable, there is no single or aggregate measure that accurately represents the program goals and objectives of all the grants in the system. Therefore, it was difficult to determine the effects an individual needs indicator, such as the school age population, had on the allocation of aggregate grant funds. Because each grant program uses a unique set of factors to allocate funds, a particular need indicator used to distribute funds for one program may play no role in other programs. Consequently, in estimating the influence of a variety of need indicators on aggregate grants allocations, the effects of the need indicators may, to a certain extent, cancel one another out. 
Thus, the statistical significance or insignificance of a particular need indicator in this analysis does not provide an adequate basis for drawing conclusions about its relative importance in the allocation of federal grants. However, used in combination, the need variables provided a valid control to isolate the effect of needs from that of fiscal capacity on aggregate grant allocations. The grant targeting model describes the allocation of grant funds as a function solely of state needs and fiscal capacities, adjusted for costs. However, many grants contain funding floors and hold-harmless provisions that guarantee each state a minimum grant allocation, regardless of its needs and fiscal capacity. This has the effect of providing smaller states greater per capita grant allocations than larger states. Therefore, in specifying the model, we created two dummy variables representing very small states (those with populations less than .25 percent of the total United States population) and small states (those with populations between .25 percent and .5 percent) to serve as proxies for the influence of funding floors and hold-harmless provisions on grant allocations. When two variables have a joint effect over and above the effects of each factor separately, it is considered “interaction.” Given the presence of funding floors in most federal grant formulas, we thought it likely that one or both of the dummy variables would be statistically significant. Therefore, to test whether the effect of fiscal capacity was significantly different for the smaller states, we included interaction terms to provide separate fiscal capacity coefficients for very small, small, and all other states. We also deflated the two fiscal variables, per capita grant allocation and fiscal capacity, by an input-cost index to control for the different costs states face in providing program services. Finally, all variables were constructed as indexes, having weighted average values of 1.0. 
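The dummy and interaction variables described above can be sketched directly. The population-share thresholds come from the text; the function names and the handling of values exactly on a boundary are assumptions for illustration.

```python
def size_dummies(pop_share):
    """D1 = 1 for very small states (< 0.25% of U.S. population);
    D2 = 1 for small states (0.25% to 0.50%).  Boundary handling
    (half-open intervals) is an assumption, not from the report."""
    d1 = 1 if pop_share < 0.0025 else 0
    d2 = 1 if 0.0025 <= pop_share < 0.005 else 0
    return d1, d2

def interactions(pop_share, ttr_index):
    """Interaction terms D1*TTR and D2*TTR, which give separate fiscal
    capacity coefficients for very small and small states."""
    d1, d2 = size_dummies(pop_share)
    return d1 * ttr_index, d2 * ttr_index
```

For a state with 0.2 percent of the U.S. population, D1 is 1 and D2 is 0, so its TTR index enters the model twice: once through the common fiscal capacity term and once through the very-small-state interaction term.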
Measuring all variables as indexes allowed the regression coefficients in the statistical model to be interpreted as elasticities (i.e., the percent change in the dependent variable—per capita grant allocation—in response to a 1 percent increase in an independent variable from its mean value). This facilitated the interpretation and reporting of results and minimized problems of multicollinearity among the independent variables. Figure IV.2 shows our specification of the grant targeting model. We used data for the 50 states for 1994 for per capita grants, U.S. population, population under age 18, population over age 60, wages, unemployment, lane miles, vehicle miles, and housing. For minority and urban populations we used 1990 data. Finally, we used average 1992-1994 data for fiscal capacity and the population in poverty. Table IV.1 defines the variables (all expressed as indexes) used to estimate the model:

Grant allocation (G): A state’s per capita grant allocation divided by (1) the U.S. average per capita grant allocation and (2) the rental/wage cost deflator (c), which adjusts for state differences in the costs of providing services.

Fiscal capacity (TTR): A state’s average per capita total taxable resource base divided by the U.S. average per capita resource base, all divided by the rental/wage cost deflator (c).

Poverty: The share of a state’s average population living under the poverty line divided by the share of the U.S. population living under the poverty line.

Unemployment: The share of a state’s population that is unemployed divided by the share of the U.S. population that is unemployed.

Minority population: The share of a state’s population classified as minority divided by the share of the U.S. population classified as minority.

Urban population: The share of a state’s population living in urban areas divided by the share of the U.S. population living in urban areas.

Population under 18: The share of a state’s population under the age of 18 (a proxy for school age children) divided by the share of the U.S. population under the age of 18.

Population over 60: The share of a state’s population over the age of 60 (a proxy for the senior citizen population) divided by the share of the U.S. population over the age of 60.

Vehicle-miles: The per capita number of interstate vehicle-miles travelled in each state relative to the per capita number of vehicle-miles travelled in the U.S.

Lane-miles: The per capita interstate lane-miles in a state divided by the per capita interstate lane-miles in the U.S.

Age of housing: The per capita share of a state’s housing stock built before 1939 divided by the per capita share of the U.S. housing stock built before 1939.

Dummy - very small states (D1): Takes the value 1 for states with populations less than .25 percent of the U.S. population and 0 for all other states.

Dummy - small states (D2): Takes the value 1 for states with populations between .25 percent and .50 percent of the U.S. population and 0 for all other states.

Interaction - very small states: The product of D1 and the TTR index.

Interaction - small states: The product of D2 and the TTR index.

Because the variables are expressed relative to other states, each state’s index should be compared to 1.00, the national average. Table IV.2 displays the data on each variable. For example, Rhode Island has a per capita, cost-adjusted, fiscal capacity index (TTR) of 0.95, very close to the national average. However, Rhode Island has a per capita, cost-adjusted grant allocation index of 1.24, which is 24 percent above the national average. In contrast, Florida, with a TTR index that is also close to average (0.93), has a grant allocation index of 0.76, which is 24 percent below average. Table IV.3 is a correlation matrix of the data. Multicollinearity among the possible regressors did not appear to be a serious problem. In addition, variance inflation factors that measure the degree of association between each independent variable and all the other independent variables in the model suggested that collinearity was not a problem in our sample. We first estimated the model using ordinary least squares (OLS). 
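The estimation step can be sketched with a stripped-down, two-variable version of the model: regressing a per capita grant index on a fiscal capacity (TTR) index, both centered on 1.00 as described above, so the slope reads as an elasticity. The state labels and index values are invented for illustration (the full model also includes the need indicators, dummies, and interaction terms), and this toy data is constructed to show a negative slope, not to reproduce the report's finding.

```python
# Hypothetical (grant index, TTR index) pairs for five states.
states = {"A": (1.10, 0.90), "B": (0.95, 1.05), "C": (1.00, 1.00),
          "D": (0.80, 1.20), "E": (1.15, 0.85)}

g = [v[0] for v in states.values()]    # per capita grant index
ttr = [v[1] for v in states.values()]  # fiscal capacity index

# Simple (bivariate) OLS via the usual covariance/variance formulas.
n = len(g)
mean_g, mean_ttr = sum(g) / n, sum(ttr) / n
cov = sum((x - mean_ttr) * (y - mean_g) for x, y in zip(ttr, g)) / n
var = sum((x - mean_ttr) ** 2 for x in ttr) / n
slope = cov / var                      # reads as an elasticity here
intercept = mean_g - slope * mean_ttr
```

With these invented values the slope is negative: a 1 percent rise in the TTR index is associated with a fall in the grant index, which is what a targeting formula would produce. The report's actual multivariate estimate, by contrast, found the TTR coefficient negative but statistically insignificant.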
The results of this regression are shown in table IV.4, which reports each estimated coefficient with its t-statistic in parentheses for both the ordinary least squares and the weighted versions of the model. The model explained 86 percent of the variation in per capita grant allocations. Although the sign of the fiscal capacity variable, TTR, was negative as hypothesized, the variable was not statistically significant. According to this result, controlling for costs and a variety of need indicators, the fiscal capacity variable had no impact on per capita grant allocations to the larger states, which received 94.2 percent of the grant allocations we analyzed. Also, the dummy variable representing very small states was significant at the 99 percent confidence level; the dummy variable representing small states was not significant. Furthermore, the interaction variable for very small states was positive and significant at the 99 percent confidence level. These results suggest that a very small state with average needs and fiscal capacity would receive 30 percent higher grant funds per capita than a larger state with the same needs and fiscal capacity. They also suggest that per capita grant allocations were a positive function of fiscal capacity for the states that benefitted most from hold-harmless provisions in formulas. The coefficients for lane-miles, age of housing, and minority population were positive and statistically significant, suggesting that relatively more per capita grant funds were allocated to states with greater lane-mileage, older housing stock, and higher minority populations, weighted for their different population shares. The coefficients for the other six need indicators in our model were not statistically significant.
As noted previously, because we used program-specific need indicators to explain the variation in aggregate grant allocations, caution must be used in drawing conclusions about the significance or insignificance of any particular need indicator. We tested whether the variance of the error terms of our estimated equation was constant (homoscedastic) by using a basic version of the White test. The results suggested that the age of housing variable was significantly associated with the error term and that we should reject the hypothesis that the variance of the error terms was constant; in technical terms, the errors were heteroscedastic. This indicated that, while the OLS coefficient estimates were unbiased, the standard errors could be biased, making tests of the statistical significance of the coefficients imprecise. To correct for this potential bias, we re-ran the equation using independent variables that were weighted by the age of housing variable. The results of the weighted model are also shown in table IV.4. The weighted version of the model explained almost 98 percent of the variation in per capita grant allocations. In this version, the fiscal capacity indicator continued to be statistically insignificant and to have a negative coefficient, and the dummy and interaction variables had essentially the same order of magnitude and significance as in the unweighted model. However, in this version, the per capita grant funds a very small state with average needs and average fiscal capacity would receive were only 20 percent higher than the funds a larger state with the same average needs and fiscal capacity would receive. From all of these results, we concluded that a state’s fiscal capacity was not an important factor in targeting most closed-ended grant funds to lower-capacity states. Moreover, we concluded that for very small states, per capita grant allocations were a positive function of fiscal capacity.
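The heteroscedasticity test and the weighted re-estimation can be sketched with simulated data in which the error variance grows with an "age of housing" regressor. The data and the single-regressor setup are illustrative assumptions, not the report's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Simulated data: the error standard deviation rises with the housing-age
# index, so the errors are heteroscedastic by construction.
housing = rng.uniform(0.5, 2.0, n)
y = 1.0 + 0.5 * housing + rng.normal(0.0, 0.3 * housing)

# OLS estimate and residuals.
X = np.column_stack([np.ones(n), housing])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Basic White test: regress the squared residuals on the regressor and
# its square; n * R^2 is approximately chi-squared under the null of
# constant error variance (homoscedasticity).
e2 = resid ** 2
Z = np.column_stack([np.ones(n), housing, housing ** 2])
g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
ss_res = np.sum((e2 - Z @ g) ** 2)
ss_tot = np.sum((e2 - e2.mean()) ** 2)
white_stat = n * (1.0 - ss_res / ss_tot)

# Weighted re-estimation: dividing through by the variable associated
# with the error variance makes the transformed errors closer to constant,
# so tests on the coefficients are more reliable.
w = 1.0 / housing
beta_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
```

The weighting leaves the coefficients interpretable in the same units; only the relative influence of each observation changes.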
In our analysis of grant targeting, we discussed how many of the formula grants we reviewed used poor proxies to measure state program needs, fiscal capacities, and cost differentials. In this appendix, we define in greater detail the three targeting factors and discuss how numerous formula grants contained measures that were poor proxies for those factors. Experts in public finance generally agree that targeted grants are designed to allocate funds according to three formula factors:

Workload: A proxy for the share of a state’s population needing services relative to the national average. For example, the ratio of each state’s low-income children to its population relative to the U.S. ratio would be a possible workload factor to distribute funds from the Maternal and Child Health Services Block Grant.

Fiscal Capacity: A proxy for a state’s ability to generate revenues from its own economic resources within the limits of its taxing authority. We have suggested the use of a U.S. Department of the Treasury-developed proxy, total taxable resources (TTR), because it captures all potential sources of taxes.

Cost Differential: A proxy for the relative costs of providing program services in a state, such as the formula used to determine the cost of producing housing in the HOME Investment Partnerships Program.

Formula grants, which comprise the vast majority of federal grant funds to states, are allocated to beneficiaries according to a mathematical statement that contains statistical measures, such as state population or per capita income. The effectiveness with which a formula grant targets funds depends on both the presence of the factors cited above and the quality of the statistical information used to measure them. A formula could contain measures of workload, fiscal capacity, and costs that would, in theory, target funds in the most equitable way. However, if a proxy used to measure a factor was inadequate, the distribution could still be inequitable.
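The interaction of the three factors can be made concrete with a stylized allocation formula. The functional form, the index values, and the dollar amount below are illustrative assumptions, not any actual program's statute.

```python
import numpy as np

# Hypothetical indexes for three states, each expressed relative to the
# national average (1.00), matching the three targeting factors above.
workload = np.array([1.2, 0.9, 1.0])   # share of population needing services
capacity = np.array([0.8, 1.1, 1.0])   # fiscal capacity (e.g., a TTR index)
cost = np.array([1.1, 1.0, 0.9])       # relative cost of providing services

# Each state's share rises with its workload and cost indexes and falls
# with its fiscal capacity index; shares are normalized so the full
# appropriation is allocated.
total_funds = 1_000_000.0
weights = workload * cost / capacity
allocation = total_funds * weights / weights.sum()
```

Under this sketch the high-need, high-cost, low-capacity state receives the largest share, which is the targeting behavior the three factors are meant to produce.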
Numerous GAO reports on formula grant programs have found that formula factors used to allocate funds were often poor proxies for measuring communities’ needs, fiscal capacity, or costs of providing services. (References to GAO reports on targeting issues can be found in the Related GAO Products list at the end of this report.) In one program we reviewed, for example, the needs indicators still favor more urbanized states. As a result, the oldest eligible metropolitan areas receive more generous funding, and newly emerging areas with more recent growth in AIDS cases receive less funding. The Maternal and Child Health (MCH) program was created in 1981 when 10 categorical program grants were consolidated into one block grant. Federal funding was allocated in the same proportions originally established under these 10 programs. In 1992 we reported that this method of distributing funding did not compensate states for their varying concentrations of children at risk. To distribute program funds in a more targeted manner, we recommended that the MCH formula use a state’s concentration of at-risk children as a proxy for programmatic needs. Nevertheless, the MCH formula still distributes funds according to its 1981 allocations. In another program, the use of a population-based factor had the effect of providing the same level of funding for each child and creating disparities in the provision of child care services for at-risk children. Furthermore, when combined with workload factors in a grant formula, a population factor may dilute a workload factor’s allocational effects. For example, the Environmental Protection Agency’s Hazardous Waste Management State Program Support program uses three workload factors in its allocation formula: (1) the number of hazardous waste management facilities in a state, (2) the amount of waste produced, and (3) state population. Although the formula allocates funds largely based on the two workload factors, the use of a population factor could reduce the allocation of funds to states with greater needs in favor of states with higher populations.
Per capita personal income (PCI) is the fiscal capacity measure most commonly used in federal grant formulas. As defined and compiled by the Department of Commerce, PCI is intended to measure the income received by state residents including wages and salaries, rents, dividends, interest earnings, and income from nonresident corporate business. It also includes an adjustment for the rental value of owner-occupied housing on the ground that such ownership is analogous to the interest income earned from alternative financial investments. Nevertheless, PCI is a relatively poor choice for measuring fiscal capacity primarily because it does not comprehensively measure income. In particular, PCI fails to capture income that is produced in a state, but not realized (such as corporate retained earnings and unrealized capital gains). Furthermore, PCI ignores tax exporting. The income of nonresidents received from activities within a state is considered relevant to a state’s fiscal capacity because taxation of such income (for example, through retail sales, other excise taxes, or corporate income taxes) reduces the burdens on resident taxpayers. On both grounds, PCI is a relatively poor indicator of fiscal capacity. We previously reported that total taxable resources (TTR) is a better measure of fiscal capacity than PCI because it is a more comprehensive indicator of economic income and addresses tax exporting. TTR, developed by the U.S. Department of the Treasury, is an average of PCI and per capita gross state product (GSP). GSP measures all income produced within a state, whether received by residents, nonresidents, or retained by business corporations. By averaging GSP with PCI, the TTR measure covers more types of income than PCI alone, including income received by nonresidents. Thus, the use of a TTR-based measure of fiscal capacity would improve the targeting of program funds to states with lower fiscal capacities. 
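The TTR construction described above reduces to a simple average of the two per capita measures. The dollar figures here are made up for illustration, not actual Treasury data.

```python
# Hypothetical per capita figures for one state and the U.S. (illustrative).
state_pci, us_pci = 20_000.0, 22_000.0   # per capita personal income
state_gsp, us_gsp = 26_000.0, 27_000.0   # per capita gross state product

# TTR averages PCI and per capita GSP, so it captures income produced in
# the state (including income received by nonresidents and retained by
# corporations) as well as income received by residents.
state_ttr = (state_pci + state_gsp) / 2.0
us_ttr = (us_pci + us_gsp) / 2.0

ttr_index = state_ttr / us_ttr   # fiscal capacity relative to the U.S. average
pci_index = state_pci / us_pci
```

In this example the state looks poorer under PCI (about 0.91) than under TTR (about 0.94) because its in-state production is relatively high, which is why the two measures can rank states differently.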
The choice of fiscal capacity measure is particularly important for open-ended grant programs, such as the Foster Care IV-E program and Medicaid, which account for about 40 percent of all grant funds to state and local governments. For open-ended programs, the federal government’s share of total program costs varies according to a state’s fiscal capacity. Currently, such reimbursement is made on the basis of a PCI-based measure called the federal medical assistance percentage (FMAP), which ranges from 50 percent for wealthier states to 80 percent for poorer states. In 1990 testimony on how fairness in the Medicaid formula could be improved, we stated that the differences between TTR and PCI were substantial. As a consequence, the federal share of Medicaid was too low in states whose fiscal capacity was overstated by PCI. Only 12 percent of the formula grants we reviewed contained a factor designed to target more funds to states with higher costs associated with providing services. However, we found that for most of those grants state expenditure data were used to allocate funds instead of a measurement of actual program cost differentials. We have reported that service costs can differ substantially from state to state and that federal grants that do not contain a cost factor purchase fewer services in the states with higher costs. We have also reported that using state expenditure data as a proxy for costs can introduce perverse incentives into an allocation formula. For example, in 1994 we found that the existing funding formula used to allocate funds to states under title III of the Older Americans Act of 1965 did not take into account the sometimes substantial differences in service costs from state to state. Because scant data existed on the actual costs of providing title III services, we recommended modifying the formula to incorporate a broad-based cost index we developed that we believed provided a reasonable proxy for title III service costs.
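A broad-based cost index of the general kind discussed here can be built from economy-wide input-price indexes rather than program expenditures. The inputs, weights, and values below are illustrative assumptions, not the index GAO actually constructed.

```python
def cost_index(wage_idx, rent_idx, labor_share=0.7):
    """Weighted average of input-price indexes, each expressed relative to
    the U.S. average (1.00). Because it is built from broad input prices
    rather than program spending, it does not reward states that simply
    spend more on the program."""
    return labor_share * wage_idx + (1.0 - labor_share) * rent_idx

high_cost_state = cost_index(wage_idx=1.15, rent_idx=1.25)  # costly inputs
low_cost_state = cost_index(wage_idx=0.90, rent_idx=0.85)   # cheap inputs
```

An index above 1.00 flags a state where a federal grant dollar purchases fewer services than average, which is the adjustment a cost factor in a formula is meant to make.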
We noted that a broad-based index was preferable to an index constructed from program expenditures because using a state’s program expenditures could have the perverse effect of rewarding the states that inefficiently administered the program. In its comments on our report, the Administration on Aging voiced concern about using GAO’s broad-based cost index because judgment had been used to construct it. In response, we commented that we believed our methodology for developing the index was reasonable and conservative and that a similar cost measure was currently included in two other federal grant formulas. Likewise, in our report on remedial education programs we cited several problems in the use of per-pupil expenditures, the cost factor used to allocate federal education grant funds. A state’s cost may have been higher because it (1) had a greater fiscal capacity, (2) chose to procure more expensive educational instruction, or (3) gave education a relatively higher funding priority. The formula did not differentiate between the reasons for differences in average state spending. Instead, it allocated fewer funds to the states that either could not or did not spend as much on education. Adams, Robert F. “The Fiscal Response to Intergovernmental Transfers in Less Developed Areas of the United States.” Review of Economics and Statistics, Vol. XLVIII, No. 3 (August 1966), pp. 308-313. Advisory Commission on Intergovernmental Relations. “Federal Grants: Their Effects on State-Local Expenditures, Employment Levels, Wage Rates.” The Intergovernmental Grants System: An Assessment and Proposed Policies, Vol. A-61. Washington, D.C.: February 1977. Bahl, Roy W., and Robert J. Saunders. “Determinants of Changes in State and Local Expenditures.” National Tax Journal, Vol. 18 (March 1965), pp. 50-57. Bahl, Roy W., and Robert J. Saunders. “Factors Associated with Variations in State and Local Government Spending.” Journal of Finance, Vol. 21 (September 1966), pp. 523-534.
Barro, Stephen M. An Econometric Study of Public School Expenditure Variations Across States, No. P-4934. Santa Monica, California: The Rand Corporation, December 1972. Bolton, Roger E. “Predictive Models for State and Local Government Purchases,” The Brookings Model: Some Further Results, J.S. Duesenberry, G. Fromm, L.R. Klein, and E. Kuh, eds. Chicago: Rand McNally & Company, 1969. Booms, Bernard H., and Teh-wei Hu. “Toward a Positive Theory of State and Local Public Expenditures: An Empirical Example.” Public Finance, Vol. XXVI (1971), pp. 419-436. Bowman, John H. “Tax Exportability, Intergovernmental Aid, and School Finance Reform.” National Tax Journal, Vol. XXVII, No. 2 (1974), pp. 163-173. Brazer, Harvey E. City Expenditures in the United States, Occasional paper 66. New York: National Bureau of Economic Research, Inc., 1959. Campbell, Alan K., and Seymour Sacks. Metropolitan America, Fiscal Patterns and Governmental Systems. New York: The Free Press, 1967. Cohn, Elchanan. “Federal and State Grants to Education: Are They Stimulative or Substitutive?” Economics of Education Review, Vol. VI, No. 4 (1987), pp. 339-344. Craig, Steven G., and Robert P. Inman. “Federal Aid and Public Education: An Empirical Look at the New Fiscal Federalism.” Review of Economics and Statistics, Vol. LXIV, No. 4 (1982), pp. 541-552. Ehrenberg, Ronald G. “The Demand for State and Local Government Employees.” American Economic Review, Vol. LXIII, No. 3 (1973), pp. 366-379. Feldstein, Martin. “The Effect of a Differential Add-on Grant: Title I and Local Education Spending.” Journal of Human Resources, Vol. XIII, No. 4 (1977), pp. 443-458. Gramlich, Edward M. “State and Local Governments and Their Budget Constraint.” International Economic Review, Vol. X, No. 2 (June 1969), pp. 163-182. Gramlich, Edward M. “State and Local Budgets the Day After It Rained: Why Is the Surplus So High?” Brookings Papers on Economic Activity. Washington, D.C.: Brookings Institution, No. 1 (1978), pp. 
191-216. Gramlich, Edward M. “An Econometric Examination of the New Federalism,” Brookings Papers on Economic Activity. Washington, D.C.: Brookings Institution, No. 2 (1982), pp. 327-360. Gramlich, Edward M. “The 1991 State and Local Fiscal Crisis,” Brookings Papers on Economic Activity. Washington, D.C.: Brookings Institution, No. 2 (1991), pp. 249-287. Gramlich, Edward M., and Harvey Galper. “State and Local Fiscal Behavior and Federal Grant Policy,” Brookings Papers on Economic Activity, Arthur M. Okun and George L. Perry, eds. Washington, D.C.: Brookings Institution, No. 1 (1973), pp. 15-58. Grubb, W. Norton, and Stephan Michelson. States and Schools: The Political Economy of Public School Finance. Lexington, Massachusetts: D.C. Heath and Company, 1974. Harlow, Robert L. “Factors Affecting American State Expenditures.” Yale Economic Essays, Vol. VII, No. 2 (1967), pp. 263-308. Henderson, James M. “Local Government Expenditures: A Social Welfare Analysis.” Review of Economics and Statistics, Vol. L, No. 2 (1968), pp. 156-163. Holtz-Eakin, Douglas, and Therese J. McGuire. “State Grants-in-Aid and Municipal Government Budgets: A Case Study of New Jersey.” Research in Urban Economics, Vol. VII (1988), pp. 229-265. Horowitz, Ann R. “A Simultaneous-Equation Approach to the Problem of Explaining Interstate Differences in State and Local Government Expenditures.” Southern Economic Journal, Vol. XXXIV, No. 4 (1968), pp. 459-476. Inman, Robert P. “Towards an Econometric Model of Local Budgeting.” National Tax Association Papers and Proceedings (1971), pp. 699-719. Johnson, S.R., and P.E. Junk. “Sources of Tax Revenues and Expenditures in Large U.S. Cities.” Quarterly Review of Economics and Business, Vol. X (Winter 1970), pp. 7-16. Jondrow, James and Robert A. Levy. “The Displacement of Local Spending for Pollution Control by Federal Construction Grants.” AEA Papers and Proceedings, Vol. LXXIV, No. 2 (1984), pp. 174-178. Kurnow, Ernest. 
“Determinants of State and Local Expenditures Reexamined.” National Tax Journal, Vol. XVI, No. 3 (1963), pp. 252-255. Ladd, Helen F., and John Yinger. America’s Ailing Cities: Fiscal Health and the Design of Urban Policy. Baltimore, Maryland: Johns Hopkins University Press, 1989, pp. 215-284. Lindsey, Lawrence, and Richard Steinberg. Joint Crowdout: An Empirical Study of the Impact of Federal Grants on State Government Expenditures and Charitable Donations, National Bureau of Economic Research, Working Paper No. 3226, January 1990. McGuire, Martin. “A Method for Estimating the Effect of a Subsidy on the Receiver’s Resource Constraint: With an Application to U.S. Local Governments 1964-1971.” Journal of Public Economics, Vol. X (1978), pp. 25-44. Meyers, Harry G. “Displacement Effects of Federal Highway Grants.” National Tax Journal, Vol. XL, No. 2 (1987), pp. 221-235. Moffitt, Robert A. “The Effects of Grants-in-Aid on State and Local Expenditures: The Case of AFDC.” Journal of Public Economics, Vol. XXIII (1984), pp. 279-305. Moffitt, Robert A. Has State Redistribution Policy Grown More Conservative? National Bureau of Economic Research, Working Paper No. 2516, February 1988. O’Brien, Thomas. “Grants-in-Aid: Some Further Answers.” National Tax Journal, Vol. XXIV, No. 1 (1971), pp. 65-77. Ohls, James C., and Terence J. Wales. “Supply and Demand for State and Local Services.” Review of Economics and Statistics, Vol. 54, No. 4 (1972), pp. 424-430. Orr, Larry L. “Income Transfers as a Public Good: An Application to AFDC.” American Economic Review, Vol. LXVI, No. 3 (1976), pp. 359-371. Osman, Jack W. “The Dual Impact of Federal Aid on State and Local Government Expenditures.” National Tax Journal, Vol. XIX, No. 4 (1966), pp. 362-372. Phelps, Charlotte D. “Real and Monetary Determinants of State and Local Highway Investment 1951-1961.” American Economic Review, Vol. LIV (1969), pp. 507-521. Pidot, George B., Jr. 
“A Principal Components Analysis of the Determinants of Local Government Fiscal Patterns.” Review of Economics and Statistics, Vol. LI, No. 2 (1969), pp. 176-188. Pogue, Thomas F., and L.G. Sgontz. “The Effect of Grants-in-Aid on State-Local Spending.” National Tax Journal, Vol. XXI, No. 2 (1968), pp. 190-199. Renshaw, Edward F. “A Note on the Expenditure Effect of State Aid to Education.” Journal of Political Economy, Vol. LXVII (April 1960), pp. 170-174. Sacks, Seymour, and Robert Harris. “The Determinants of State and Local Government Expenditures and Intergovernmental Flows of Funds.” National Tax Journal, Vol. XVII, No. 1 (1964), pp. 75-85. Sharkansky, Ira. “Some More Thoughts About the Determinants of Government Expenditures.” National Tax Journal, Vol. XX, No. 2 (1967), pp. 171-179. Smith, David L. “The Response of State and Local Governments to Federal Grants.” National Tax Journal, Vol. XXI, No. 3 (1968), pp. 349-357. Stern, David. “Effects of Alternative State Aid Formulas on the Distribution of Public School Expenditures in Massachusetts.” Review of Economics and Statistics, Vol. LV, No. 1 (1973), pp. 91-97. Stine, William F. “Is Local Government Revenue Response to Federal Aid Symmetrical? Evidence from Pennsylvania County Governments in an Era of Retrenchment.” National Tax Journal, Vol. XLVII, No. 4 (1994), pp. 799-816. Stotsky, Janet G. “State Fiscal Responses to Federal Government Grants.” Growth and Change (Summer 1991), pp. 17-31. Struyk, Raymond J. “Effects of State Grants-in-Aid on Local Provision of Education and Welfare Services in New Jersey.” Journal of Regional Science, Vol. X, No. 2 (1970), pp. 225-235. Weicher, John C. “Aid, Expenditures, and Local Government Structure.” National Tax Journal, Vol. XXV, No. 4 (1972), pp. 573-583. Zampelli, Ernest M. “Resource Fungibility, the Flypaper Effect, and the Expenditure Impact of Grants-In-Aid.” Review of Economics and Statistics, Vol. LXVIII, No. 1 (1986), pp. 33-40. Alm, James. 
“The Optimal Structure of Intergovernmental Grants.” Public Finance Quarterly, Vol. XI, No. 4 (1983), pp. 387-417. Aronson, J. Richard, and John L. Hilley. Financing State and Local Governments. Washington, D.C.: The Brookings Institution, 1986. Barro, Stephen M. Chapter 6, “Local Fiscal Responses to Federal Policies,” The Urban Impacts of Federal Policies: Vol. 3, Fiscal Conditions. Santa Monica: Rand Corporation, April 1978, pp. 131-155. Benton, J. Edwin. “The Effects of Changes in Federal Aid on State and Local Government Spending.” Publius: The Journal of Federalism, Vol. XXII (Winter 1992), pp. 71-82. Benton, J. Edwin. “Federal Aid Cutbacks and State and Local Government Spending Policies.” Intergovernmental Relations and Public Policy, J. Edwin Benton and David R. Morgan, eds. New York: Greenwood Press, 1986, pp. 15-34. Bergstrom, Theodore C., and Robert P. Goodman, “Private Demands for Public Goods.” The American Economic Review, Vol. LXIII, No. 3 (1973), pp. 280-296. Bezdek, Roger H., and Jonathan D. Jones. “Federal Categorical Grants-In-Aid and State-Local Government Expenditures.” Public Finance, Vol. XLIII, No. 1 (1988), pp. 39-55. Borcherding, Thomas E., and Robert T. Deacon. “The Demand for the Services of Non-Federal Governments.” The American Economic Review, Vol. 62, No. 5 (1972), pp. 891-901. Bradford, David F., and Wallace E. Oates. “An Analysis of Revenue Sharing.” The Quarterly Journal of Economics, Vol. LXXXV, No. 3 (1971), pp. 416-439. Chernick, Howard A. “Price Discrimination and Federal Project Grants.” Public Finance Quarterly, Vol. IX, No. 4 (1981), pp. 371-394. Courant, Paul N., Edward M. Gramlich, and Daniel L. Rubinfeld. “The Stimulative Effects of Intergovernmental Grants: Or Why Money Sticks Where It Hits,” Fiscal Federalism and Grants-In-Aid, Peter Mieszkowski and William H. Oakland, eds. Washington, D.C.: The Urban Institute, 1979, pp. 5-21. Deacon, Robert T. 
“A Demand Model for the Local Public Sector.” Review of Economics and Statistics, Vol. VI, No. 2 (1978), pp. 184-192. Doolittle, Fred C. “State Legislatures and Federal Grants: Developments in the Reagan Years.” Public Budgeting & Finance (Summer 1984), pp. 3-6. Doolittle, Fred C. “State Legislatures and Federal Grants: An Overview.” Public Budgeting & Finance (Summer 1984), pp. 7-23. Filimon, Radu, Thomas Romer, and Howard Rosenthal. “Asymmetric Information and Agenda Control: The Bases of Monopoly Power in Public Spending.” Journal of Public Economics, Vol. XVII, No. 1 (1982), pp. 51-70. Fisher, Ronald C. “Income and Grant Effects on Local Expenditure: The Flypaper Effect and Other Difficulties.” Journal of Urban Economics, Vol. XII (1982), pp. 324-345. Fossett, James W. “On Confusing Caution and Greed: A Political Explanation of the Flypaper Effect.” Urban Affairs Quarterly, Vol. XXVI, No. 1 (1990), pp. 95-117. Friedman, Lee S. “Utility Maximization and Intergovernmental Grants: Analyzing Equity Consequences.” Microeconomic Policy Analysis. New York: McGraw Hill, 1984, pp. 97-139. Gabler, L.R., and Joel I. Brest. “Interstate Variations in Per Capita Highway Expenditures.” National Tax Journal, Vol. XX, No. 1 (1967), pp. 78-85. Gold, Steven D., and Ronnie Lowenstein. Federal Aid Cuts, the Balanced Budget Amendment, and Block Grants: Impacts on the States, prepared for the Annual Conference of the National Tax Association, October 1995. Gramlich, Edward M. “Reforming U.S. Federal Fiscal Arrangements,” American Domestic Priorities, John M. Quigley and Daniel L. Rubinfeld, eds. Berkeley and London: University of California Press, 1985, pp. 34-69. Gramlich, Edward M. “The Effect of Federal Grants on State-Local Expenditures: A Review of the Econometric Literature.” Papers and Proceedings of the 62nd Annual Conference of the National Tax Association (1970), pp. 569-593. Gramlich, Edward M. 
Chapter 7, “The Economics of Fiscal Federalism and Its Reform,” The Changing Face of Fiscal Federalism, J.E. Peck and T.R. Swartz, eds. Armonk, New York: Sharpe, 1990, pp. 152-174. Gramlich, Edward M. “Alternative Federal Policies for Stimulating State and Local Expenditures: A Comparison of Their Effects.” National Tax Journal, Vol. XXI, No. 2 (1968), pp. 119-129. Gramlich, Edward M. “Federalism and Federal Deficit Reduction.” National Tax Journal, Vol. XL, No. 3 (1987), pp. 299-313. Gramlich, Edward M. “Intergovernmental Grants: A Review of the Empirical Literature,” The Political Economy of Fiscal Federalism, Wallace E. Oates, ed. Lexington, Massachusetts: Lexington Books, 1977, pp. 219-239. Gramlich, Edward M. “Infrastructure Investment: A Review Essay.” Journal of Economic Literature, Vol. XXXII (September 1994), pp. 1176-1196. Hedge, David M. “Fiscal Dependency and the State Budget Process.” The Journal of Politics, Vol. XLV (1983), pp. 198-208. Hewitt, Daniel. “Fiscal Illusion from Grants and the Level of State and Federal Expenditures.” National Tax Journal, Vol. XXXIX, No. 4 (1986), pp. 471-483. Hines, James R. Jr., and Richard H. Thaler. “Anomalies: The Flypaper Effect.” Journal of Economic Perspectives, Vol. IX, No. 4 (1995), pp. 217-226. Huckins, Larry E., and John T. Carnevale. “Federal Grants-in-Aid: Theoretical Concerns, Design Issues, and Implementation Strategy.” Research in Urban Economics, Vol. VII (1988), pp. 41-62. Inman, Robert P. “Fiscal Allocations in a Federalist Economy: Understanding the ‘New’ Federalism.” American Domestic Priorities: An Economic Appraisal, John M. Quigley and Daniel L. Rubinfeld, eds. Berkeley and London: University of California Press, 1985, pp. 3-33. Inman, Robert P. Chapter 9, “The Fiscal Performance of Local Governments: An Interpretive Review,” Current Issues in Urban Economics, Peter Mieszkowski and Mahlon Straszheim, eds. Baltimore: The Johns Hopkins University Press, 1979, pp. 270-321. Inman, Robert P. 
“Federal Assistance and Local Services in the United States: The Evolution of a New Federalist Fiscal Order,” Fiscal Federalism: Quantitative Studies, Harvey Rosen, ed. Chicago: University of Chicago Press, 1988, pp. 33-77. James, Louis J. “The Stimulation and Substitution Effects of Grants-in-Aid: A General Equilibrium Analysis.” National Tax Journal, Vol. XXVI, No. 2 (1973), pp. 251-265. Luce, Thomas, and Janet Rothenberg Pack. “State Support Under the New Federalism.” Journal of Policy Analysis and Management, Vol. III, No. 3 (1984), pp. 339-358. Man, Joyce Y., and Michael E. Bell. “Federal Infrastructure Grants-in-Aid: An Ad Hoc Infrastructure Strategy.” Public Budgeting and Finance, Vol. 13, No. 3 (1993), pp. 9-22. McGuire, Martin C. “The Analysis of Federal Grants into Price and Income Components.” Fiscal Federalism and Grants-in-Aid, Peter Mieszkowski and William H. Oakland, eds. Washington, D.C.: The Urban Institute, 1979, pp. 31-50. McGuire, Martin C. “An Econometric Model of Federal Grants and Local Fiscal Response.” Financing the New Federalism: Revenue Sharing, Conditional Grants, and Taxation, Wallace E. Oates, ed. Baltimore: Johns Hopkins University Press, 1975, pp. 115-138. Miller, Edward. “The Economics of Matching Grants: The ABC Highway Program.” National Tax Journal, Vol. XXVII, No. 2 (1974), pp. 221-229. Nathan, Richard P. “State and Local Governments Under Federal Grants: Toward a Predictive Theory.” Political Science Quarterly, Vol. 98, No. 1 (Spring 1983), pp. 47-57. Nathan, Richard P., and John R. Lago. “Intergovernmental Relations in the Reagan Era.” Public Budgeting & Finance, Vol. VIII (Autumn 1988), pp. 15-29. Nathan, Richard P., and Fred C. Doolittle, and Associates. Reagan and the States. Princeton, New Jersey: Princeton University Press, 1987. Oates, Wallace E. “Lump-Sum Intergovernmental Grants Have Price Effects,” Fiscal Federalism and Grants-in-Aid, Peter Mieszkowski and William H. Oakland, eds. 
Washington, D.C.: The Urban Institute, 1979, pp. 23-29. Oates, Wallace E. Chapter 5, “Federalism and Government Finance,” Modern Public Finance, John M. Quigley and Eugene Smolensky, eds. Cambridge: Harvard University Press, 1994, pp. 126-161. O’Brien, J. Patrick, and Yeung-Nan Shieh. “Utility Functions and Fiscal Illusion From Grants.” National Tax Journal, Vol. XLIII, No. 2 (1990), pp. 201-205. Rafuse, Robert W., Jr. “A Strategy for Intergovernmental Fiscal Reform in the Remainder of the Eighties.” Research in Urban Economics, Vol. VII (1988), pp. 267-296. Raimondo, Henry J. Chapter 13, “Grants-in-Aid System,” Economics of State and Local Government. New York: Praeger Publishers, 1992. Rosenfeld, Raymond A. “Federal Grants and Local Capital Improvements: the Impact of Reagan Budgets.” Public Budgeting and Finance, Vol. IX, No. 1 (1989), pp. 74-84. Schwallie, Daniel P. “Measuring the Effects of Federal Grants-in-Aid on Total Public Sector Size.” Public Finance Quarterly, Vol. XVII, No. 2 (1989), pp. 185-203. Shroder, Mark. “Approximately Efficient Federal Matching Grants for Subnational Public Assistance.” National Tax Journal, Vol. XLV, No. 2 (1992), pp. 155-165. Slack, Enid. “Local Fiscal Response to Intergovernmental Transfers.” The Review of Economics and Statistics, Vol. LXII, No. 3 (1980), pp. 364-370. Sloan, Frank A. “State Discretion in Federal Categorical Programs: The Case of Medicaid.” Public Finance Quarterly, Vol. XII, No. 3 (1984), pp. 321-346. Stonecash, Jeffrey M. “State Responses to Declining Federal Support: Behavior in the Post-1978 Era.” Policy Studies Journal, Vol. XVIII (Spring 1990), pp. 755-767. Tresch, Richard W. “State Governments and the Welfare System: An Econometric Analysis.” Southern Economic Journal, Vol. XLII, No. 1 (1975), pp. 33-43. U.S. Department of the Treasury, Office of Revenue Sharing. Fiscal Impact of Revenue Sharing in Comparison with Other Federal Aid: An Evaluation of Recent Empirical Findings. 
Washington, D.C.: November 1978. U.S. Department of the Treasury, Office of State and Local Finance. Federal-State-Local Fiscal Relations: Report to the President and the Congress. Washington, D.C.: September 1985. Whitman, Ray D., and Robert J. Cline. Fiscal Impact of Revenue Sharing in Comparison with Other Federal Aid: An Evaluation of Recent Empirical Findings. Prepared for the Office of Revenue Sharing, U.S. Department of the Treasury. Washington, D.C.: The Urban Institute, November 3, 1978. Wilde, James A. “Grants-in-Aid: The Analytics of Design and Response.” National Tax Journal, Vol. XXIV, No. 2 (1971), pp. 143-155. Addressing the Deficit: Updating the Budgetary Implications of Selected GAO Work (GAO/OCG-96-5, June 28, 1996). Highway Funding: Alternatives for Distributing Federal Funds (GAO/RCED-96-6, November 28, 1995). Ryan White Care Act of 1990: Opportunities to Enhance Funding Equity (GAO/HEHS-96-26, November 13, 1995). Department of Labor: Senior Community Service Employment Program Delivery Could Be Improved Through Legislative and Administrative Action (GAO/HEHS-96-4, November 2, 1995). Rural Development: USDA’s Approach to Funding Water and Sewer Projects (GAO/RCED-95-258, September 22, 1995). Deficit Reduction: Opportunities to Address Long-Standing Government Performance Issues (GAO/T-OCG-95-6, September 13, 1995). Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, September 1, 1995). Addressing the Deficit: Budgetary Implications of Selected GAO Work for Fiscal Year 1996 (GAO/OCG-95-2, March 15, 1995). Block Grants: Characteristics, Experience, and Lessons Learned (GAO/HEHS-95-74, February 9, 1995). Older Americans Act: Funding Formula Could Better Reflect State Needs (GAO/HEHS-94-41, May 12, 1994). Medicaid: Alternatives for Improving the Distribution of Funds to States (GAO/HRD-93-112FS, August 20, 1993).
Remedial Education: Modifying Chapter 1 Formula Would Target More Funds to Those Most in Need (GAO/HRD-92-16, July 28, 1992). Maternal and Child Health: Block Grant Funds Should Be Distributed More Equitably (GAO/HRD-92-5, April 2, 1992). Mental Health Grants: Funding Not Distributed According to State Needs (GAO/T-HRD-91-32, May 16, 1991). Medicaid Formula: Fairness Could Be Improved (GAO/T-HRD-91-5, December 7, 1990). Drug Treatment: Targeting Aid to States Using Urban Population as Indicator of Drug Use (GAO/HRD-91-17, November 27, 1990). Substance Abuse Funding: High Urban Weight Not Justified by Urban-Rural Differences in Need (GAO/T-HRD-91-38, June 25, 1991). Substance Abuse and Mental Health: Hold-harmless Provisions Prevent More Equitable Distribution of Federal Assistance Among States (GAO/T-HRD-90-3, October 30, 1989). Block Grants: Proposed Formulas for Substance Abuse, Mental Health Provide More Equity (GAO/HRD-87-109BR, July 16, 1987). Substance Abuse: Description of Proposed State Allotment Grant Formulas (GAO/HRD-86-140FS, September 10, 1986). Local Governments: Targeting General Fiscal Assistance Reduces Fiscal Disparities (GAO/HRD-86-13, July 24, 1986). Highway Funding: Federal Distribution Formulas Should Be Changed (GAO/RCED-86-114, March 31, 1986). Changing Medicaid Formula Can Improve Distribution of Funds To States (GAO-GGD-83-27, March 9, 1983). Proposed Changes in Federal Matching and Maintenance of Effort Requirements for State and Local Governments (GAO/GGD-81-7, December 23, 1980). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. 
Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO examined the federal grant-in-aid system from the perspective of fiscal impact, focusing on the extent to which the federal grant system succeeds in: (1) encouraging states to use federal dollars to supplement rather than replace their own spending on nationally important activities; and (2) targeting grant funding to states with relatively greater programmatic needs and fewer fiscal resources. GAO found that: (1) for the most part, the federal grant system does not encourage states to use federal dollars to supplement rather than replace their own spending on nationally important activities; (2) grants are unlikely to supplement completely a state's own spending; (3) GAO's review and analysis of economists' most recent estimates of substitution suggests that every additional federal grant dollar results in less than a dollar of total additional spending on the aided activity; (4) with the responsibilities of states increasing in the federal system, some observers may view this substitution as a legitimate means of providing states fiscal relief and budgetary flexibility; (5) the Congress has various criteria available to address how such relief should be allocated among the states; (6) GAO's analysis of the extent to which the fiscal relief provided by grants is allocated to states with relatively greater programmatic needs and fewer fiscal resources indicated that federal aid is not targeted to offset 
these fiscal imbalances; (7) GAO's analysis also suggested that the practice of placing constraints in grant formulas to assure all states a minimum amount of funding has contributed to this lack of targeting; (8) these fiscal substitution and targeting results reflect the way in which most of the 633 federal grants GAO examined are designed; (9) a majority of the 87 largest grant programs did not include features to encourage states to use federal funds to supplement rather than replace their own spending; (10) a number of strategies for increasing the fiscal impact of grants are available to the Congress, depending on the value the Congress places on this goal relative to other grant goals and objectives; (11) grant redesign is one strategy for reducing substitution or targeting fiscal relief to states with greater fiscal stress; (12) in redesigning grants, the Congress would need to consider how best to balance grant restrictions needed to reduce substitution against possible decreases in state budgetary flexibility and discretion; (13) if states do not share the federal government's programmatic objectives, high levels of substitution may occur even after design changes; (14) alternatively, the Congress could decide that particular programs no longer represent the best use of scarce federal resources, which would free up budgetary resources that could be used to reduce the deficit or invest in more promising programs; (15) like the first strategy, grant spending cuts also involve tradeoffs; and (16) depending on the size and area of the reductions, states would incur varying degrees of budgetary stress and might face the prospect of increased state taxes, cuts in state programs, or some combination of both. |
Customer orders for stocks and options, including those from individual investors and from institutions such as mutual funds, are generally routed through a broker-dealer and executed at one of the many exchanges located in the United States. After a securities trade is executed, the ownership of the security must be transferred and payment must be exchanged between the buyer and the seller. This process is known as clearance and settlement and is performed by separate clearing organizations for stocks and for options. A depository maintains records of institutional ownership for the bulk of the securities traded in the United States. Banks also participate in the U.S. securities markets by acting as clearing banks that maintain accounts for broker-dealers to accept and make payments for these firms’ securities activities. Payments for corporate and government securities transactions, as well as for business and consumer transactions, are transferred by payment system processors, including those operated by the Board of Governors of the Federal Reserve (Federal Reserve) and private organizations. Virtually all of the information processed is transferred between parties via telecommunications systems; as a result, the securities markets depend heavily on their supporting telecommunications infrastructure.

Although thousands of entities are active in the U.S. securities markets, certain key participants are critical to the ability of the markets to function. Some are more important than others because they offer unique products or perform vital services. For example, markets cannot function without the activities performed by clearing organizations, and in some cases only one clearing organization exists for particular products. In addition, other market participants are critical to the overall market functioning because they consolidate and distribute price quotations or information on executed trades.
Other participants may be critical to the overall functioning of the markets only in the aggregate. For example, if one of the thousands of broker-dealers in the United States is unable to operate, its customers may be inconvenienced or unable to trade, but the impact on the markets as a whole may be limited to lower liquidity or reduced price competitiveness. However, a small number of large broker-dealers account for sizeable portions of the daily trading volume on many exchanges. If several of these large firms were unable or unwilling to operate, the markets might not have sufficient trading volume to function in an orderly or fair way.

Several federal organizations oversee the various securities market participants. SEC regulates the stock and options exchanges and the clearing organizations for those products. In addition, SEC regulates the broker-dealers that trade on those markets and other participants, such as mutual funds, which are active investors. The exchanges also have responsibilities as self-regulatory organizations for ensuring that their participants comply with the securities laws and the exchanges’ own rules. SEC or one of the depository institution regulators oversees participants in the government securities market, but the Department of the Treasury (Treasury) also plays a role. Treasury issues rules pertaining to this market, but SEC or the bank regulators are responsible for conducting examinations to ensure that these rules are followed. Additionally, several federal organizations have regulatory responsibilities over banks and other depository institutions, including those active in the securities markets. The Federal Reserve oversees bank holding companies and state-chartered banks that are members of the Federal Reserve System. The Office of the Comptroller of the Currency (OCC) examines nationally chartered banks.
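The clearance-and-settlement process described earlier, in which a clearing organization nets participants' obligations to transfer securities and payment after trades execute, can be sketched as a simple multilateral netting model. This is a minimal illustration, not any clearing organization's actual system; the firm names, symbol, and figures below are hypothetical.

```python
from collections import defaultdict

def net_obligations(trades):
    """Net each participant's security and cash obligations across a
    batch of executed trades, as a clearing organization might.

    trades: iterable of (buyer, seller, symbol, quantity, price) tuples.
    Returns {participant: {"securities": {symbol: net_qty}, "cash": net_amount}}.
    """
    positions = defaultdict(lambda: {"securities": defaultdict(int), "cash": 0.0})
    for buyer, seller, symbol, qty, price in trades:
        cost = qty * price
        # The buyer gains the securities and owes cash; the seller mirrors this.
        positions[buyer]["securities"][symbol] += qty
        positions[buyer]["cash"] -= cost
        positions[seller]["securities"][symbol] -= qty
        positions[seller]["cash"] += cost
    return positions

# Two offsetting trades between hypothetical firms:
trades = [
    ("firm_a", "firm_b", "XYZ", 100, 10.0),  # firm_a buys 100 XYZ at $10.00
    ("firm_b", "firm_a", "XYZ", 40, 10.5),   # firm_b buys 40 XYZ at $10.50
]
net = net_obligations(trades)
print(net["firm_a"]["securities"]["XYZ"], round(net["firm_a"]["cash"], 2))  # 60 -580.0
```

The point of the sketch is why clearing organizations are single points of failure: every participant's net position runs through this one calculation, so the markets cannot settle without it.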
Critical organizations and other trading and clearing firms improved their readiness for future terrorist attacks or other disasters in several ways, but some still remained at greater risk of disruption than others. For example, since our 2003 report, all of the seven critical organizations we reviewed reduced risks by adding physical barriers around their facilities, enhancing protection from hackers, or establishing geographically diverse backup facilities. However, several organizations still faced an increased risk of disruption from potential future attacks, either because of the location of their backup facilities or because they have not taken steps to better ensure the availability of critical staff. The key broker-dealers and banks that conduct significant trading and clearing activities that we reviewed had also improved their business continuity capabilities, but some were still at greater risk of disruption than others due to the concentration of key trading staff in single locations. Working together through industry associations, market participants also improved their ability to withstand future disasters by, for example, establishing crisis command centers. Since our previous report, almost all of the critical organizations took steps to improve their physical and electronic security. Physical security encompasses measures such as installing physical barriers around buildings, screening people and objects, and using employee and visitor identification systems. We assessed the organizations’ physical security using standards and best practices developed by the Department of Justice. For example, as a deterrent to potential attacks, one organization increased the number of armed security officers that protect the perimeter of its facility. These security personnel are also now clad in military-style uniforms and possess greater firepower than they did previously. 
In addition, this organization installed additional video cameras to allow it to monitor more locations around its facility. Another organization we reviewed had installed new perimeter barriers and X-ray equipment outside of its facility to better protect its lobby and other interior spaces. Four of the critical organizations we reviewed still had physical security weaknesses, such as an inability to control vehicular traffic around their primary facilities, which put them at greater risk of disruption from potential physical attacks than other organizations. However, each of these four organizations also had geographically diverse backup facilities capable of conducting some or all of the organization’s critical operations, mitigating the effect of a disruption at the primary facility.

All seven organizations had also implemented countermeasures to mitigate chemical, biological, and radiological (CBR) threats. For example, each organization had identified its facilities’ outdoor air intakes, which can be highly vulnerable to CBR attacks, and took steps to prevent access to them. Such steps included installing locks, video cameras, security lighting, and intrusion detection sensors in order to establish a security zone around the air intakes. The organizations also took actions to prevent public or unauthorized access to areas that provide access to centralized mechanical systems, including heating, ventilation, and air conditioning equipment. Finally, some organizations also isolated their lobbies, mail processing areas, and loading docks. An effective physical security program includes periodic testing of controls such as reviews of security guard performance outside of normal business hours, attempts to bring in prohibited items (such as weapons), and review of employees’ use of access to restricted and sensitive areas.
Periodic monitoring of such controls not only provides a valuable means of identifying areas of noncompliance or previously undetected vulnerabilities, but can also serve to remind employees of their security responsibilities and demonstrate management’s commitment to security. Each of the organizations we visited performed these types of tests on a periodic basis. The critical organizations also continued to invest in information security measures to reduce the risk that their operations would be disrupted by electronic attacks. Electronic attacks can come in different forms and include attacks in which persons (such as hackers) attempt to gain unauthorized access to a specific organization or system or attacks by computer programs or codes, such as viruses or worms. We applied criteria from the Federal Information System Controls Audit Manual, as well as other federal guidelines and industry best practices, to assess the organizations’ information security. For more information on the scope of our assessment, please see appendix I. All of the organizations we reviewed enhanced protections against unauthorized outside access to their computer systems. For example, one organization increased the coverage of its intrusion detection and prevention systems to better monitor and address attacks by outsiders. Some of the organizations we reviewed also had invested in more secure technologies. For example, one organization put in place a new multitiered external network, which provides multiple layers of security. During our reviews, we also identified and discussed with these organizations additional actions they could take to further improve their information security. All the critical organizations had also further increased their ability to recover from attacks or other disasters since our 2003 report, but some still had limitations in their business continuity capabilities that increased their risk of disruption. 
Since our 2003 report, these organizations also have more specific standards against which to measure their capabilities because federal financial regulators have issued business continuity guidelines and principles that set expectations for these organizations. These regulatory guidelines direct the organizations to establish geographically diverse backup capabilities and state that the operation of a backup site should not be impaired by a wide-scale evacuation at the primary site or the inaccessibility of the staff. Although the guidance does not specify a minimum distance between primary and backup facilities, regulators state that such facilities should not rely on the same infrastructure components, such as transportation, telecommunications, water supply, and power supply. As of May 2004, four of the seven critical organizations had geographically dispersed backup sites that their officials indicated were capable of conducting the organizations’ critical operations. Each backup site was located at a considerable distance from the organizations’ primary sites—ranging from almost 300 miles to over 1,100 miles. However, as of June 2004, the remaining three critical organizations that we noted in our previous report as lacking geographic separation between their primary and backup facilities did not have geographically diverse backup facilities capable of assuming all critical operations. Instead, these three organizations’ current backup facilities were located within the same geographic area as their primary sites (although, as discussed below, one organization had a geographically diverse facility that it could use to run some of its critical applications). Officials at one organization said that these facilities do not depend on the same infrastructure components as their primary facilities, although in some cases they would depend on the same transportation system.
Although having backup sites does reduce the risk that these organizations’ operations would be disrupted in future attacks, both primary and backup facilities could be affected by wide-scale events, and thus, these organizations faced an increased level of risk of operational disruptions. However, officials at the three critical organizations that lacked geographically dispersed backup sites were reducing the risks resulting from the proximity of their primary and backup facilities. One organization established a geographically diverse backup site, and as of June 2004, had the ability to run some of its critical operations from that site. Officials at this organization anticipated being able to conduct all of its critical operations from the new site by the end of 2005. To reduce the risk arising from certain types of events, the other two organizations had begun work to establish management systems that would allow them to operate the hardware and systems at their primary sites from geographically remote locations. Federal financial regulators have stated that having a backup site that is fully capable of operating all critical functions is necessary for organizations to ensure that they can meet regulators’ recovery objectives. (We discuss recovery objectives more fully later in this report.) However, these organizations’ remote management capabilities, which both intended to have in place by the end of 2004, would allow them to continue operating under disaster scenarios in which their facilities were not damaged but were rendered physically inaccessible for public safety or other reasons. As of August 2004, one of these two organizations had a plan to implement a geographically diverse backup site by April 2005. The other organization was considering alternatives for being able to recover its operations in geographically dispersed locations but had not developed any definite plans. 
Additionally, at the time we conducted this review, six of the seven organizations had arrangements in place that appeared to ensure the availability of critical staff. Organizations can also enhance business continuity capabilities following a disaster by implementing plans to ensure that key staff remain available if the staff who perform critical activities at a primary facility become incapacitated. For example, one organization rotated its critical staff among multiple locations, ensuring that all such staff were never in the same location at the same time. However, one of the seven organizations had not developed a formal plan for ensuring the availability of key staff. Officials at this organization said they believed that the staff necessary to conduct critical operations were never all at the primary facility at any one time, for a variety of reasons including vacations and business travel. However, they had no formal plan to ensure that sufficient numbers of trained staff would be available should staff at the primary facility be lost. In July 2004, officials from this organization said they were seeking to have such a plan in place in the near future. This particular organization already faced an increased risk of disruption because it was also one of the three organizations that did not yet have a geographically diverse backup facility. While this organization had improved its physical security, which can help protect an organization’s primary facility as well as its critical staff, it was still at greater risk of disruption than other critical organizations.
Business continuity guidelines identify five telecommunications-related practices that organizations can follow to improve the continuity of their critical telecommunications services: developing and maintaining an inventory of existing telecommunications services, identifying those services critical to continued operations, identifying the risks to those services, developing strategies and solutions to mitigate those risks, and testing those risk mitigation and continuity strategies. Specifically, the critical organizations we reviewed inventoried their voice and data telecommunications services and identified those services critical to their operations. The organizations also took actions to identify and mitigate their respective risks. For example, to mitigate the risk that a single failure point in their internal networks might disrupt their operations, all organizations linked their facilities to public networks at two diverse points on their premises and distributed those connections throughout their facilities through redundant cabling. To limit their exposure to disruptions in public network facilities, some organizations also subscribed to services that linked their facilities to the public network at multiple points and also linked them to services that would reroute their connections around failure points that might occur in the public networks. To improve service recoverability, six of the seven organizations were also taking advantage of a federal telecommunications priority program that would provide increased priority for restoration of the key telecommunications circuits in their inventories in the event of a disruption. These critical organizations were also testing their own abilities to recover their communications operations during a disaster and to communicate with key customers and organizations. 
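The five telecommunications-related practices above, inventorying services, identifying the critical ones, identifying risks, mitigating them, and testing, can be reduced to a simple gap check over a circuit inventory. This is an illustrative sketch only; the circuit names and the two safeguards chosen (route diversity and enrollment in a restoration-priority program, both drawn from the practices described in this report) are hypothetical simplifications of what an organization would actually track.

```python
def continuity_gaps(circuits):
    """Flag critical circuits whose identified risks are not yet mitigated.

    circuits: list of dicts with keys:
      name, critical (bool), diverse_route (bool), priority_restoration (bool)
    Returns the names of critical circuits lacking either safeguard.
    """
    gaps = []
    for c in circuits:
        # A critical circuit should have both a diversely routed path and
        # priority restoration; anything less is a continuity gap to address.
        if c["critical"] and not (c["diverse_route"] and c["priority_restoration"]):
            gaps.append(c["name"])
    return gaps

# Hypothetical inventory for one organization:
inventory = [
    {"name": "trading-floor-voice", "critical": True,
     "diverse_route": True, "priority_restoration": False},
    {"name": "settlement-data", "critical": True,
     "diverse_route": True, "priority_restoration": True},
    {"name": "office-internet", "critical": False,
     "diverse_route": False, "priority_restoration": False},
]
print(continuity_gaps(inventory))  # ['trading-floor-voice']
```

A real program would cover far more risk factors, but the structure is the same: the inventory and criticality steps define the population, and the mitigation and testing steps close the gaps the check surfaces.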
Further, within their overall continuity strategies, most critical organizations were either establishing or continuing to operate out-of-region telecommunications facilities that would, among other things, reduce the risk that a failure in local telecommunications services at any one location would pose a risk to their continuing operations. Finally, given that most organizations had limited resources, effectively managing operations risks involved balancing additional protections for facilities, personnel, and systems with enhancing business continuity capabilities. As part of this process, organizations take into consideration that enhancing capabilities in one area can help mitigate vulnerabilities in another area. For example, as noted previously, four of the critical organizations we reviewed had weaknesses in their physical security but also had geographically diverse backup facilities capable of conducting some or all of the organization’s critical operations, mitigating the effect of a disruption at the primary facility. That is, if a physical security weakness allowed a disruption to occur at the organization’s primary facility, operations could be transferred to a backup facility. Similarly, one organization that had not yet implemented a geographically diverse backup facility had made significant improvements to the physical security protections in place at its primary facility, which can help reduce the likelihood of that facility becoming incapacitated by potential physical attacks. The trading firms with whom we spoke—eight trading firms, including five large broker-dealers and three banks whose activities represent a significant portion of the total trading and clearing volume on U.S. markets—also took steps to improve their recovery capabilities, but some still faced increased risk of disruption. The smooth functioning of U.S. securities markets also depends on the ability of trading firms to conduct trading and clear and settle their transactions. 
In our 2003 report, we noted that because of the considerable efforts required for broker-dealers to restore operations, insufficient liquidity existed to open the markets during the week of the September 2001 attacks. For example, several large broker-dealers had not invested in backup facilities and had to recreate their trading operations at new locations; others needed to improve their business continuity capabilities for telecommunications. All of the firms we spoke with during this review said they had backup data centers capable of running critical applications and also had alternate locations out of which key staff could operate if the primary facilities should become unusable. For example, to address the potential for a region-wide disruption in New York City, one firm was developing a geographically diverse backup center. Another firm improved its ability to ensure the availability of critical staff by dividing key technical and business staff between two separate locations. All of the firms also took steps to improve their ability to retain telecommunications capabilities in the event of a disruption. For example, all five of the broker-dealers with whom we spoke had begun using the Secure Financial Transaction Infrastructure, a private telecommunications network linking financial market participants. Four of the broker-dealers and all three of the banks also said they were required to meet federal regulatory goals for the recovery of their clearing and settlement operations and that they were taking steps that would allow them to meet those goals within the recommended time frames. However, four of these firms were at greater risk of a disruption to their trading operations than other firms because of the concentration of key trading staff in a single location at the same time. 
Each of these firms did have alternate locations out of which key trading staff could work, which would allow them to recover their trading activities if their primary site were damaged or inaccessible. However, officials at these firms said that if the trading staff at the primary site were incapacitated, they would either not be able to resume trading quickly enough to meet regulators’ goal of recovering trading activity on a next-day basis, or if able to resume trading, they would not be able to trade at normal capacity. For example, officials at two firms said that if they were to lose their trading operations staff, it would likely take several weeks to reconstitute their trading operations, even using staff from other locations. Officials at one of these firms said that replacing highly skilled trading staff with inexperienced staff could put the firm’s capital at risk and that while they might eventually reconstitute their trading operations, they would likely exit the market for an indefinite period of time. Although officials at both of these firms said they recognized that they faced increased risk, they said at this point, the decreased efficiency and increased costs that would be associated with splitting or rotating these staff were viewed as too great, compared with the potential risk of disruption. In addition to taking actions individually, securities market participants also have worked jointly to improve the readiness of the financial sector for potential future attacks. One of the weaknesses we noted in our 2003 report was that some organizations had not completely tested their business continuity capabilities, and some also lacked sufficient connectivity to the backup sites of other organizations. To increase the industry’s overall readiness, the Securities Industry Association (SIA), which represents over 600 of the broker-dealers active in U.S. markets, has been coordinating an industry-wide testing project since September 2002. 
In the first phase of the project, broker-dealers tested connections from their backup facilities to the core clearing and settlement organizations and verified that they could correctly send and receive information. The second phase of the project will involve broker-dealers, exchanges, and other securities market participants in exercises that will simulate regional power and telecommunications outages. During the exercises, participants will be expected to conduct critical operations from an alternative location as well as test connectivity and communications capabilities. Although testing took longer than originally envisioned, SIA substantially completed the first phase by June 2004. According to SIA officials, smaller firms that did not test as quickly as others contributed to the delay. Also according to SIA staff, the more than 110 firms that completed at least part of the first phase of testing represented over 80 percent of broker-dealer trading activity, and nearly all of the 25 largest firms had completed most or all parts of this testing. Further, SIA conducted a disaster simulation exercise—involving key industry participants as well as SEC—in May 2004 to help better prepare for the second phase of testing, which was scheduled to begin in the third quarter of 2004.
SIA also placed a representative at the New York City Office of Emergency Management, an office that acts as an interagency coordinator in partnership with local, state, federal, and private entities to provide comprehensive emergency response, hazard planning and disaster mitigation to New York City. According to SIA officials, they activated the SIA command center during the August 2003 blackout and during Hurricane Isabel in September 2003, allowing them to test and validate the functioning of the command center. In addition, the trade association that represents firms active in bond trading, the Bond Market Association, also took action to improve its members’ response to future crises. According to organization officials, this association created a structure for coordinating the response of participants in the fixed-income securities markets. The association would communicate with its members through one of its standing committees regarding the condition of the fixed-income securities markets and the potential opening and closing of those markets. In addition, the association’s committee would share information and coordinate its actions with the SIA command center. Finally, information regarding business continuity practices and potential threats to the industry has been shared with market participants. For example, SIA collected and distributed business continuity best practices to its members, established subcommittees to study business continuity-related issues, and conducted conferences to share and foster discussion of these issues in the securities industry. Also, Treasury designated another organization, the Financial Services Sector Coordinating Council (which comprises representatives from private firms in the financial industry) as the private-sector coordinator for critical infrastructure protection for the banking and finance sector. 
In particular, this council, along with SIA and the American Bankers Association, has supported and promoted use by the financial sector of the Financial Services Information Sharing and Analysis Center (FS/ISAC), a mechanism to gather, analyze, and share information on threats, incidents, and vulnerabilities faced by the financial sector. The council also has been participating in educational and outreach efforts in conjunction with the Financial and Banking Information Infrastructure Committee, which coordinates critical infrastructure protection among federal financial regulators. The September 2001 terrorist attacks highlighted the critical importance of resilient telecommunications services for the continued operation of U.S. financial markets. The resulting damage disrupted telecommunications service to thousands of businesses and residences, and some firms learned that their services were not as robust as they believed prior to that event. Since the 2001 terrorist attacks, telecommunications groups and carriers and financial market participants have worked to improve the resiliency and the recoverability of telecommunications services in the event of future disruptions. As we described in our 2003 report, the 2001 terrorist attacks resulted in significant damage to telecommunications facilities, lines, and equipment. The loss of telecommunications service as well as damage to power and transportation infrastructure delayed the reopening of the markets. Much of the disruption to voice and data communications services throughout lower Manhattan, including the financial district, occurred when one of the buildings in the World Trade Center complex collapsed into an adjacent Verizon communications center at 140 West Street, which served as a major local communications hub within the public network. Approximately 34,000 businesses and residences in the surrounding area lost services.
The loss of this facility also resulted in disruptions to customers in other service areas because other telecommunications carriers had equipment colocated in 140 West Street that linked their networks to Verizon, and considerable amounts of telecommunications traffic that originated and terminated in other areas also passed through this location. AT&T's local network service in lower Manhattan was also significantly disrupted following the attacks. The attacks also highlighted the difficulties of ensuring that the telecommunications services required to support critical financial market operations could withstand the effects of network disruptions. One of the primary ways that users of telecommunications services try to ensure that their services will not be disrupted is to use diverse telecommunications facilities to support their needs, including diversely routed lines and circuits. These steps are necessary to ensure that damage to any single point in one communications path does not cause all services to fail. However, ensuring that telecommunications carriers actually maintain diverse telecommunications services is a long-standing financial industry concern. For example, a December 1997 report prepared by the President's National Security Telecommunications Advisory Committee (NSTAC) noted, "despite assurances about diverse networks from the carriers, a consistent concern among the financial services industry was the trustworthiness of their telecommunications diversity arrangements." The ongoing operation and maintenance of network facilities can itself pose a challenge to ensuring diversity of services. To improve the reliability and efficiency of their networks, telecommunications carriers can change the physical network facilities they use to route circuits in a process they call "grooming." This process can result in a loss of diversity over time, however, if diverse services are rerouted onto or through the same facilities.
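The diversity problem described above is, at bottom, a check for shared physical facilities between supposedly independent circuit routes. The sketch below illustrates the concept only; the circuit routes and facility names are invented for the example, and real carrier inventory systems are far more complex.

```python
# Hypothetical illustration of route-diversity verification.
# A "route" is modeled as the ordered list of physical facilities
# (central offices, conduits, hubs) a circuit passes through.

def shared_facilities(route_a, route_b):
    """Return the physical facilities two circuit routes have in common.

    Two circuits are physically diverse only if this set is empty:
    any shared facility is a single point of failure for both.
    """
    return set(route_a) & set(route_b)

primary = ["CO-A", "conduit-1", "hub-West", "CO-B"]
backup  = ["CO-A", "conduit-2", "hub-East", "CO-C"]

print(sorted(shared_facilities(primary, backup)))          # ['CO-A']

# After "grooming," the carrier reroutes the backup circuit through
# hub-West for efficiency; diversity is silently lost:
groomed_backup = ["CO-A", "conduit-2", "hub-West", "CO-C"]
print(sorted(shared_facilities(primary, groomed_backup)))  # ['CO-A', 'hub-West']
```

The second check shows why the report calls for ongoing monitoring: a route that was diverse when provisioned can stop being diverse after routine network maintenance, without any visible change in service.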
For example, as our 2003 report noted, many financial firms that thought they had achieved telecommunications service diversity still experienced service disruptions as a result of the September 2001 attacks. Some of these firms indicated that although they had been assured, at the time they first acquired those services, that their communications circuits flowed through physically diverse paths, their service providers rerouted some circuits over time without their knowledge, eliminating the assurance of diversity and leaving the firms more vulnerable to disruption. However, a 2004 NSTAC report noted that carriers would have to follow labor-intensive, manual processes to ensure route diversity and monitor that condition on an ongoing basis. NSTAC also reported that guaranteeing that circuit routes would not be changed could actually make an organization's service less reliable because its circuits could lose the benefit of networking technologies that automatically reroute circuits in the event of facility failures. Responding to the challenges of maintaining diversity, one financial market participant has acted to improve the resiliency of the telecommunications services supporting the financial industry. In January 2003, the Securities Industry Automation Corporation (SIAC) began operating its own private network, known as the Secure Financial Transaction Infrastructure (SFTI), to provide more reliable and "survivable" private communications services linking exchanges, clearing organizations, and other financial market participants. The information that travels on this network includes orders to buy and sell stocks on the New York and American stock exchanges as well as information needed to clear and settle these transactions. SFTI was designed to overcome several of the challenges in attaining continual resiliency in telecommunications services.
For example, to ensure redundancy and eliminate single points of failure, SFTI employs redundant equipment throughout and carries data traffic over redundant fiber-optic rings whose routes are geographically and physically diverse. To access the network, users are required to connect to two or more of the eight SFTI access nodes located in Boston, Chicago, and the New York City metropolitan area. Therefore, if service is disrupted at one access node, service can still be obtained through an alternate node. Further, users can access SFTI in various ways, including obtaining a direct connection to the SFTI access nodes or connecting to one of four financial "extranet" service providers that operate their own telecommunications networks and also link to the SFTI access nodes. Some customers may choose to use a combination of both approaches. To further enhance diversity throughout this private network, SIAC has contracted for auditable route diversity for the SFTI network. Because SIAC manages all SFTI facilities, it can also control all the grooming that takes place among the lines within the New York regional segment of this network. In addition, SIAC established a remote out-of-region network operations center that can manage network operations in the event of any disruption to its own New York area-based operations. The financial industry has responded positively to SFTI since its implementation. For example, according to SIAC, financial industry associations, including SIA, the Bond Market Association, and the Investment Company Institute, which represents mutual funds, have all supported use of SFTI for their respective members. Moreover, NYSE, the American Stock Exchange, and the Consolidated Tape Authority, which oversees the systems that distribute stock quotes and completed trade information for the stock exchanges, expect that all of their participating member firms will be using SFTI to connect to their trading services as of December 2004.
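The two-node access rule described above can be expressed as a simple compliance check: a participant survives the loss of any single access node only if it connects to at least two distinct nodes. The node identifiers and firm connection lists below are hypothetical; the report states only that there are eight nodes in Boston, Chicago, and the New York City metropolitan area.

```python
# Sketch of the SFTI access-node redundancy rule (hypothetical node names).
SFTI_NODES = {"Boston-1", "Chicago-1", "NYC-1", "NYC-2",
              "NYC-3", "NYC-4", "NYC-5", "NYC-6"}  # eight nodes, per the report

def meets_redundancy_rule(connections, minimum=2):
    """A participant is compliant if it reaches at least `minimum` distinct nodes."""
    return len(set(connections) & SFTI_NODES) >= minimum

# Two distinct nodes: the loss of either one leaves a working path.
print(meets_redundancy_rule(["NYC-1", "Boston-1"]))  # True

# One node: a single node failure isolates the participant.
print(meets_redundancy_rule(["NYC-2"]))              # False
```

Note that direct connections and connections through the four financial "extranet" providers both ultimately terminate at these access nodes, so the same distinct-node count applies either way.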
As of June 2004, SIAC has signed up more than 600 customers for this network. Federal and local government entities have also taken steps to help the financial industry in preparing for and recovering from possible future disruptions to the telecommunications infrastructure. First, two presidential advisory committees have taken steps that may enhance the security and continuity of telecommunications services supporting the financial industry. The Network Reliability and Interoperability Council (NRIC), which is a group of telecommunications carrier executives that advises the Federal Communications Commission, has identified existing and new best practices that, if implemented, could help carriers improve the security of their facilities and improve recovery of services after attacks or disruptions. NRIC addressed such matters as business continuity planning, physical security, emergency operations and response, and other operational procedures. Further, NSTAC, which had also studied diversity issues, recommended that the federal government support research and development activities on resiliency, diversity, and alternative technologies. Additionally, the federal government sought to increase financial industry participation in federal programs that could enhance the recoverability of disrupted services. Specifically, the Department of Homeland Security's (DHS) National Communications System (NCS) promoted participation in its Telecommunications Service Priority (TSP) program. TSP allows financial market participants to register their key telecommunications circuits for priority restoration in the event of a crisis. Financial market participants are sponsored for registration in this program by their regulatory agency.
According to NCS officials, the financial industry has made greater use of the TSP program, as there are now about 4,100 financial organization circuits registered in TSP for priority restoration; more than 3,500 of those were registered since June 2002. Further, to improve the recoverability of SFTI, the Federal Reserve worked with SIAC to ensure that all SFTI access lines were registered for TSP priority restoration as those circuits were installed. Federal financial regulators also have been working with carriers to more closely examine the diversity challenge and identify potential management solutions. In a recently initiated pilot project, the Federal Reserve has been working with the Alliance for Telecommunications Industry Solutions to examine the diversity of circuits supporting Federal Reserve networks. The project’s goal is to develop an efficient, affordable way to document and maintain routing diversity using those circuits as a baseline. According to Federal Reserve and Treasury officials, this exercise could yield a model approach for achieving assured diversity, improve the processes required to do so, and provide a better understanding of the associated costs. Finally, New York City officials have enhanced their ability to monitor and coordinate infrastructure recovery efforts with local carriers. City officials recently revised their Mutual Aid Restoration Consortium (MARC) agreement, which governs monitoring and coordination of restoration actions between telecommunications carriers and city officials in the event of service outages. City officials invoked this agreement in the aftermath of the September 2001 attacks to ensure that essential city government offices and operations would have adequate telecommunications service and to aid coordination of infrastructure recovery efforts by carriers operating in the city. 
More recently, the MARC agreement proved effective during the August 2003 blackout, in which teleconferences were used to identify and communicate urgent diesel fuel needs of carriers and to coordinate other critical assistance to share power generators and network facilities. Lessons learned from such incidents have been addressed in the revised MARC agreement. Telecommunications carriers are also acting to improve the resiliency of their networks. First, those carriers rebuilding facilities that were damaged or lost in the attacks have been replacing these facilities with designs that provide greater diversity to their infrastructure in lower Manhattan. For example, to avoid single points of failure in its network, Verizon redesigned its network to minimize circuits that only pass through a switching facility on their way to other termination points. This should reduce the potential for service in one area to be lost when damage occurs to facilities in other areas. In addition, Verizon has used more resilient and physically diverse fiber-optic systems within lower Manhattan, which may also provide alternate network access capabilities at strategic locations. Similarly, as part of its own restoration effort, AT&T officials said they rebuilt two central office facilities at more geographically diverse locations and upgraded their fiber-optic networks. Telecommunications carriers also reported that they were improving their own business continuity plans to better ensure their ability to recover after a disaster. For example, officials at both Verizon and MCI said they had reexamined their continuity plans and developed new recovery strategies to improve their continuity capabilities. In addition, officials at AT&T informed us that they were continuing to conduct quarterly network disaster recovery tests at different locations throughout the United States that simulate the recovery of damaged switching facilities.
Finally, telecommunications carriers have tried to increase telecommunications resiliency by offering additional services to their customers, including financial market participants. As we described in our 2003 report, carriers offer various services that can improve the reliability and recoverability of existing telecommunications services. For example, carriers offer fiber-optic networks to provide more reliable access to public networks; services to redirect their switched telecommunications services, such as voice calls, to another business location; and alternative network connectivity solutions such as high-bandwidth, point-to-point radio connectivity to another location or network node. Since our 2003 report, federal financial regulators, including SEC, have identified vulnerabilities, participated in tests and exercises, and developed recovery goals and business continuity guidelines to improve the preparedness of securities markets for terrorist attacks and other disasters. For example, banking and securities regulators have issued joint guidance providing recovery goals for market participants that perform critical clearance and settlement activities. Partly in response to a recommendation in our 2003 report, SEC also has issued guidance providing goals for trading activities to resume on securities exchanges. However, SEC has not developed a complete assessment of the securities markets' readiness to resume trading after major disruptions, which increases the risk that the reopening of the markets could be delayed. Since our 2003 report, federal financial regulators have participated in exercises that assess readiness for potential disasters. For example, Treasury, the Federal Reserve, SEC, and the Commodity Futures Trading Commission have taken part in several disaster recovery exercises sponsored by DHS, including the TOPOFF exercises, which simulated physical attacks, and the Livewire exercise, which simulated a cyber attack.
In addition, as part of the Financial and Banking Information Infrastructure Committee, the federal financial regulators have conducted an analysis of financial sector vulnerabilities, including those involving dependencies on other critical infrastructures, such as telecommunications and power. Financial regulators have also been involved in various information sharing efforts. For example, Treasury has supported and promoted the FS/ISAC, which, as described earlier, gathers, analyzes, and shares information on threats, incidents, and vulnerabilities faced by the financial sector. In 2004, Treasury provided additional funding to FS/ISAC to allow it, among other things, to expand its membership and services to even the smallest financial institutions, such as community banks. Treasury has also been involved, along with the Federal Deposit Insurance Corporation, in conducting educational outreach events in various cities on sound business continuity practices. Treasury is also working with DHS to continue developing "Chicago First," an emergency preparedness program designed to coordinate activities among financial sector participants and federal, state, and local government officials. Treasury is promoting this program as a model for other cities to implement. Banking and securities regulators have also taken steps since our 2003 report to assess the efforts of banks and securities firms to withstand and recover from disasters. For instance, in March 2003 the Federal Financial Institutions Examination Council (FFIEC), which issues joint regulatory and examination guidance used by financial regulators in overseeing financial institutions such as banks and credit unions, issued a Business Continuity Planning Booklet that provided updated guidance and examination procedures on this topic.
In the booklet, FFIEC requires depository institutions to develop business continuity plans that will effectively minimize service disruptions and financial loss, test the plans at least annually, and subject the plans to independent audit and review. In addition, it asks institutions to consider in their planning the potential for wide-area disasters and the resulting loss or inaccessibility of staff, as well as the extent to which their institution is dependent upon other financial system participants and service providers. According to one financial regulator responsible for conducting examinations based on these guidelines, an informal analysis showed that larger financial institutions were doing better than smaller ones in meeting the guidelines. As a result, officials at that regulator said they had begun developing guidance to help smaller institutions better meet the business continuity guidelines. SEC has also conducted examinations of broker-dealers that included reviews of information security and business continuity efforts. For example, SEC’s Office of Compliance Inspections and Examinations (OCIE) administers SEC’s inspection program for broker-dealers, including monitoring broker-dealers’ compliance with Regulation SP, which deals with the privacy of consumer financial information. As part of their review of broker-dealers’ ability to protect consumer information, OCIE staff review those organizations’ information security capabilities. In addition, since our 2003 report, OCIE has begun incorporating into its broker-dealer examinations the business continuity practices presented by federal financial regulators in an interagency paper (described in the following paragraph). Federal financial regulators also have jointly focused on continuity issues to reduce the risk of disruption for the financial markets from terrorist attacks or other disasters. 
In April 2003, securities and banking regulators issued the Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System. Issued by SEC, the Federal Reserve, and the OCC, this interagency paper identifies business continuity practices that core clearing and settlement organizations and firms that play a significant clearing or settlement role in critical financial markets are expected to follow. Core organizations include clearing organizations responsible for securities and other financial products and payment system processors. In addition to these organizations, the interagency paper also applies to financial institutions, including banks and broker-dealers, that conduct significant amounts of trading and clearing activities. If these firms were unable to clear and settle the outstanding trades that they or their customers conducted, they could create payment problems for other participants in the markets. By proposing that these organizations and firms follow the practices identified in the interagency paper, regulators expect to minimize the immediate systemic effects of a wide-scale disruption by setting goals for key payment and settlement systems to resume operation promptly following a wide-scale disaster, and for major participants in those systems to recover sufficiently to complete pending transactions. In the interagency paper, the regulators outline various practices for organizations and firms to follow and set goals related to resumption of their clearing and settlement activities. First, these organizations and firms are expected to identify the clearing and settlement activities that they perform in support of critical financial markets. They are also expected to determine appropriate recovery and resumption objectives for those activities. The regulators state that, at minimum, the organizations and firms are expected to be able to recover within the same business day.
To realistically achieve this, the regulators expect that these organizations and firms would maintain geographically dispersed resources to meet their recovery and resumption objectives. Specifically, to be consistent with best practices, backup facilities for clearing functions should be as far away from the primary facility as necessary to avoid being subject to the same set of risks as the primary facility. The backup facilities also should not rely on the same infrastructure, such as power and telecommunications, as the primary facility, and the operation of the backup facility should not be impaired by a wide-scale evacuation at, or the inaccessibility of staff that service, the primary site. In addition, the regulators expect that the organizations and firms would engage in routine use or testing of their recovery and resumption arrangements. The regulators also included deadlines for achieving continuity goals in the interagency paper. For example, core clearing and settlement organizations are expected to implement the practices the paper advocates by the end of 2004. Significant banks and broker-dealers are expected to have implemented such practices by April 2006. According to banking and securities regulatory officials, they are monitoring the progress that organizations and firms are making in meeting these deadlines. SEC also has provided recovery goals and business continuity best practices to exchanges and ECNs that conduct securities trading in the United States. In our 2003 report, we recommended that SEC work with the industry to develop such goals and sound business continuity practices and identify organizations that should follow them. In September 2003, SEC issued a policy statement that establishes business continuity principles to be followed by the organizations that execute trades in securities, including the NYSE, the Nasdaq Stock Market, Inc.
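The backup-site criteria summarized above amount to a short checklist: no shared power, no shared telecommunications, no shared labor pool, and routine use or testing of the arrangements. The sketch below paraphrases those criteria; the data structure, field names, and pass/fail logic are invented for illustration and are not drawn from the interagency paper's actual examination procedures.

```python
# Illustrative checklist for the sound-practice criteria (hypothetical fields).
from dataclasses import dataclass

@dataclass
class BackupSite:
    same_power_grid: bool     # shares electrical infrastructure with the primary site
    same_telecom_hub: bool    # shares telecommunications infrastructure
    same_labor_pool: bool     # staffed from the same metropolitan area
    tested_this_year: bool    # routine use or testing of recovery arrangements

def meets_sound_practices(site: BackupSite) -> bool:
    """True if the backup site avoids the shared risks the paper identifies."""
    return (not site.same_power_grid
            and not site.same_telecom_hub
            and not site.same_labor_pool
            and site.tested_this_year)

# An in-region backup shares power, telecom, and staff with the primary site.
in_region = BackupSite(True, True, True, tested_this_year=True)
out_of_region = BackupSite(False, False, False, tested_this_year=True)

print(meets_sound_practices(in_region))       # False: shared risks remain
print(meets_sound_practices(out_of_region))   # True: independent infrastructure
```

The point of the checklist form is that distance alone is not the test; a nearby site on a different grid, carrier, and labor pool could pass, while a distant site served by the same infrastructure would not.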
(NASDAQ), the regional stock exchanges, the options exchanges, and ECNs, which match buy and sell orders for securities. The business continuity principles SEC published include establishing a business continuity plan that anticipates the resumption of trading no later than the next business day following a wide-scale disruption; maintaining geographic diversity between primary and backup sites; ensuring the full resiliency of important shared information systems, such as market data collection and dissemination systems; and testing the effectiveness of backup arrangements in recovering from wide-scale disruptions. SEC expects the securities markets and ECNs to implement business continuity plans reflecting these principles no later than the end of 2004. According to SEC staff, they are monitoring the progress of the exchanges and ECNs in implementing the policy statement through their examinations of these organizations. In addition to establishing recovery goals, SEC has taken additional actions to ensure that sufficient venues for trading would likely be available after a major disaster. As we noted in our 2003 report, SEC staff have asked NYSE and NASDAQ to be prepared to trade the other's securities should one trading floor go down. Officials at both of these markets said they have made the necessary system changes and have tested their members' ability to trade the other markets' securities. SEC officials said that they had assessed the ability of these two organizations to provide such backup and were confident that these markets had the necessary capacity and systems to do so. If neither NYSE nor NASDAQ is able to resume trading, ECNs and regional exchanges would have to assume the trading of the stocks that are normally traded on those markets.
SEC staff said that, based on discussions with ECN officials and information obtained from inspections of these entities, the ECNs and regional exchanges collectively have sufficient capacity to take on the significant additional trading volume that would result from such an event. Although none of the organizations involved (NYSE, NASDAQ, ECNs, and regional exchanges) are required to assume such additional trading activity, according to SEC staff these organizations all have a strong business incentive and competitive motivation to do so. Finally, SEC approved business continuity goals for the broker-dealers that conduct trading in U.S. securities markets. In April 2004, SEC approved essentially identical rules from NASD and NYSE that require their members to develop business continuity plans. According to these rules, the broker-dealer members of these organizations must develop business continuity plans that address various elements, including data backup and recovery, alternate means of communication with customers, alternate physical locations for employees, and consideration of the impacts to critical customers and counterparties. These rules do not require trading firms to actually have plans to resume operating or trading activities after a disaster. Instead, if a disaster occurred and broker-dealers were unable to continue operating, the rules require broker-dealers to have procedures to ensure that they could promptly provide customers with access to their funds and securities. These rules appear to respond to our 2003 recommendation that SEC work with the securities industry to develop business continuity guidelines that, at a minimum, require broker-dealers to allow customers to readily access their cash and securities. NYSE expected its members to implement its rule by August 5, 2004, and NASD expected implementation by September 10, 2004.
Although the actions securities and banking regulators have taken will likely improve the preparedness of the securities markets to withstand future disruptions, SEC has not conducted the comprehensive assessments that would allow it to better ensure that trading in the securities markets could promptly resume following a wide-scale disaster. Preparing for trading activities to resume in a smooth and timely manner would appear to be a regulatory goal for SEC, which is specifically charged with maintaining fair and orderly markets. Furthermore, as previously noted, financial regulators expect markets to resume both clearing and trading activities within 1 business day or less. In addition, according to Treasury staff responsible for its critical infrastructure protection program, ensuring that markets are not closed for lengthy periods is important to maintaining investor confidence during the uncertainty that accompanies major disasters. SEC officials said that if the organizations and firms expected to adhere to the guidance and best practices in the interagency paper and SEC's policy statement did so, U.S. securities markets would be able to recover even from an attack or disaster that resulted in wide-scale damage or disruption. However, SEC officials explained that they do not have specific authority to require broker-dealers to participate in the markets to any degree, and neither the interagency paper on clearing and settlement, the SEC policy statement, nor the NYSE and NASD business continuity rules currently require individual broker-dealers to be prepared to resume their trading operations following a disaster. Although the ability to resume trading will also depend on whether sufficient numbers of trading firms are willing and able to resume operations, concerns persist over the potential readiness of and the threat of disruption to these firms.
As we discussed in our 2003 report, part of the delay in reopening the trading markets after the September 2001 attacks was attributable to the difficulties that some broker-dealers faced in recovering their trading operations. As we noted previously in this report, some of the key trading firms continue to face increased risk that their operations would be disrupted, and some acknowledged that they may not be able to resume trading in some cases. Furthermore, in August 2004, DHS announced that intelligence had been received that terrorists may have targeted the facilities of individual U.S. banks and broker-dealers as well as other financial-related entities for potential attacks. Although SEC had taken some steps to assess broker-dealer readiness, it had not done a systematic analysis to determine whether sufficient numbers of firms would be capable of resuming trading within regulators' current expectations. SEC staff said they were aware of this risk and had done some informal assessments of where major broker-dealer facilities are located. The staff also noted that some firms could likely use staff located elsewhere in the country or in foreign locations to trade on U.S. markets. However, officials at some of the key firms we contacted indicated that they did not always have sufficient numbers of trained staff elsewhere who could assume their U.S. trading activities. SEC officials told us in June 2004 that SEC would begin evaluating broker-dealers' trading staff arrangements and, where appropriate, ask firms to voluntarily address the risk posed by having their trading staff in single locations in the same geographic area as other such organizations. These officials said that SEC did not yet have a time frame in which firms would complete such actions and acknowledged that such organizations could have valid business reasons for not taking those actions.
For example, relocating trading staff or spreading them across more than one location can be expensive and reduce the efficiency of a firm's operations. SEC officials also told us that if a wide-scale disaster disrupted trading at a number of broker-dealers in one geographic area, firms outside that area could step in and conduct trading. Such firms could include the regional broker-dealers located around the country. However, SEC staff had not conducted a full analysis of the number of firms, where they are located, or the amount of trading volume they normally handle. These firms also would need sufficient staffing and financial resources to support increased trading volumes. Since our 2003 report, SEC has acted to improve the ARP program but has not addressed other long-standing issues that hamper the effectiveness of the program and hinder SEC's oversight. These issues include insufficient resources with the appropriate expertise to increase the frequency, depth, and comprehensiveness of its examinations and the lack of a rule that mandates compliance with the ARP program's tenets and examination recommendations. The ARP program also appears to have limitations in its ability to oversee information security issues. Given the limitations that have affected the ARP program over time, continued assessment of the ARP program's placement within SEC's organizational structure might identify options that could better ensure that it receives the appropriate resources to perform its important mission. SEC created the ARP program in 1989 in response to operational problems that markets experienced during the 1980s at exchanges and clearing organizations and, later, ECNs. The program addresses operations risk issues at these entities, including physical and information security and business continuity.
SEC did not create rules for these entities to follow but instead issued two ARP statements that provided best practices in various information technology and operational areas with which the exchanges and clearing organizations would be expected to comply voluntarily. As part of the ARP program, these entities (among them, some of the critical organizations we reviewed for this report) are expected to have the relevant aspects of their operations reviewed periodically by independent reviewers, which can include the entities' own internal auditors or external organizations, such as accounting firms or information security consultants. In addition, SEC's ARP staff conduct periodic on-site reviews of these organizations to assess selected information technology or operational issues and make recommendations for improvements when necessary. During any examination, ARP program staff analyze the risks faced by each entity to determine which are the most important to review. As a result, ARP staff are not expected to review every issue specific to an entity during each examination. SEC staff said they have made improvements to the ARP program. SEC officials said they have placed more emphasis on monitoring the status of the recommendations made as a result of ARP reviews, with the result that they can better determine whether entities within the program implement the recommendations. ARP staff meet quarterly with ARP management to review the status of and progress on any outstanding ARP recommendations. As a result, ARP staff have more frequent contact with the entities they examine to obtain information about the status of recommended actions. According to these officials, this more frequent follow-up lets the exchanges, clearing organizations, and ECNs know that they cannot let action on recommendations wait until the next ARP review, which can be several years away.
ARP officials said that as a result of these efforts, they have been able to close outstanding recommendations and indicated that the level of cooperation they receive from the entities has improved. SEC staff also said that a recent reorganization within its Division of Market Regulation improved program effectiveness. According to SEC staff, in November 2003, SEC merged ARP program staff with other Division of Market Regulation staff that conducted surveillance of trading in the markets using information systems. While remaining within the Division of Market Regulation, this combined group is now called the Office of Market Continuity. Although the merger only marginally increased the number of staff allocated to the ARP program (from 10 to 11 staff and a new Assistant Director), SEC staff said the merger gave them access to some additional staff resources and also increased the visibility of the ARP program within SEC. These additional staff are not examiners but can be used to draft letters and research legal issues. Although it has taken some actions to improve the ARP program, SEC still has not addressed weaknesses that have hampered the effectiveness of the program, such as making ARP a rule-based program and improving ARP's staffing resources and expertise. As we reported in 2001 and 2003, the entities subject to the ARP program had not always implemented or addressed significant ARP staff recommendations, including some related to inadequate backup facilities, security weaknesses, and inadequate information system processing capacity. Some of these unaddressed weaknesses later led to problems. For example, one organization experienced problems related to ensuring adequate processing capacity that delayed the implementation of decimal pricing by all securities markets for 3 months.
In another instance, SEC staff raised concerns about the lack of a backup operating facility at an entity that had its primary facility in the area that would later be affected by the 2001 terrorist attacks. In some cases, organizations subject to ARP were also not providing the reports of system changes and other events that SEC expects to receive under the program. To address these issues, we recommended in our 2003 reports that SEC issue a rule that would make adherence to tenets of the ARP program and the recommendations of its staff mandatory for exchanges and clearing organizations. In contrast, ECNs have had to comply with ARP recommendations since 1998, when SEC adopted a rule increasing regulatory scrutiny of alternative trading systems. SEC's Inspector General has also expressed similar concerns about compliance with ARP program recommendations. SEC officials said they drafted a rule making exchange and clearing organization compliance with ARP tenets mandatory but had not yet submitted it for review by the SEC Commissioners. SEC staff told us that the level of cooperation with recommendations and other expectations that they have received from the entities subject to the ARP program has improved since the 2001 terrorist attacks. However, they acknowledged that without a rule, SEC lacks greater assurance that these organizations will continue to comply with ARP recommendations, particularly key recommendations that could be costly for the entities. SEC also has not fully addressed the adequacy of resources dedicated to the ARP program, another long-standing issue. Our 2001 and 2003 reports described how a lack of resources hampered the ability of the ARP program to oversee the operations of the entities it reviews. For example, we reported that these resource constraints affected the ARP program's ability to conduct frequent examinations.
In our 2003 report, we reported that the intervals between ARP examinations had exceeded 3 years for five of the seven critical financial market organizations that we reviewed, with the other two organizations not being reviewed for 6 years or more. According to SEC staff, they have developed a tiered examination schedule for the organizations subject to ARP. Under this schedule, first-tier organizations, including the clearing organizations and most active markets, are to be reviewed annually. Second-tier organizations are reviewed based on their risk assessment profile under a 3-year inspection cycle, and third-tier firms, such as small ECNs, are inspected for cause. The SEC staff said they have met this schedule thus far. As a result of these concerns, we recommended in 2003 that SEC expand the level of staffing and resources devoted to ARP if sufficient funds were available. Although SEC's overall resources have significantly increased in recent years (its funding increased 45 percent in 2003), as of May 2004, no significant additional resources had been allocated to the ARP program. SEC staff said the recent creation of the Office of Market Continuity provided them with access to some additional staff resources, as noted earlier, but demands on ARP staff also have grown. For example, in our 2003 report, we noted that ARP staff workload had expanded to cover entities with more complex technology and communications networks. As entities continue to implement new technologies and networks, ARP staff workload is likely to increase further. In August 2004, staff in SEC's Market Regulation Division said they would ask for additional staffing for the ARP program. The ARP program's ability to obtain and retain staff with sufficient technical skills has also been an issue in the past and may have affected its ability to effectively oversee information security issues at the entities it oversees.
In previous reports, we have described difficulties SEC has had in retaining qualified and experienced staff in its ARP program, as well as concerns of industry officials over ARP staff expertise. During this review, we identified examples where ARP staff could benefit from additional technical expertise. For example, reviews by internal and external reviewers are a key component of the ARP program, and SEC officials said they attempt to track all significant issues and recommendations to ensure they are addressed. However, we found that internal and external reviewers at some of the critical organizations we reviewed had identified important actions to improve the security of their information systems, but that the organizations had not implemented them. In addition, at some of the critical organizations, we identified important additional opportunities for improvements in information security that had not been previously identified by internal or external reviewers or by SEC's ARP staff. One way organizations can help ensure that their various functions receive the appropriate level of resources, including staff and expertise, is to ensure that those functions are properly aligned within the organization's overall structure. Currently, the ARP program is located within the Division of Market Regulation and, as such, is a small part of a larger division whose primary responsibility is to establish and maintain standards for the operation of fair, orderly, and efficient markets. As noted previously, SEC recently relocated the ARP program within the Division of Market Regulation, and SEC officials told us that this move has been beneficial and that they continue to assess the impact of the reorganization on the program's effectiveness. However, this move has not yet resulted in significant additional staffing or additional technical expertise specifically dedicated to the ARP program.
Other possible placements that might prove beneficial for the ARP program from a resource and expertise standpoint could include placing the ARP program with the other examination staff within SEC’s Office of Compliance Inspections and Examinations, or combining its staff with those having similar technical expertise within SEC’s Office of Information Technology. Realigning the ARP program within SEC could, however, have potential disadvantages. For example, having ARP staff within the Division of Market Regulation, as it is now, provides valuable expertise and information gathering abilities and allows this examination function to be linked with the related policy-making function. The securities market organizations we reviewed all had reduced the risk that their operations would be disrupted by terrorist attacks or other disasters. In addition, financial market participants and telecommunications organizations increased the resiliency of the critical telecommunications services necessary for the functioning of the markets. Further, financial regulators have issued guidance to these organizations that, if implemented, should greatly increase the ability of the markets to recover. However, as of May 2004, a number of the critical financial market organizations and the broker-dealers and banks that conduct significant trading activities remained at a greater risk of disruption than others from a wide-scale event because they lacked certain business continuity capabilities. The ability of U.S. financial markets to recover and resume operating in the wake of any future attacks or disasters depends upon the extent to which these critical market participants augment their business continuity capabilities or mitigate existing weaknesses. One of the lessons learned from the September 2001 attacks was that without key broker-dealers able to trade, the markets could not reopen. 
As we noted in our 2003 report, insufficient liquidity existed to open the markets during the week of the September 2001 attacks because of the considerable efforts required for broker-dealers to restore operations. However, SEC currently lacks adequate assurance that the actions of organizations that trade in the markets will be sufficient to ensure that this important activity can also resume. Although joint regulatory guidance addresses organizations' clearing and settlement activities, and SEC's own policy statement directs exchanges and ECNs to implement sound business continuity practices, the firms that conduct trading activities in U.S. markets are not similarly required to implement such practices, and SEC officials said they do not have specific authority to require broker-dealers to participate in the markets to any degree. Nevertheless, SEC has not fully assessed whether sufficient numbers of firms with staff capable of trading securities would be ready to operate after a wide-scale disaster. Similarly, although many other trading firms exist, including regional firms with sizeable operations located throughout the United States, SEC has not sufficiently analyzed the willingness and capabilities of these firms to step up and become the significant providers of liquidity necessary for fair and orderly trading to occur in the aftermath of a disaster. Once it conducts a more complete analysis of the likely readiness of trading firms to resume trading, SEC could use the results to identify actions that specific exchanges, clearing organizations, or trading firms could take to increase the likelihood that trading in the markets could resume when appropriate. Given that some disaster and damage scenarios are more likely than others, having SEC weigh the feasibility and costliness of any actions it identifies against the potential benefits and likelihood of such scenarios occurring appears warranted.
While SEC has made some enhancements to the ARP program, it still has not made other key improvements, including those we recommended in our 2003 report, that could better ensure that the program is as credible and as effective as possible. Given the importance of the work with which SEC's ARP staff are tasked, ensuring that they have a specific rule to mandate compliance with ARP program tenets and sufficient staff to conduct their oversight appears justified. While SEC has made progress in ensuring that exchanges and clearing organizations implement ARP staff recommendations, such voluntary cooperation may not always exist in the future, especially when ARP-recommended actions would be costly to an organization. The limited resources that SEC has devoted to ARP thus far have generally prevented it from conducting more frequent examinations and do not appear to have provided it with sufficient technical expertise to address important information security issues. While the ARP program was realigned within the Division of Market Regulation in November 2003 and SEC staff indicated that they are assessing the impact on the program's effectiveness, it is not yet clear whether this change will improve the program's ability to obtain sufficient additional resources and staff with the necessary expertise. Given that the functioning of the markets is critical to our nation's economy, taking steps to better ensure that the program used to oversee operational and information security issues at these entities has sound legal authority and adequate resources and expertise is warranted at this time. Such steps would include assessing whether the placement of the program within SEC's organizational structure is optimal for ensuring that it has adequate resources and staff expertise. To provide greater assurance that the critical trading that is conducted in U.S.
financial markets can resume, in as timely a manner as appropriate, after disruptions, we recommend that the Chairman, SEC, fully analyze the readiness of the securities markets to recover from major disruptions and work with industry and other federal agencies, as appropriate, to determine reasonable actions that would increase the likelihood that trading in the markets could resume when appropriate. In addition, to improve the effectiveness of SEC's ARP program, which oversees the preparedness of securities trading and clearing organizations for future disasters, we recommend that the Chairman, SEC, take the following three steps:
- Establish a definite time frame for the submission of a rule requiring exchanges and clearing organizations to engage in activities consistent with the operational practices and other tenets of the ARP program;
- Assess the adequacy of ARP staffing in terms of positions and technical skill levels, including information security expertise, given its mission and workload; and
- Continue to assess the organizational alignment of the ARP program within SEC.
We requested comments on a draft of this report from the heads, or their designees, of the Federal Reserve, OCC, Treasury, and SEC. The Federal Reserve and SEC provided written comments, which appear in appendixes II and III, respectively. The Federal Reserve, OCC, and SEC also provided technical comments, which we incorporated in the report as appropriate. SEC generally agreed with the report and its recommendations. The letter from SEC's Chairman noted that SEC has been working actively with the trading markets, core clearing organizations, and major market participants to strengthen the resiliency of the financial markets.
In addition, SEC's letter noted that it would be taking specific actions in response to our recommendations, including conducting an assessment of key broker-dealers' trading staff arrangements and the preparations of these firms to resume trading operations following a disaster. SEC also indicated that its Market Regulation Division is developing a proposed rule that would require exchanges and clearing organizations to engage in activities consistent with the operational practices and other tenets of the ARP program and that this rule should be submitted to the Commission during the first half of 2005. SEC stated that it is also currently assessing the adequacy of staffing and technical skill levels within the ARP program and that increased education for its staff, hiring new staff, and engaging consultants are all ways it could address its needs in this area. Finally, SEC noted that as part of the agency's routine strategic planning effort, it will continue to assess the organizational alignment of the ARP program within SEC. In its letter, the Federal Reserve noted that addressing the risks posed by the September 11 attacks continues to be a priority for the organization and that it is continuing efforts to improve the resiliency of the financial system. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary, Treasury; the Chairman, SEC; the Chairman, Federal Reserve; the Comptroller of the Currency; and others who request them. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
The objective of this report is to describe the progress that financial market participants and regulators have made since our 2003 report in reducing the likelihood that terrorist attacks and other disasters would disrupt market operations. Specifically, we assessed (1) actions that critical securities market organizations and key market participants undertook to reduce their vulnerabilities to physical or electronic attacks and to improve their business continuity capabilities; (2) steps that financial market participants, telecommunications industry organizations, and others took to improve the resiliency of telecommunications systems and infrastructure; (3) financial regulators' efforts to ensure the resiliency of the financial markets; and (4) the progress the Securities and Exchange Commission (SEC) has made in improving its Automation Review Policy program, which oversees security and operations issues at exchanges, clearing organizations, and electronic communications networks (ECNs). As in our previous report, for purposes of our analysis we selected seven organizations whose ability to operate is critical to the overall functioning of the financial markets. We made these categorizations by determining whether viable immediate substitutes existed for the products or services the organizations offer or whether the functions they perform were critical to the overall markets' ability to function. To maintain the security and the confidentiality of their proprietary information, we agreed with these organizations that we would not discuss their efforts to address physical and information security risks and ensure business continuity in a way that could identify them.
To assess actions that critical securities market organizations took to reduce their vulnerabilities to physical or electronic attacks and to improve their business continuity capabilities, we visited their facilities, reviewed relevant business continuity policies, and interviewed officials at the organizations. Specifically, to determine what steps these seven organizations were taking to reduce the risks to their operations from physical attacks, we conducted on-site "walkthroughs" of their facilities, reviewed their security policies and procedures, and met with key officials responsible for physical security to discuss these policies and procedures. We compared these policies and procedures with 52 standards developed by the Department of Justice for federal buildings. Based on these standards, we evaluated the physical security efforts across several key operational elements, including measures taken to secure perimeters, entryways, and interior areas, and whether organizations had conducted various security planning activities. To identify types of tests an organization can perform to monitor the effectiveness of physical security measures in place, we reviewed publications and guidance, such as that contained in our Executive Guide on Information Security Management, and obtained information from security experts within our office, including our Office of Special Investigations. We obtained information on the types and extent of physical security testing performed by the organizations at their primary locations and compared it with the information we collected. We also reviewed publications and guidance, such as those issued by the Centers for Disease Control and Prevention, the Federal Emergency Management Agency, and the Lawrence Berkeley National Laboratory, to identify high-level countermeasures that an organization could take to mitigate the chemical, biological, and radiological (CBR) threat.
For each primary facility, through interviews with the organizations' security officials, we identified and compared their actions against our listing of countermeasures. To determine what steps these seven organizations were taking to reduce the risks to their operations from electronic attacks, we reviewed the security policies of the organizations we visited and reviewed documentation of their system and network architectures and configurations. We also compared their information security measures with those recommended for federal organizations in the Federal Information System Controls Audit Manual, other federal guidelines and standards, and various industry electronic security best practice principles. Using these standards, we attempted to determine, through discussions and document reviews, how these organizations had addressed various key operational elements for information security, including how they controlled access to their systems, how they detected intrusions, what responses they made when such intrusions occurred, and what assessments of their systems' vulnerabilities they had performed. To determine what steps these seven organizations had taken to ensure they could resume operations after an attack or other disaster, we discussed their business continuity plans (BCPs) with staff and visited their facilities. We reviewed their BCPs and assessed them against practices recommended for financial organizations, including bank regulatory guidance. Among the operational elements we considered were the existence and capabilities of backup facilities, whether the organizations had procedures to ensure the availability of critical personnel and telecommunications, and whether they completely tested their plans.
In evaluating these organizations' backup facilities, we attempted to determine whether these organizations had backup facilities that would allow them to recover from damage to, or inaccessibility of, their primary sites resulting from a wide-scale disaster. We did not directly observe the operation of these backup sites but relied on documentation, including backup facility test results, provided by the organizations. We also discussed the business continuity capabilities and improvements made by eight large broker-dealers and banks that collectively represented a significant portion of trading and clearing volume on U.S. securities markets. To determine the extent to which critical financial market organizations reduced the likelihood that their operations might be disrupted by future disasters, we also examined the telecommunications continuity practices they were following. To identify sound telecommunications-related continuity practices, we first reviewed business continuity planning guidance published by the Business Continuity Institute, the Federal Financial Institutions Examination Council, and other sources. Based on our review of those materials, we identified five principal practices that organizations should follow to plan for the availability of telecommunications services that are important to their continuing operations. We also discussed our selection of practices for use as criteria with a private-sector business continuity expert to affirm that our selection of these five practices was appropriate. We then examined the extent to which the critical organizations followed these practices by reviewing network documentation, continuity plans, and testing reports where available, and discussed with organization telecommunications managers their network continuity strategies and the practices they followed to mitigate perceived continuity risks.
We assessed those strategies, practices, and related documentation against the five practices we identified. To determine how financial and telecommunications industry organizations, federal and local government entities, and supporting telecommunications service providers further improved telecommunications service resiliency, including improved infrastructure diversity and recoverability, we reviewed reports and related documentation prepared by three presidential advisory committees: the National Infrastructure Advisory Council, the National Security Telecommunications Advisory Committee, and the Network Reliability and Interoperability Council. These reports and documentation evaluated infrastructure interdependencies and network diversity challenges, and they identified practices that telecommunications carriers and large organizations might follow to better prepare for and recover from future network disruptions. We also reviewed plans and documentation developed by a critical financial organization to implement and operate a private network for the benefit of financial market participants. In addition, we met with managers at the Board of Governors of the Federal Reserve System (the Federal Reserve) and the federal National Communications System to obtain data on the use of federal national security/emergency preparedness programs by the financial industry to improve the recoverability of important telecommunications services. We also met with New York City officials to review the status of their efforts to reestablish an agreement to coordinate and monitor the recovery of local infrastructure in the event of future service outages. Finally, we met with managers at three large telecommunications carriers to review how they were rebuilding local infrastructure in New York City and the steps they had taken to review and revise their own continuity plans.
To assess financial regulators' efforts to ensure the resiliency of the financial markets, including the progress SEC has made in improving its program for overseeing security and operations issues at exchanges, clearing organizations, and ECNs, we reviewed relevant regulations and interviewed officials at SEC, the Federal Reserve, the Office of the Comptroller of the Currency, and the Department of the Treasury. We also discussed initiatives to improve responses to future crises and improve the resiliency of the financial sector and its critical telecommunications services with representatives of industry trade groups, including the Bond Market Association and the Securities Industry Association. For our reviews, we relied on documentation and descriptions provided by market participants and regulators and on reviews conducted by other organizations. When feasible, we also directly observed controls in place for physical security, electronic security, and business continuity at the organizations assessed. We did not test these controls by attempting to gain unauthorized entry or access to facilities or information systems, nor did we directly observe testing of business continuity capabilities. We performed our work from September 2003 through August 2004 in accordance with generally accepted government auditing standards. The Department of Homeland Security (DHS), created to help coordinate the efforts of organizations and institutions involved in protecting the nation against terrorist attacks, has essentially delegated to Treasury this coordinating role within the banking and finance sector. In 2002, the Homeland Security Act created DHS, which was given responsibility for developing a national plan to protect the nation's critical infrastructure.
Homeland Security Presidential Directive 7 (HSPD-7), issued in December 2003, further stated that the Secretary of DHS would be responsible for coordinating the overall national effort to enhance the protection of the critical infrastructure of the United States. HSPD-7 also stated that it is U.S. policy to enhance the protection of these critical infrastructures against terrorist attacks that could, among other things, damage the private sector's capability to ensure the orderly functioning of the economy. To fulfill these objectives, HSPD-7 directs the Secretary of DHS to work closely with other federal departments and agencies, and it designates specific agencies to coordinate efforts within certain sectors. Within the banking and finance sector, Treasury was given responsibility for collaborating with all relevant federal, state, and local officials, as well as the private sector. To fulfill this responsibility, Treasury coordinates with other federal financial regulators through the Financial and Banking Information Infrastructure Committee (FBIIC), whose members include representatives of the various regulators of banks, broker-dealers, futures commission merchants, and housing government-sponsored enterprises, as well as other related organizations. Treasury coordinates its collaboration with the private sector through the Financial Services Sector Coordinating Council (FSSCC), whose members include representatives from exchanges, clearing organizations, and banking and securities trade associations. According to Treasury officials, they coordinate with DHS in several ways. For example, a FBIIC member attends weekly meetings of DHS's Directorate of Information Analysis and Infrastructure Protection (IAIP), which identifies and assesses threats and issues timely warnings on those threats.
According to Treasury, the FBIIC member at those meetings provides input on the needs of the financial sector, as well as on the relevancy for that sector of any identified threats. In addition, Treasury has worked with DHS to plan disaster recovery exercises, such as the TOPOFF exercises, which simulate physical attacks. Treasury is also working with DHS to continue developing "ChicagoFIRST," an emergency preparedness program designed to coordinate activities among financial sector participants and federal, state, and local government officials. Treasury is promoting this program as a model for other cities to implement. Finally, the Secretary of the Treasury, along with the Director of the Office of Homeland Security, is a member of the Homeland Security Council, which ensures the coordination of homeland security activities among executive departments and agencies. Representatives of the Homeland Security Council, in turn, are members of FBIIC. According to FSSCC officials, they are interacting with DHS in at least two ways. First, DHS has asked FSSCC to prepare an updated version of the banking and finance sector's portion of the national strategy for critical infrastructure assurance, the first version of which was completed in May 2002. FSSCC expected to complete the updated version in June 2004. Second, FSSCC representatives have taken part in quarterly meetings between DHS and other sector coordinators. According to FSSCC officials, this group has produced a matrix outlining the responsibilities of the different sectors. In addition to the individuals named above, Edward Alexander, Gerald Barnes, Lon Chin, West Coile, Kevin E. Conway, Kirk Daubenspeck, Ramnik Dhaliwal, Patrick Dugan, Edward Glagola, Harold Lewis, Thomas Payne, Barbara Roesmann, Eugene Stevens, Patrick Ward, Christopher Warweg, and Anita Zagraniczny made key contributions to this report. Critical Infrastructure Protection: Establishing Effective Information Sharing with Infrastructure Sectors.
GAO-04-699T. Washington, D.C.: April 21, 2004. Securities and Exchange Commission: Preliminary Observations on SEC's Spending and Strategic Planning. GAO-03-969T. Washington, D.C.: July 23, 2003. Potential Terrorist Attacks: Additional Actions Needed to Better Prepare Critical Financial Market Participants. GAO-03-251. Washington, D.C.: February 12, 2003. Potential Terrorist Attacks: Additional Actions Needed to Better Prepare Critical Financial Market Participants. GAO-03-414. Washington, D.C.: February 12, 2003. Critical Infrastructure Protection: Effort of the Financial Services Sector to Address Cyber Threats. GAO-03-173. Washington, D.C.: January 30, 2003. SEC Operations: Increased Workload Creates Challenges. GAO-02-302. Washington, D.C.: March 5, 2002. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002. Information Systems: Opportunities Exist to Strengthen SEC's Oversight of Capacity and Security. GAO-01-863. Washington, D.C.: July 25, 2001. Homeland Security: Efforts to Improve Information Sharing Need To Be Strengthened. GAO-03-760. Washington, D.C.: June 29, 2001. Human Capital: A Self-Assessment Checklist for Agency Leaders, Version 1. GAO/OCG-00-14G. Washington, D.C.: September 2000. Federal Information System Controls Audit Manual, Volume I: Financial Statement Audits. GAO/AIMD-12.19.6. Washington, D.C.: January 1999. Executive Guide on Information Security Management: Learning from Leading Organizations. GAO/AIMD-98-68. Washington, D.C.: May 1, 1998. The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. | In February 2003 reports, GAO identified actions needed to better prepare critical financial market participants for wide-scale disasters, such as terrorist attacks. To determine progress made since then, GAO assessed (1) actions that critical securities market organizations took to improve their ability to prevent and recover from disruptions, (2) actions that financial market and telecommunications industry participants took to improve telecommunications resiliency, (3) financial regulators' efforts to ensure the resiliency of the financial markets; and (4) SEC's efforts to improve its program for overseeing operations risks at certain market participants. The critical securities market organizations and market participants GAO reviewed had taken actions, since GAO's previous reports, to further reduce the risk that their operations would be disrupted by terrorist attacks or other disasters. For example, they had added physical barriers, enhanced protection from hackers, or established geographically diverse backup facilities. Still, some entities had limitations that increased the risk that a wide-scale disaster could disrupt their operations and, in turn, the ability of securities markets to operate.
For example, three organizations were at a greater risk of disruption than others because of the proximity of their primary and backup facilities. In addition, four of the eight large trading firms GAO reviewed had all of their critical trading staff in single locations, putting them at greater risk than others of a single event incapacitating their trading operations. Geographic concentration of these firms could leave the markets without adequate liquidity for fair and efficient trading in a potential disaster. Since GAO last reported, actions were taken to improve the resiliency of the telecommunications service critical to the markets, including creating a private network for routing data between broker-dealers and various markets. Maintaining telecommunications redundancy and diversity over time will remain a challenge. Financial market regulators also took steps that should reduce the potential that future disasters would disrupt the financial markets, such as issuing business continuity guidelines for financial market participants designed to reopen trading markets the next business day after a disruption. However, despite the risk posed by the concentration of broker-dealers’ trading staffs and the lack of regulations requiring broker-dealers to be prepared to operate following a wide-scale disruption, SEC had not fully analyzed the extent to which these organizations would be able to resume trading following such a disruption. Furthermore, while SEC has made some improvements to the voluntary program it uses to oversee the information security and business continuity at certain critical organizations, it has not taken steps to address key long-standing limitations. Despite past difficulties obtaining cooperation with recommendations and a lack of resources to conduct more frequent inspections, SEC had not proposed a rule making this program mandatory or increased the level of the program's resources--as GAO has previously recommended.
In addition, SEC appeared to lack sufficient staff with expertise to ensure that the organizations in the program adequately addressed the issues identified in internal or external reviews, or to identify other important opportunities for improvement. Although SEC staff continue to assess the impact of a recent reorganization involving the program’s staff, whether the current placement of the program within SEC is adequate for ensuring that the program receives sufficient resources is not yet clear. |
The Convention on Nuclear Safety, which became effective for the ratifying countries on October 24, 1996, seeks to achieve and maintain a high level of safety for all nations that operate civil nuclear power reactors. (According to the International Atomic Energy Agency [IAEA], as of December 31, 1995, 32 countries operated 437 nuclear power reactors.) The U.S. government views the Convention as one of the chief policy instruments to encourage Russia and other countries with reactors that do not meet Western safety standards to improve safety. The Convention calls on countries to take action to, among other things, (1) establish and maintain a legislative framework and independent regulatory body to govern the safety of nuclear installations; (2) establish procedures to ensure that technical aspects of safety, such as the siting, design, and construction of nuclear power reactors, are adequately considered; and (3) ensure that an acceptable level of safety is maintained throughout the life of the installations by such things as giving a priority to safety, providing adequate financial resources, and establishing a quality assurance program. The Department of State, the Department of Energy (DOE), and the Nuclear Regulatory Commission (NRC) have participated in the development and implementation of the Convention. NRC, in its capacity as the U.S. civilian nuclear regulatory authority, will play a central role in implementing U.S. obligations under the Convention. The Convention establishes IAEA as the Convention’s secretariat primarily to (1) convene and prepare for the meetings and (2) transmit reports and information to member countries. The method to review countries’ compliance with the Convention has not been finalized. The Convention relies on the ratifying countries to prepare reports (self-assessments of their nuclear power programs) that are expected to describe how they are complying with the Convention. 
However, the reports’ level of detail and specifics and the process for examining the reports have not been fully determined. Although U.S. and IAEA officials believe the Convention will encourage openness about countries’ safety programs, it is uncertain how much information will be made available to the public. The Convention does not impose sanctions for noncompliance but seeks to encourage compliance through peer pressure. To determine compliance with the terms of the Convention, countries are required to meet periodically to review one another’s safety programs. State, DOE, and NRC officials have stated that this peer review process is central to the Convention’s success, noting that it will enable the countries’ safety practices to be brought before the “bar of world public opinion.” The Convention does not specify the form and content of the peer review process but calls on the parties to (1) submit self-assessment reports of the measures they have taken to implement the Convention and (2) hold meetings to review these reports. Representatives of over 40 countries, including the United States, have met on several occasions over the past 2 years to develop options for implementing the peer review process. The United States has chaired these sessions. In June 1996, the representatives agreed on a model to implement the peer review process, but final decisions will not be made until all of the ratifying countries meet no later than April 1997, as required by the Convention. As the process is currently envisioned, the five countries with the most operating nuclear reactors—the United States, France, Japan, the United Kingdom, and Russia—would participate in separate groups made up of several other countries that have ratified the Convention. The remaining countries are placed in each group on the basis of the number of reactors in each country, as shown in table 1. 
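The country-grouping model just described (five anchor countries in separate groups, with the remaining parties allocated on the basis of reactor counts) can be sketched in code. The balancing rule used here, greedy assignment of each remaining country to the group with the fewest reactors so far, is purely illustrative; the report does not specify the allocation method the parties actually agreed on, and the function and data layout are hypothetical.

```python
# Hypothetical sketch of the Convention's country-grouping model:
# each of the five largest nuclear-power countries anchors its own
# review group, and the remaining ratifying countries are distributed
# to roughly balance the total reactor count per group.

def build_review_groups(anchors, others):
    """anchors: list of (country, reactor_count) for the five largest.
    others: list of (country, reactor_count) for remaining parties.
    Returns a dict mapping each anchor country to its group."""
    groups = {name: {"members": [name], "reactors": count}
              for name, count in anchors}
    # Greedy balancing (an assumption, not the Convention's rule):
    # assign the largest remaining country to the group that currently
    # has the fewest reactors.
    for name, count in sorted(others, key=lambda x: -x[1]):
        target = min(groups, key=lambda g: groups[g]["reactors"])
        groups[target]["members"].append(name)
        groups[target]["reactors"] += count
    return groups
```

A call such as `build_review_groups([("United States", 109), ("Russia", 29)], remaining)` would then yield one group per anchor, each listing its member countries and combined reactor count.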
Within this group setting, all countries would critically examine and review how each country is complying with the Convention. IAEA officials told us that the country-review groups form the core of the peer review process. NRC officials have expressed some concern about the potential grouping of countries. In their view, this approach may not provide the most meaningful, professional technical review. For example, the United States, which spent about $89 million through March 1996 to improve the safety of Soviet-designed reactors, would not be in the same review group as Russia or Ukraine, countries that operate the majority of these reactors. In addition to its ongoing safety assistance program, the United States also has significant technical expertise and years of practical experience working to improve the safety of these reactors and improve these countries’ civilian nuclear regulatory capabilities. The United States had earlier supported a different approach in which each country’s self-assessment would be reviewed by separate subject matter committees. This review would be based on three main elements of the Convention: (1) governmental organization; (2) siting, design, and construction; and (3) operations. The U.S.-favored approach was replaced by the country-grouping model proposed by France and the United Kingdom. Representatives of these countries maintained that the smaller groups of countries would allow for a more thorough and unified review of a country’s report than would a functional review of part of a country’s report, as initially envisioned by the United States. The Convention states that each country shall have a reasonable opportunity to discuss and seek clarification of the reports of any other party at the review meeting.
As a result, NRC and IAEA officials believe that regardless of how the countries are ultimately grouped, the United States will have ample opportunity to review and comment on the self-assessment reports of all countries. For example, according to NRC and IAEA officials, countries may be permitted to participate in other groups’ meetings as observers and discuss their concerns in supplemental meetings. Countries are also expected to have opportunities to comment on the self-assessment reports at general sessions held during the review meeting. The detail and specifics of the self-assessment reports—which serve as the basis for the meeting of the parties—have not been finalized. These reports are expected to describe how the parties are complying with the Convention. Because of the differences in countries’ nuclear safety programs and available resources, NRC officials anticipate an unevenness in the quality and detail of the reports. In their view, this unevenness could affect the level of review and analysis. U.S. officials also stated that the countries with a significant number of nuclear installations may produce a generic rather than a plant-specific report. The public dissemination of information about the countries’ progress in meeting the Convention’s obligations can play a key role in influencing compliance, according to some experts familiar with international agreements that rely primarily on peer review. Although U.S. and IAEA officials believe the Convention will encourage greater openness about many countries’ safety records and programs, it is uncertain how much information resulting from the periodic meetings will be made available to the public. According to NRC officials, the countries can limit the distribution of their reports. These officials noted, however, that the United States plans to make its report available to the public. 
Although the Convention provides for the public distribution of a report summarizing the issues discussed and decisions reached during the review meeting, preliminary information indicates that this report is unlikely to identify any country by name. IAEA officials told us that they do not expect this report to provide detailed information about the key issues addressed during the review meeting. According to IAEA, the Convention explicitly prohibits nongovernmental organizations from participating in the meetings. NRC officials told us, however, that these organizations, such as public advocacy or industry groups, might participate as members of their national delegation or be called upon to review and comment on self-assessment reports. U.S. nuclear industry representatives told us that they would like to assist in developing the U.S. report and participate in the meeting of the parties. NRC officials acknowledged that the Convention does not specifically provide for the kind of openness they would prefer, but they believe that over time, more information will be made available to the public through the Convention process. To prepare for and attend the first review meeting in 1999, the United States estimates it could spend as much as $1.1 million. As the Convention’s secretariat, IAEA will also incur costs to administer these meetings. IAEA’s costs, which the United States will partially fund, have not been fully identified but could range as high as about $10 million, according to a 1993 estimate. NRC officials told us that they believe IAEA’s costs will be significantly less—about $1 million. The United States estimates that it could spend between $700,000 and $1.1 million through fiscal year 1999 to prepare for and attend the first review meeting, which is expected to be held in April 1999. Additional costs to participate in subsequent review meetings, which are expected to be held every 3 years, have not been estimated.
Officials from NRC, State, and DOE told us that the costs associated with the first review meeting are based on (1) participating in four planning meetings held between December 1994 and June 1996 to develop the Convention’s draft policies and procedures, (2) preparing the first U.S. self-assessment report, (3) reviewing other countries’ reports, and (4) participating in the April 1997 preparatory meeting and the first review meeting. The agencies’ estimated costs include the existing and planned travel costs associated with attending meetings at IAEA headquarters in Vienna, Austria, and salary and benefit costs related to the time spent preparing for these meetings. Figure 1 shows the breakdown of estimated costs by agency through the first meeting of the parties. Salary and benefits constitute 94 percent of the agencies’ costs; the remainder is for travel and per diem expenses. The salary and benefit costs result from the efforts of agency staff to prepare the first U.S. self-assessment report, review all other countries’ reports as part of the peer review process, and participate in all aspects of the first review meeting. (See app. II for a breakdown of expenditures by each agency.) In late 1993, a working group that participated in the drafting of the Convention estimated that IAEA’s costs could range from $10,800 to $10.3 million for the first review meeting. NRC officials told us that they believe that IAEA’s actual costs will be significantly less—about $1 million to administer the first review meeting. The factors affecting IAEA’s costs primarily involve the number of languages used to conduct the meeting of the parties and the corresponding translation and interpretation services. IAEA’s costs to administer future review meetings have not been estimated. The Convention states that IAEA will bear the cost of administering the meeting of the parties.
IAEA’s cost of holding the meeting in Vienna is expected to be funded from IAEA’s operating budget, which the United States supports through an annual 25-percent contribution. IAEA’s 1997 and 1998 budget shows that IAEA plans to dedicate about $330,000 in 1997 and 1998 for Convention-related activities. According to an NRC official, IAEA, whose regular budget has been subject to a policy of “zero real growth” since 1985, may have difficulty financing the initial review meeting. As a result, this official said that additional financial assessments of participating countries may be warranted to provide the necessary funds for IAEA to administer the Convention. The need for additional financial assessments will have to be addressed during the April 1997 preparatory meeting. NRC officials told us they were concerned about IAEA’s potential costs to administer the Convention and that the United States will seek to keep these costs to a minimum. The Convention also permits participating countries to request, after receiving consensus approval from the other countries, additional support and administrative services from IAEA. IAEA’s Deputy Director General for Nuclear Safety told us that it is likely that IAEA will receive requests for such assistance and would cover these costs from its regular budget. NRC and DOE officials told us that they believe the Convention will not stimulate any significant requests for additional assistance to upgrade unsafe reactors. An NRC official told us that as a result of the meetings, there may be some reordering of assistance priorities, but he noted that requirements have already been identified over the past several years through regular multilateral and bilateral assistance channels. A DOE official noted that by the time the first meeting of the parties occurs in 1999, some Western assistance efforts should be winding down, and many safety upgrades will have already been made. 
IAEA’s Deputy Director General for Nuclear Safety told us, however, that the Convention may uncover additional safety problems that require attention. As a result, the countries with the most acute safety problems may seek to use the Convention process as leverage to obtain additional nuclear safety assistance. We provided copies of a draft of this report to NRC for its review and comment. NRC obtained and consolidated additional comments from the departments of State and Energy. On December 23, 1996, we met with NRC officials, including the Director, Office of International Programs, and State’s Director, Nuclear Energy Affairs, to discuss their comments. In general, these officials agreed with the facts and analysis presented. They gave us additional clarifying information, and we revised the text as appropriate. The officials noted that the Convention is fairly well developed because of the significant amount of work already done by various countries’ representatives during several preliminary meetings. In their opinion, it is very important that the United States ratify the Convention before the April 1997 preparatory meeting in order to (1) shape the peer review process to create the most rigorous and systematic analysis of the self-assessment reports, (2) keep the implementation costs as low as possible, and (3) use the United States’ diplomatic and political strength to make the Convention an integral component of a network of binding international legal instruments that enhance global safety. We also provided IAEA with a copy of the draft report. In its comments, IAEA, including the Deputy Director General for Nuclear Safety, suggested some technical revisions to the text, which we incorporated as appropriate. IAEA noted that the April 1997 preparatory meeting will provide countries with the opportunity to decide on the review process and factors that will determine the costs to implement the Convention. 
IAEA also views the Convention as a major accomplishment that will assist in achieving and maintaining a high level of safety worldwide. In its view, the Convention will provide for a degree of openness about national safety programs that has not existed in the past. To obtain information on how the Convention will be reviewed for compliance, we examined relevant parts of the Convention and interviewed agency officials from the Department of State, DOE, and NRC and other officials knowledgeable about international agreements from the Congressional Research Service, Georgetown University Law Center, and New York University. We also discussed the Convention with officials from IAEA, including the Director General, the Deputy Director General for Nuclear Safety, and the Senior Legal Officer. These matters were also discussed with officials from the U.S. Mission to the United Nations System Organizations, Vienna, Austria, and the Nuclear Energy Institute, Washington, D.C. We also reviewed relevant documentation provided by these agencies and officials. To identify cost information, we obtained cost data from the Department of State, DOE, and NRC. We also obtained data developed by IAEA’s Division of Nuclear Safety. We did not independently verify the accuracy of these data. We performed our review from October 1996 through December 1996 in accordance with generally accepted government auditing standards. Copies of this report are being sent to the Secretaries of State and Energy, the Chairman of NRC, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3600 if you or your staff have any questions. Major contributors to this report are listed in appendix III. 
This appendix provides information on the costs that have been or may be incurred by the Nuclear Regulatory Commission (NRC), the Department of State, and the Department of Energy (DOE) in implementing the Convention on behalf of the United States. NRC, State, and DOE estimated together they could spend about $1.1 million in travel and salary and benefit costs to prepare for and participate in the first review meeting, which is scheduled to take place no later than April 1999. This amount—based on the number of NRC staff needed to prepare for and attend meetings—represents a higher-range estimate of a figure that could be as low as about $700,000. Jackie A. Goff, Senior Attorney | Pursuant to a congressional request, GAO reviewed implementation of the Convention on Nuclear Safety, focusing on: (1) how compliance with the Convention's terms and obligations will be reviewed by the ratifying countries; and (2) the potential costs to the United States to participate in the Convention.
GAO found that: (1) the method to review compliance with the Convention on Nuclear Safety has not been finalized; (2) the Convention does not impose sanctions for noncompliance but seeks to encourage compliance through peer pressure; (3) the Convention relies on each ratifying country to prepare a self-assessment report of its nuclear power program; (4) these reports will, in turn, be reviewed by other member countries at periodic meetings to determine how each country is complying with the Convention; (5) the level of detail to be included in these reports has not been finalized, nor has the process by which countries will critically review these reports been fully determined; (6) as the method is currently envisioned, groups composed of five or six countries would form the core of the review process; (7) the countries with the greatest number of operating nuclear reactors, the United States, France, Japan, the United Kingdom, and Russia, would participate in separate review groups made up primarily of several other countries with operating reactors; (8) although U.S. government officials did not originally favor the country-grouping approach, they believe the United States will have adequate opportunities to review the safety programs of all countries through other mechanisms established by the Convention; (9) the costs associated with the United States' participation in the Convention have not been fully determined; (10) the Nuclear Regulatory Commission (NRC), the Department of State, and the Department of Energy have estimated that it could cost as much as $1.1 million to participate in planning meetings to develop the Convention's policies and procedures, prepare the first U.S. 
self-assessment report, review other countries' reports, and participate in the first review meeting; (11) other costs, a portion of which the United States will incur, associated with the International Atomic Energy Agency's administration of the Convention are less certain but could range up to $10.3 million through the first review meeting, according to a 1993 estimate; (12) NRC officials believe, however, that the actual costs will be significantly less, about $1 million to administer the first review meeting; and (13) the costs for subsequent review meetings have not been estimated. |
Oversight of nursing homes is a shared federal-state responsibility. Based on statutory requirements, CMS defines standards that nursing homes must meet to participate in the Medicare and Medicaid programs and contracts with states to assess whether homes meet these standards through annual surveys and complaint investigations. A range of statutorily defined sanctions is available to CMS and the states to help ensure that homes maintain compliance with federal quality requirements. CMS also is responsible for monitoring the adequacy of state survey activities. Every nursing home receiving Medicare or Medicaid payment must undergo a standard survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. During a standard survey, separate teams of surveyors conduct a comprehensive assessment of federal quality-of-care and fire safety requirements. In contrast, complaint investigations generally focus on a specific allegation regarding resident care or safety. The quality-of-care component of a survey focuses on determining whether (1) the care and services provided meet the assessed needs of the residents and (2) the home is providing adequate quality care, including preventing avoidable pressure sores, weight loss, and accidents. Nursing homes that participate in Medicare and Medicaid are required to periodically assess residents’ care needs in 17 areas, such as mood and behavior, physical functioning, and skin conditions, in order to develop an appropriate plan of care. Such resident assessment data are known as the minimum data set (MDS). To assess the care provided by a nursing home, surveyors select a sample of residents and (1) review data derived from the residents’ MDS assessments and medical records; (2) interview nursing home staff, residents, and family members; and (3) observe care provided to residents during the course of the survey. 
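The standard-survey timing rule described above amounts to a simple two-part check: no home may go more than 15 months without a survey, and the statewide average interval must not exceed 12 months. The sketch below is illustrative only, assuming a plain list of per-home intervals; the function name and data layout are not CMS's.

```python
# Minimal sketch of the standard-survey timing rule: every home must
# be surveyed at least once every 15 months, and the statewide average
# interval between surveys must not exceed 12 months.

def survey_intervals_compliant(intervals_months):
    """intervals_months: months since the last standard survey for
    each Medicare/Medicaid nursing home in a state."""
    if not intervals_months:
        return True  # no homes, nothing to check
    per_home_ok = all(i <= 15 for i in intervals_months)
    statewide_ok = sum(intervals_months) / len(intervals_months) <= 12.0
    return per_home_ok and statewide_ok
```

Note that both conditions must hold: a state could survey every home within 15 months yet still fail the statewide-average test if too many surveys cluster near the 15-month limit.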
CMS establishes specific investigative protocols for state survey teams—generally consisting of RNs, social workers, dieticians, and other specialists—to use in conducting surveys. These procedural instructions are intended to make the on-site surveys thorough and consistent across states. The fire safety component of a survey focuses on a home’s compliance with federal standards for health care facilities. The fire safety standards cover 18 categories ranging from building construction to furnishings. Examples of specific requirements include the use of fire- or smoke-resistant construction materials, the installation and testing of fire alarms and smoke detectors, and the development and routine testing of a fire emergency plan. Most states use fire safety specialists within the same department as the state survey agency to conduct fire safety inspections, but about one-third of states contract with their state fire marshal’s office. Complaint investigations provide an opportunity for state surveyors to intervene promptly if problems arise between standard surveys. Complaints may be filed against a home by a resident, the resident’s family, or a nursing home employee either verbally, via a complaint hotline, or in writing. Surveyors generally follow state procedures when investigating complaints but must comply with certain federal guidelines and time frames. In cases involving resident abuse, such as pushing, slapping, beating, or otherwise assaulting a resident by individuals to whom their care has been entrusted, state survey agencies may notify state or local law enforcement agencies that can initiate criminal investigations. States must maintain a registry of qualified nurse aides, the primary caregivers in nursing homes, that includes any findings that an aide has been responsible for abuse, neglect, or theft of a resident’s property. The inclusion of such a finding constitutes a ban on nursing home employment.
Deficiencies identified during either standard surveys or complaint investigations are classified in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and their severity. An A-level deficiency is the least serious and is isolated in scope, while an L-level deficiency is the most serious and is considered to be widespread in the nursing home (see table 1). States are required to enter information about surveys and complaint investigations, including the scope and severity of deficiencies identified, in CMS’s OSCAR database. Ensuring that documented deficiencies are corrected is a shared federal-state responsibility. CMS imposes sanctions on homes with Medicare or dual Medicare and Medicaid certification on the basis of state referrals. CMS normally accepts a state’s recommendation for sanctions but can modify it. The scope and severity of a deficiency determine the applicable sanctions, which can involve, among other things, requiring training for staff providing care to residents, imposing monetary fines, denying the home Medicare and Medicaid payments for new admissions, and terminating the home from participation in these programs. States are responsible for enforcing standards in homes with Medicaid-only certification—about 14 percent of homes. They may use the federal sanctions or rely on their own state licensure authority and nursing home sanctions. CMS is responsible for overseeing each state survey agency’s performance in ensuring quality of care in nursing homes participating in Medicare or Medicaid. Its primary oversight tools are statutorily required federal monitoring surveys conducted annually in at least 5 percent of the state-surveyed Medicare and Medicaid nursing homes in each state and annual state performance reviews. Federal monitoring surveys can be either comparative or observational.
A comparative survey involves a federal survey team conducting a complete, independent survey of a home within 2 months of the completion of a state’s survey in order to compare and contrast the findings. In an observational survey, one or more federal surveyors accompany a state survey team to a nursing home to observe the team’s performance. Roughly 81 percent of the approximately 800 federal monitoring surveys are observational. Performance reviews examine state survey agency compliance with seven standards: (1) timeliness of the survey, (2) documentation of survey results, (3) quality of state agency investigations and decision making, (4) timeliness of adverse action procedures, (5) budget analysis, (6) timeliness and quality of complaint investigations, and (7) timeliness and accuracy of data entry. CMS’s nursing home survey data show a significant decrease in serious quality problems in recent years, but other information indicates that this trend masks two important and continuing issues: inconsistency in how states conduct surveys and understatement of serious quality problems. OSCAR data continue to show wide interstate variability in the proportion of homes found to have serious deficiencies, suggesting inconsistency in states’ interpretation and application of federal regulations. We previously reported that confusion about the definition of actual harm contributed to inconsistency and understatement in state surveys. Moreover, although federal comparative surveys conducted from October 1998 through December 2004 showed a decline in the proportion of serious deficiencies that were not identified by state surveys, this overall trend masks a more recent increase from 2002 through 2004 in federally identified understatement of serious deficiencies. 
In five large states that we examined, each with a significant decline in the proportion of homes found to have harmed residents, federal comparative surveys found that a significant proportion of state surveys had missed serious deficiencies; that is, state surveyors either failed to cite the deficiencies altogether or cited them at too low a level of scope and severity. From January 1999 through January 2005, the proportion of nursing homes nationwide with actual harm or immediate jeopardy deficiencies declined from about 29 percent to about 16 percent. Figure 1 shows the proportion of homes nationwide with these deficiencies for four consecutive time periods from January 1999 through January 2005. During the 6-year time period, 41 states had a decline in serious deficiencies ranging from about 5 to about 36 percentage points (see app. II). The nationwide data show a decline in nursing homes cited for serious deficiencies; however, the data obscure the continued significant interstate variation in the proportion of homes with serious deficiencies, which suggests inconsistency in how states conduct surveys. Table 2 shows that while 10 states identified serious deficiencies in less than 10 percent of the homes surveyed, 15 states found similar deficiencies in more than 20 percent of homes surveyed from July 2003 through January 2005. For example, during that period California identified actual harm and immediate jeopardy deficiencies in about 6 percent of the state’s nursing homes, while Connecticut found such deficiencies in approximately 54 percent of its facilities. Since January 1999, the proportion of homes with serious deficiencies declined nearly 23 percentage points in California but increased by about 6 percentage points in Connecticut. We discussed the decline in serious deficiencies in the five large states we examined with state survey agency officials and officials from the responsible CMS regional offices.
Officials in four of the five states believed that there had been some improvement in nursing home quality. CMS regional office officials, however, were concerned about the magnitude of the decline in serious deficiencies in two states—Texas and California. The Texas state survey agency noted both some improvement in quality and a significant number of inexperienced surveyors who it believed were hesitant to cite actual harm. The San Francisco regional office and state survey agency officials acknowledged that confusion by state surveyors as to what constituted actual harm had contributed to the decline in California. The regional office staff discussed this issue with California survey agency officials and believed that training combined with the CMS inquiries might have contributed to a recent increase in actual harm deficiency citations. The overall decline in the proportion of federal comparative surveys nationwide that noted serious deficiencies not identified by state surveyors across the three time periods we examined masks a reversal of this trend in the most recent time period analyzed, suggesting ongoing understatement of deficiencies. The time periods analyzed were October 1998 through May 2000, June 2000 through February 2002, and March 2002 through December 2004. From October 1998 through February 2002, the proportion of federal comparative surveys nationwide that noted serious deficiencies that were not identified by state surveyors declined from 34 percent to 22 percent (see fig. 2). However, in the most recent period, March 2002 through December 2004, the proportion of federal surveys finding serious deficiencies not identified by state surveyors increased from 22 percent to 28 percent. In addition, our work in the five states we examined demonstrates continued understatement by state surveyors of serious deficiencies that cause actual harm or immediate jeopardy.
Because some serious deficiencies found by federal, but not state, surveyors may not have existed at the time of the state survey, CMS requires its regional offices to specifically identify on worksheets which deficiencies state surveyors had missed during the state survey. We analyzed CMS regional office worksheets for 73 comparative surveys in five large states—California, Florida, New York, Ohio, and Texas—with a significant decline in serious deficiencies from January 1999 through January 2005. Overall, 18 percent of these federal comparative surveys identified at least one serious deficiency missed by state surveyors, ranging from a low of 8 percent in Ohio to a high of 33 percent in Florida (see table 3). Table 3 also shows that in comparative surveys noting serious deficiencies that state surveyors missed, from one to seven serious deficiencies were missed. Federal surveyors’ findings of understatement of serious deficiencies are consistent with our own work. Our July 2003 report analyzed state surveys of homes with a history of harming residents but whose most recent survey identified quality-of-care problems below the level of actual harm; we concluded that about 40 percent of the 76 homes we analyzed had harmed residents, including instances of severe weight loss; multiple falls resulting in broken bones and other injuries; and serious, avoidable pressure sores. Similarly, our November 2004 report on Arkansas nursing home deaths found numerous instances of serious, understated quality-of-care problems.
Our prior reports identified five factors that we believe contribute to inconsistency and the understatement of deficiencies by state surveyors: (1) weaknesses in CMS’s survey methodology; (2) confusion about the definition of actual harm; (3) predictability of surveys, which allows homes to conceal problems if they so desire; (4) inadequate quality assurance processes at the state level to help detect understatement in the scope and severity of deficiencies; and (5) inexperienced state surveyors due to retention problems. CMS has initiatives under way to revise the survey methodology and address the confusion about what constitutes harm, and it has taken some steps to reduce survey predictability. However, CMS did not implement the recommendation in our July 2003 report to strengthen the ability of state quality assurance processes to detect understatement. While it agreed with the intent of our recommendation, CMS indicated that its state performance standards initiative already incorporated this concept. The status of these initiatives and state workforce issues are discussed in the following section. CMS has addressed many shortcomings in nursing home survey and oversight activities both in response to our recommendations and as a result of its own assessment of needed improvements, but it is still working on key initiatives that have not yet been implemented. Appendix I provides a complete listing of our previous recommendations and the implementation status of CMS initiatives taken in response. Examples of CMS’s initiatives to address shortcomings include (1) revising the survey methodology, (2) issuing states additional guidance to strengthen complaint investigations, (3) implementing immediate sanctions for homes cited for repeat serious violations, and (4) strengthening oversight by conducting assessments of state survey activities. 
CMS also has published information on its Web site about nursing home quality and has engaged independent quality organizations to work with nursing homes to improve quality. Despite CMS’s initiatives in four distinct areas—surveys, complaints, enforcement, and oversight—some initiatives either have not effectively targeted the problems we identified or have shortcomings that impair their effectiveness. Several CMS initiatives are intended to address shortcomings in the survey process, but most of these initiatives are in the developmental stage and have not yet been implemented. In addition, despite CMS’s efforts to make scheduling of surveys less predictable, many remain predictable. (See table 4). In response to our 1998 recommendation to improve the rigor of the survey methodology to help ensure that surveyors do not miss significant care problems, CMS took some interim steps and launched a longer-term initiative. As interim steps, CMS instructed state survey agencies in 1999 to (1) increase the sample of residents reviewed during surveys and (2) review available quality indicator information on the care provided to a home’s residents before actually visiting the home. By using the quality indicators, which are essentially numeric warning signs of the prevalence of care problems, to select a preliminary sample of residents before the on-site review, surveyors are better prepared to target their surveys and to identify potential care problems. Surveyors augment the preliminary sample with additional resident cases once they arrive in the home. For the longer term, CMS awarded a contract in 1998 to revise the methodology used to survey nursing homes, and the agency plans to pilot this new methodology in the fall of 2005. Under development for 7 years, the proposed two-stage, data-driven Quality Indicator Survey (QIS) is intended to systematically target potential problems at nursing homes.
Its expanded sample should help surveyors better assess the scope of any deficiencies identified. In stage 1, a large resident sample will be drawn and relevant data from on- and off-site sources will be analyzed to develop a set of quality-of-care indicators, which will be compared to national benchmarks. Stage 2 will systematically investigate potential quality-of-care concerns identified in stage 1. In June 2005, CMS selected five states to pilot test the new survey methodology. The QIS pilot test will begin during the fall of 2005, with a final evaluation of the pilot due in the fall of 2006. The evaluation will examine the QIS’s cost-effectiveness, focusing on the time and surveyor team size required under QIS compared to the current survey methodology, and on the QIS’s impact on deficiency citations. In developing the QIS, CMS has attempted to prevent increases in the time required to complete surveys. Depending on evaluation findings and any subsequent streamlining of the QIS, national implementation could begin in mid-2007. Since 2001, CMS has been developing surveyor investigative protocols to ensure greater rigor in on-site investigations of specific quality-of-care areas. We recommended in July 2003 that CMS finalize the development of these important protocols; however, CMS is still working on this initiative. In 2001, CMS hired a contractor to facilitate the convening of expert panels for the development and review of these protocols. In November 2004, more than 1 year later than scheduled, CMS implemented a protocol on pressure sores. Since then, CMS has implemented protocols in two other areas—incontinence and medical director qualifications and responsibilities. The protocols provide detailed interpretive guidelines and severity guidance. Protocols in seven more areas are under development, with an issuance target of fall 2005.
To promote increased consistency among states in deficiency citations, a work group of CMS central office, regional office, and state survey agency staff was convened in early 2005 to clarify the definitions of actual harm and immediate jeopardy. Our July 2003 report noted that confusion about the definitions contributed to the understatement of serious deficiencies. According to CMS, the 2005 draft revised definition of actual harm attempts to clarify the existing definition by eliminating confusing language and identifying indicators and examples of actual harm. The draft revised definition of immediate jeopardy is intended to provide additional guidance on documenting whether deficiencies are at the immediate jeopardy severity level, including criteria for identifying whether immediate jeopardy exists, and updates examples of immediate jeopardy. A CMS official indicated that the draft revised definition of immediate jeopardy stresses that action must be taken at once to prevent harm. As of August 2005, CMS had no target issuance date for the revised definitions. CMS is implementing two additional survey initiatives—developing guidance to ensure surveyors are able to report concerns to CMS regional offices and studying surveyors’ use of photographic evidence. To address anecdotal reports that surveyors are sometimes asked to overlook or downgrade survey findings, CMS has issued and is obtaining state comments on draft guidance to ensure that surveyors can cite survey findings without such inappropriate pressure. Currently, surveyors report concerns to the state survey agency. CMS officials indicated that the draft guidance tries to (1) establish a nonthreatening option for voicing concerns to CMS regional office staff without overburdening the regional offices with additional investigations and (2) give CMS a way to identify any patterns of problems. Implementation of this effort is anticipated in late 2005. 
CMS also contracted for a study of the use of photographic evidence by surveyors to support survey findings. In our 2004 report on Arkansas nursing home deaths, we reported that photographs taken by coroners provided key evidence supporting neglect of nursing home residents and the existence of serious, avoidable care problems. The goal of CMS’s study is to identify issues and develop training materials related to surveyors’ use of photographic evidence. This study began in the summer of 2005, with final training materials to be issued in the summer of 2006. In 1998, we reported that nursing homes could mask certain deficiencies if they chose to because of survey predictability. CMS responded by directing states to (1) avoid scheduling a home’s survey for the same month of the year as the home’s previous standard survey and (2) begin at least 10 percent of standard surveys outside the normal workday (either on weekends, early in the morning, or late in the evening). However, our current analysis showed that a significant proportion of state nursing home surveys remain predictable. We consider surveys to be predictable if they are conducted within 15 days of the anniversary of a home’s prior survey. From 2002 to 2005, the proportion of predictable surveys increased from 13 percent to 14.5 percent (see app. III). Overall, 29 states had an increase in survey predictability. As shown in table 5, as of July 2005, from 10 percent to over 50 percent of current nursing home surveys in 35 states were conducted within 15 days of the anniversary of a home’s last standard survey. CMS officials stated that avoiding surveys close to the 12-month anniversary of a home’s prior survey, while still meeting the requirements that surveys occur not less than once every 15 months and that the statewide average interval remain 12 months, could require increased funding because more surveys would need to be accomplished within the first 9 months after a survey.
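The 15-day predictability test described above can be sketched in a few lines. The definition (within 15 days of the anniversary of the prior standard survey) comes from the report; the date handling, including the assumption that the relevant anniversary is the 12-month one, is ours.

```python
from datetime import date

def is_predictable(prior_survey: date, current_survey: date,
                   window_days: int = 15) -> bool:
    """Return True if the current standard survey falls within
    window_days of the one-year anniversary of the prior survey.
    (A Feb. 29 prior-survey date is not handled in this sketch.)"""
    anniversary = prior_survey.replace(year=prior_survey.year + 1)
    return abs((current_survey - anniversary).days) <= window_days

# A survey 10 days after the anniversary counts as predictable;
# one conducted 7 weeks later does not.
print(is_predictable(date(2003, 6, 10), date(2004, 6, 20)))  # True
print(is_predictable(date(2003, 6, 10), date(2004, 8, 1)))   # False
```

Note the tension the officials describe: a survey can avoid this window only by landing well before the anniversary (within 9 months, requiring more frequent surveys) or well after it (pushing toward the 15-month statutory limit while keeping the statewide average at 12 months).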
However, CMS noted that states are not currently funded to conduct surveys within the first 9 months after the previous survey. CMS officials also told us that CMS had introduced the ASPEN Scheduling and Tracking (AST) module for its central and regional offices and the states in February 2004 as a tool to reduce survey predictability; however, state officials we spoke with about AST were unfamiliar with its survey predictability features. CMS has completed certain initiatives to ensure that quality problems found during complaint investigations are promptly addressed and has taken steps to address weaknesses in the notification and investigation of abuse in nursing homes. CMS is continuing work on (1) ensuring state compliance with federal nurse aide registry requirements and (2) assessing the effectiveness of conducting employee background checks. (See table 6). CMS guidance issued since 1999 has helped to strengthen state procedures for investigating complaints. In 1999, we reported that complaints alleging that nursing home residents were being harmed were not being investigated for weeks or months in several states and recommended that CMS develop additional standards for the prompt investigation of serious complaints alleging situations that may harm residents but are categorized as less than immediate jeopardy. CMS promptly instructed states to investigate complaints alleging harm to a resident within 10 workdays of receiving the complaint and later specified that investigations of these complaints be conducted on-site at the nursing home. During 1999, CMS developed and issued guidance intended to help states identify complaints that allege harm to residents. Also in 1999, CMS hired a contractor to study and recommend improvements to state complaint practices. CMS used the findings of this study to develop more detailed guidance for states to help improve the effectiveness of complaint investigations. 
In 2004, CMS issued this guidance to states, which further clarified the 1999 instructions on identifying actual harm. In March 2002, we recommended that CMS ensure that state survey agencies immediately notify local law enforcement agencies or Medicaid Fraud Control Units (MFCU) of allegations or confirmed complaints of abuse. In response, CMS issued a March 2002 letter to CMS regional offices and state survey agencies clarifying its policies on abuse reporting time frames, requirements for reporting to local law enforcement and/or the MFCU, displaying complaint telephone numbers, and citing abuse on surveys. CMS issued additional guidance in December 2004 clarifying nursing home reporting requirements and definitions for alleged violations, including mistreatment, neglect, abuse, injuries of unknown source, and misappropriation of resident property. CMS has not, however, implemented our March 2002 recommendation to accelerate the agency’s campaign to increase public awareness of nursing home abuse through the development and distribution of posters, which are to be prominently displayed in nursing homes, and of other materials. CMS has taken three important steps to improve its oversight of state complaint investigations, including allegations of abuse. First, it required in its annual state performance review, which was established in fiscal year 2001 and fully implemented in fiscal year 2002, that federal surveyors review a sample of complaints in each state to determine whether states properly categorize complaints (i.e., determine how quickly they should be investigated), investigate complaints within the time specified, and properly include the results of investigations in CMS’s database. Our March 1999 report on complaints had recommended that CMS strengthen its oversight in these areas.
During its 2004 review of state performance, CMS identified 5 states that did not meet the standard for properly categorizing complaints and 13 states that did not conduct timely investigations of all complaints alleging immediate jeopardy to residents; however, 11 of the 13 states missed the requirement by a small margin. States failing state performance review standards are asked to submit a corrective action plan to CMS. Second, in January 2004, CMS implemented a new national automated complaint tracking system, the ASPEN Complaints and Incidents Tracking System. Our March 1999 report on enforcement noted that the lack of a national complaint reporting system hindered CMS’s and states’ ability to adequately track the status of complaint investigations as well as CMS’s ability to maintain a full compliance history on each nursing home. To address these concerns, we recommended the development of a better management information system. One goal of CMS’s new management information system is to standardize reported complaints so that analysis can be conducted across all states. This system is intended to provide CMS with an effective tool for overseeing and managing state complaint investigations. Third, in November 2004, CMS requested state survey agency directors to self-assess their states’ compliance with federal requirements for maintaining and operating nurse aide registries, to which states are required to report substantiated findings of abuse, neglect, or theft of nursing home residents’ property by nurse aides. CMS has not issued a formal report of findings from the state self-assessment, but CMS officials noted that as a result of resource constraints some states reported having difficulty maintaining compliance with certain federal requirements, such as (1) timely entry by state survey staff of information in nurse aide registries and (2) state notification to nursing homes employing nurse aides found guilty of abuse at another facility. 
In our March 2002 report, we recommended that CMS shorten the state survey agencies’ time frames for determining whether to include findings of abuse in the nurse aide registry. Annotations to nurse aide registries are made after final determinations that abuse occurred, which entail completion of the state’s investigation as well as adjudication of any appeals. Until the final determination, residents may continue to be exposed to aides who are allegedly abusive. CMS noted that while most of the time frames are defined in regulation, it can review the time frames when regulatory changes are considered. No changes to the regulations had been made as of August 2005. As part of its third effort, CMS also is conducting a Background Check Pilot Program. Our March 2002 report recommended an assessment of state policies and practices for complying with federal requirements prohibiting employment of individuals convicted of abusing nursing home residents. The pilot program will test the effectiveness of state and national fingerprint-based background checks on employees of long-term care facilities, including nursing homes. Pilot programs in seven states— Alaska, Idaho, Illinois, Michigan, Nevada, New Mexico, and Wisconsin— will be phased in from fall 2005 through September 2007. An independent evaluation is planned. CMS significantly strengthened the potential deterrent effect of enforcement actions by requiring immediate sanctions for homes found to have a pattern of harming residents. Moreover, CMS continues to develop new policies and to clarify existing ones in order to strengthen enforcement activities and encourage nursing home compliance with federal requirements. (See table 7). Responding to our July 1998 recommendation to eliminate grace periods for homes cited for repeat serious violations, CMS began a two-stage phase-in of a new enforcement policy. 
In the first stage, effective September 1998, CMS required states to refer for immediate sanction homes found to have a pattern of harming residents or of exposing them to actual harm or potential death or serious injury (H-level deficiencies and above on CMS’s scope and severity grid). Effective January 2000, CMS expanded this policy, requiring referral of homes found to have harmed one or a small number of residents (G-level deficiencies) on successive standard surveys. In response to our 2003 finding that states failed to refer a substantial number of homes that met the criteria for the immediate sanctions, CMS initiated oversight of state compliance with this policy. To conduct this oversight, CMS analyzed deficiency data for 2000 through 2003 to identify potential instances of homes that should have been but were not referred for immediate sanctions. In ongoing work, we are assessing the impact and implementation of the immediate sanctions policy. Based on recommendations in our July 1998 report and our March 1999 report on enforcement, CMS has addressed weaknesses in its policies in three areas: nursing homes’ correction of deficiencies, the nursing home appeals process, and the enforcement data tracking system. CMS now requires on-site follow-up, referred to as a revisit, of homes with substandard quality of care or actual harm or higher-level deficiencies until the state verifies correction of each deficiency cited. Our 1998 report found that CMS’s policy of allowing nursing homes to self-report resumed compliance was sometimes inappropriately applied to homes with deficiencies in the immediate jeopardy category or that were found to have substandard quality of care. We recommended that CMS require that for homes with recurring serious violations, state surveyors substantiate resumed compliance by means of an on-site revisit. 
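As a sketch, the two-stage immediate sanctions policy described above can be expressed as a check over the worst deficiency cited on each of a home's standard surveys. The letter ordering follows the scope-and-severity grid (G: harm to one or a small number of residents; H and above: a pattern of harm or worse); treating a prior-survey deficiency at G level or above as satisfying the successive-survey condition is our interpretation, not language quoted from the policy.

```python
# Sketch of the immediate-sanction referral criteria (assumptions noted above).
GRID = "ABCDEFGHIJKL"  # least to most serious scope/severity categories

def requires_immediate_sanction(worst_per_survey: list) -> bool:
    """worst_per_survey: the worst deficiency letter cited on each
    standard survey, oldest first; the last entry is the current survey."""
    current = worst_per_survey[-1]
    # Stage 1 (effective September 1998): a pattern of harm or of exposing
    # residents to potential death or serious injury (H level and above)
    # triggers referral on a single survey.
    if GRID.index(current) >= GRID.index("H"):
        return True
    # Stage 2 (effective January 2000): harm to one or a small number of
    # residents (G level) on successive standard surveys also triggers referral.
    if current == "G" and len(worst_per_survey) >= 2:
        return GRID.index(worst_per_survey[-2]) >= GRID.index("G")
    return False

print(requires_immediate_sanction(["D", "H"]))  # True
print(requires_immediate_sanction(["G", "G"]))  # True
print(requires_immediate_sanction(["D", "G"]))  # False
```

The 2003 finding discussed above concerned homes for which a check like this returned True but no state referral was made.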
CMS also has issued additional guidance on the “reasonable assurance period” during which terminated homes must demonstrate that they have corrected the deficiencies that led to their terminations. This guidance provided additional examples of reasonable assurance decisions. CMS and the Department of Health and Human Services (HHS) requested and received funding and staffing increases for the HHS Departmental Appeals Board in fiscal years 1999 and 2000 to address our March 1999 finding that the growing backlog of appeals hampered the effectiveness of civil money penalties by delaying their collection. The Board is responsible for adjudicating the appeals. By August 2003, the backlog of appeals of civil money penalties had been significantly reduced. CMS implemented the automated ASPEN Enforcement Manager on October 1, 2004, to facilitate tracking of enforcement actions. Prior to implementing this system, CMS had no centralized system for tracking or managing federal and state enforcement actions. The ASPEN Enforcement Manager is intended to provide real-time entry and tracking of enforcement actions, issue monitoring alerts, generate enforcement letters, and facilitate analysis of enforcement patterns. CMS expects that ASPEN Enforcement Manager data will enable states, CMS regional offices, and the CMS central office to more easily track and evaluate nursing home performance and compliance status as well as respond to emerging issues. In ongoing work, we are assessing whether data from the ASPEN Enforcement Manager can be used to analyze nursing homes’ deficiency and enforcement histories. In December 2004, CMS revised the method for selecting nursing homes for the Special Focus Facility Program to ensure that the most poorly performing homes were included in the program and to strengthen enforcement for those nursing homes with an ongoing pattern of substandard care. 
For this program, first initiated in January 1999, states were directed to select two nursing homes to be special focus facilities, conduct two standard surveys each year in the special focus facilities, and submit monthly status reports on the selected homes. The revised guidance directs states to select, from an expanded list of facilities, a minimum number of homes, up to six depending on the number of nursing homes in the state; the revised guidance gives states the option to select more than the minimum. States are also given the flexibility to remove from the list homes that have made significant improvements. Enforcement authority over special focus facilities has been strengthened so that while homes are in the Special Focus Facility Program, immediate sanctions must be imposed if homes fail to significantly improve performance from one survey to the next; termination from participation in Medicare and Medicaid is required for homes with no significant improvement in 18 months and three surveys. In April 2004, CMS launched a Civil Money Penalty Improvement Project to improve its ability to track and collect civil money penalties in an effort to make them a more effective enforcement tool. CMS mapped out the current process for tracking and collecting civil money penalties to identify weaknesses and developed draft guidance with detailed policies and procedures for addressing areas identified as needing improvement, with a target release date of fall 2005. Also planned are enhancements to the Civil Money Penalty Tracking System, CMS’s information system for civil money penalties. The enhancements are intended to streamline the system, improve its reporting capabilities, and improve its compatibility with the enforcement monitoring system. These changes are planned to occur during 2005 and 2006.
Also in 2004, CMS, in conjunction with various state survey agencies, began developing a civil money penalty grid—an optional guideline for use by states and CMS regional offices to help ensure greater consistency across states in the amounts of civil money penalties recommended. The grid is expected to provide ranges for minimum civil money penalties for deficiencies, while allowing for flexibility to adjust the penalties on the basis of factors such as the severity of an identified deficiency, the care areas in which deficiencies were cited, and past history of noncompliance. The target issuance date for a draft grid was August 2005. In October 2005, CMS issued a revised past noncompliance policy that (1) clarifies how to address recently identified past deficiencies, (2) further defines “past noncompliance,” (3) eliminates the use of the term “egregious,” and (4) clarifies the methods for determining whether past noncompliance has been corrected. Past noncompliance occurs when a current survey reveals no deficiencies but determines that an egregious violation of federal standards occurred in the past and was not identified during an earlier survey. In November 2004, we reported that CMS’s past noncompliance policy was ambiguous. The policy did not define what constituted an egregious violation or relate egregious violations to its scope and severity grid. Moreover, the policy did not hold homes accountable for negligence associated with resident deaths unless current residents were experiencing the same quality-of-care problems, and it obscured the nature of care problems. CMS’s revised policy responds to our recommendation and holds homes accountable for all past noncompliance resulting in harm to residents. We also recommended that past noncompliance citations identify the specific nature of the care problem in the OSCAR database and on the Nursing Home Compare Web site.
In 2007, CMS plans to enhance the information on the Nursing Home Compare Web site to include the specific nature of the past noncompliance. According to CMS officials, the delay is related to the implementation of higher priority initiatives by the agency. Currently, the Web site only indicates whether there were instances of past noncompliance and does not identify the nature of the care deficiency. CMS has significantly increased the intensity and scope of its oversight activities and has improved both its data systems and its analysis and use of the data it collects on survey activities. The effectiveness of several of these oversight initiatives, however, is uneven, and more work remains to be done. (See table 8). In response to recommendations in our November 1999 and July 2004 reports, CMS has (1) significantly increased the number of federal comparative surveys both for quality of care and fire safety and (2) decreased the time between the end of the state survey and the start of the federal survey for quality-of-care comparative surveys, allowing CMS to better distinguish between serious problems missed by state surveyors and changes in a home that occurred after the state survey. We found earlier that CMS was making negligible use of comparative surveys, its most effective tool for assessing a state survey agency’s ability to identify serious quality-of-care and fire safety deficiencies in a nursing home, to fulfill its 5 percent monitoring mandate. Only 21 quality-of-care comparative surveys were conducted from November 1996 through October 1998. Our 2004 fire safety report found that CMS had conducted only 40 fire safety comparative surveys in fiscal year 2003, ranging from 4 in some states to none in others. Since 2001, CMS has required its regional offices to complete at least two quality-of-care comparative surveys per state per year, and federal surveyors have been exceeding this minimum threshold.
During the period March 1, 2002, through December 31, 2004, CMS completed 424 comparative surveys, about 140 per year. In addition, the average elapsed time between state and comparative surveys has decreased from 33 calendar days for the 64 comparative surveys we reviewed in 1999 to 26 calendar days for the 424 surveys completed through 2004. CMS planned to further increase the number of comparative surveys by contracting in the fall of 2003 for 170 quality-of-care comparative surveys in addition to those conducted by federal surveyors. However, an increase in the number of quality-of-care comparative surveys is unlikely because of delays in contractor readiness and the addition of fire safety comparative surveys to the contract. CMS had expected to have a sufficient number of contract surveyors trained and available to start surveys by the winter of 2005, but it took longer than anticipated to train the new surveyors. In addition, CMS modified the contract to include fire safety comparative surveys. In fiscal year 2005, the contractor conducted 34 quality-of-care comparative surveys and 250 fire safety comparative surveys. Together, the contractor and CMS regional offices conducted a total of 859 fire safety comparative surveys in fiscal year 2005. CMS also is using the contract surveyors to augment federal survey teams. According to CMS, it will use contract funds carried over from earlier years to conduct quality-of-care comparative surveys during fiscal year 2006 and will use fiscal year 2006 funds only to conduct fire safety comparative surveys. In response to a recommendation in our July 2004 report to strengthen fire safety standards, CMS published an interim final rule in March 2005 requiring nonsprinklered nursing homes to install battery-powered smoke detectors in resident rooms and common areas, including resident dining, activity, and meeting rooms.
Previously, federal standards required smoke detectors in (1) corridors or resident rooms only in homes built after 1981 and (2) nonsprinklered resident rooms containing furniture brought from the resident’s home. We reported that the lack of smoke detectors in resident rooms may delay staff response and fire department notification, which in turn may increase the number of nursing home fire-related fatalities. CMS will begin surveying nursing homes’ compliance with the new requirement in May 2006. In October 2000, CMS regional offices began conducting on-site state performance reviews to assess compliance with federal standards. Previously, CMS permitted states to evaluate and report on their own performance against a number of standards, a technique that essentially allowed states to write their own report cards because CMS did not independently validate information provided by the states. In fiscal year 2005, CMS began to tie funding increases for state survey agencies to one of the seven performance standards: the timely conduct of standard surveys, for which time frames are established in federal statute. Nevertheless, in our current analysis of the standard that is intended to measure the supportability of survey findings, we found that three key issues we identified in July 2003 still exist. First, distinctions in state performance were hard to identify because, while some states have consistently met the standard for documentation of deficiencies, federal comparative surveys completed during essentially the same time frame found that surveyors in these states frequently missed serious deficiencies. Second, CMS regional offices were inconsistent in conducting state performance reviews. For fiscal year 2004, five states nationwide did not meet this standard, but three of the five states were in one CMS region. Third, the standard for assessing the supportability of deficiencies is composed of 11 elements that mix major and minor issues.
Although CMS has simplified the standard for assessing the supportability of deficiencies, we believe that many of the elements reviewed remain essentially administrative in nature rather than substantive. Of the elements that make up the standard, only 2 assess the appropriateness of the cited scope and severity; the remaining elements assess such issues as how the deficiency is written, including avoiding the use of the passive voice. We do not believe that this standard is sufficiently focused on identifying understatement. CMS did not implement our July 2003 recommendation that it require states to review a sample of deficiencies cited at or below the level of actual harm in order to detect understatement because, according to CMS, the state performance review of the supportability of deficiencies already accomplished this objective. In discussing our current findings regarding the standard intended to measure the supportability of survey findings, CMS officials agreed that (1) measuring the quality of state surveys, one goal of reviewing the supportability of deficiencies, was particularly challenging because there is no one agreed-upon way to measure quality; and (2) some standards are complex, contributing to consistency problems. In developing this report, we also noted two additional problems with the state performance reviews that were not previously reported. First, in its fiscal year 2004 review, CMS began combining state performance review results across the different provider types, such as nursing homes and home health agencies, for which states have oversight responsibility. For example, CMS calculates one overall state score on the supportability of deficiencies across provider types, rather than issuing provider-specific scores. One CMS region suggested that because nursing homes are generally surveyed by a unique pool of surveyors, combining results in this manner limits the usefulness of the feedback to state survey agencies. 
Second, CMS provides feedback to states regarding their performance each year, but it does not publicly report the results. Doing so would appear to be consistent with CMS’s stated philosophy of sharing information with the public to help improve nursing home quality. CMS has pursued important upgrades in the system used to track the results of state survey activities and has increased its analysis of OSCAR and other data to improve oversight by CMS central and regional offices and state survey agencies. Examples include the following: In 2000, CMS began to produce 19 periodic reports to monitor both state and regional office performance. Some reports, such as those on survey timeliness, are used during state performance reviews, while others are intended to help identify problems or inconsistencies in state survey activities and the need for intervention. In 2001, 2002, and 2005, CMS published a “Nursing Home Data Compendium,” which includes detailed tables and figures on nursing homes, resident demographics, resident clinical characteristics, and survey results. In 2004, CMS commissioned a series of “White Papers” on topics ranging from enforcement to resource issues. The goal was to stimulate discussion among key stakeholders and generate ideas for “next steps” to help mitigate problems. The reports, authored by CMS and state survey agency staff, relied on data analysis from OSCAR and other CMS databases. In 2004, CMS prepared an internal study on enforcement trends since the imposition of the immediate sanctions policy using data from the Enforcement Tracking System. In 2005, CMS unveiled a Web site for use by regional offices and state survey agencies that generates a series of standard reports through a software program called Providing Data Quickly; this software permits easier access to the data contained in OSCAR. One such report identifies homes that have repeatedly harmed residents and meet the criteria for imposition of immediate sanctions.
CMS indicated that it is continuing to make progress in redesigning the OSCAR system. In our March 1999 report on enforcement, we recommended that the agency develop an improved management information system that would help it to track the status and history of deficiencies, integrate the results of complaint investigations, and monitor enforcement actions. Although the target implementation date for the redesigned system has slipped from 2005 to 2008, depending on competing priorities and available funding, CMS has implemented two key components of the redesigned system—a complaint tracking system and a system to track the status of enforcement actions. Both systems are intended to provide CMS with critical management capabilities that it previously lacked. Using market forces to help drive quality improvement is an important CMS objective behind sharing data with the public on nursing home quality. Since CMS launched Nursing Home Compare in 1998, the agency has progressively expanded the information available on this Web site. In addition to data on the deficiencies identified during standard surveys, the Web site now includes data on the results of complaint investigations, information on nursing home staffing levels, and quality indicators, such as the percentage of residents with pressure sores. However, CMS continues to address ongoing problems with the accuracy and reliability of the underlying data, such as the MDS, quality indicators, and nurse staffing levels. In February 2002, we concluded that CMS efforts to ensure the accuracy of the underlying MDS data used to calculate the quality indicators (1) relied too much on off-site review activities by its contractor and (2) anticipated on-site reviews in only 10 percent of its data accuracy assessments, representing fewer than 200 of the nation’s nursing homes.
CMS did not concur with our recommendation that it reorient its review program to complement ongoing state MDS accuracy efforts as a more effective and efficient way to ensure MDS data accuracy. CMS commented that its efforts already provided adequate oversight of state activities and complemented state efforts. In April 2005, CMS ended work under its data assessment and verification contract because of cost concerns, but signed a new contract in September 2005 that focuses on on-site reviews of MDS accuracy. According to CMS officials, the on-site reviews were more effective in identifying discrepancies because the reviewers were able to find more information on-site that conflicted with the nursing homes’ assessments. In November 2002, CMS began reporting on its Web site quality indicator data for each nursing home nationwide that participates in Medicare and Medicaid, even though our October 2002 report concluded that such reporting was premature given serious questions about the sufficiency of CMS efforts to validate the quality indicators and improve the accuracy of the underlying data. CMS disagreed with our recommendation to postpone its scheduled November 2002 public reporting of the data until these problems were addressed. Since 2002, however, CMS has taken steps to address the questions we raised about the validity of quality indicators. For example, CMS dropped certain quality indicators that it found were not sufficiently reliable for public reporting, such as the facility-adjusted profile prevalence of pressure sores. In addition, CMS worked with the National Quality Forum to address measurement problems with the pressure sore quality indicator by developing separate indicators for short- and long-term nursing home residents; these new indicators were added to the Web site in January 2004. A weight loss quality indicator also was developed and added to the Web site in November 2004. 
Our October 2002 report had noted the potential for consumer confusion in interpreting and using quality indicator data. CMS conducted consumer testing of new language and displays on Nursing Home Compare during the summer of 2004. Although nursing home staffing data have been available on the Nursing Home Compare Web site since June 2000, a CMS official told us that the agency has been aware of problems with these self-reported data since the late 1990s. This official stressed that, despite problems, they were the only available data on nursing home staffing. Examples of erroneously reported data include facilities reporting no nurse staffing hours or staffing levels equal to thousands of hours per resident per day. In addition, the staffing data do not address important issues such as turnover or retention. As a temporary fix, CMS developed edits that examine staffing ratios to determine whether any facility falls above or below certain thresholds and, effective July 2005, temporarily excluded the questionable staffing data from Nursing Home Compare until they can be corrected or confirmed. To address this issue, CMS is considering a proposal for a new system that relies on nursing home payroll data. If approved, such a system could take 3 to 4 years to implement because of the need to solicit and consider public comment and to develop software to transmit the staffing data. CMS’s initiative to include quality indicator data on its Nursing Home Compare Web site also established a new role for Quality Improvement Organizations (QIO) with regard to nursing homes. From 2002 through 2005, QIOs worked intensively with at least 10 percent of nursing homes in each state to improve quality.
Although we have not evaluated QIO nursing home quality improvement activities, CMS’s preliminary analyses indicate that the QIO program has helped to reduce the use of daily physical restraints, increase the management and treatment of pain, and reduce the incidence of delirium among post-acute-care residents. However, less progress has been made in decreasing the prevalence of pressure sores, according to CMS’s analyses. In August 2004, the QIOs and state survey agencies in 18 states launched a new pilot program. Working together, they identified from one to five nursing homes per state that had significant quality problems. The QIOs then worked with these homes to help them redesign their clinical practices. According to CMS, the results of this pilot indicated that these historically “troubled” nursing homes had dramatically improved their clinical quality and decreased their quality-of-care survey deficiencies. In 2005, the QIOs’ role with nursing homes was extended for an additional 3 years, and QIOs will continue to focus on statewide improvement in four areas—pressure sores, physical restraints, pain management, and depression. In addition, QIOs will help nursing homes set individual targets for quality improvement, implement and document process-related clinical care, and assist in the development of a more resident-focused care model. QIO expenditures on nursing home quality improvement for the period of August 2002 through July 2008 are expected to total about $216 million. CMS has taken certain actions to maximize the experience and resources of state survey agencies as well as the CMS central and regional offices to improve nursing home oversight. Specifically, in 2004, CMS convened an internal Long-Term Care Task Force and charged it with providing guidance on and coordinating long-term care efforts within CMS; the task force included representation from across the agency’s divisions and the regional offices.
Also in 2004, CMS began an effort to collect and disseminate nursing home survey and certification best practices developed by professional associations, universities, and federal agencies. Through the best practices effort, CMS plans to share successful strategies used by states and regional offices on a broad range of issues affecting survey and certification of nursing homes, such as surveyor recruitment and complaint intake. A contractor will identify, research, and document best practices, which CMS plans to post on its Web site. One of the issues the best practices effort will address is surveyor recruitment initiatives underway in states. As of August 2005, these best practices had not been published on the CMS Web site. CMS, states, and nursing homes face a number of key challenges in their efforts to further improve nursing home quality and safety, including (1) the cost of retrofitting older nursing homes with automatic sprinklers, which have a demonstrated ability to prevent deaths in the event of a fire but are potentially costly to install; (2) continuing problems in hiring and retaining qualified surveyors, a factor that states indicated can contribute to variability in the citation of serious deficiencies; and (3) an increasing federal and state survey workload due to increased oversight, the identification over time of additional initiatives, and growth in the number of Medicare and Medicaid providers that must be surveyed, including expected growth in nursing homes. The increased workload has created competition for both staff and financial resources and required the establishment of priorities, which may have contributed to delays in developing and implementing several key quality initiatives, such as a more rigorous survey methodology.
Although the substantial loss of life in two 2003 nursing home fires could have been reduced or eliminated by the presence of properly functioning automatic sprinkler systems, cost has been an impediment to CMS’s requiring them for all homes nationwide. Newly constructed homes must incorporate sprinkler systems; however, older homes constructed with noncombustible materials that have a certain minimum ability to resist fire are not required to install sprinklers. We previously reported that cost has been a barrier to requiring sprinklers for all older nursing homes. In July 2005, the National Fire Protection Association (NFPA) voted to require retrofitting of older homes with sprinklers, a requirement that will become a part of the 2006 edition of the NFPA code. Anticipating this action, CMS indicated that it has been developing a notice of proposed rule making, the first step in adopting the NFPA requirement for all homes that serve Medicare and Medicaid beneficiaries. A CMS official stated that the agency plans to issue the notice in March 2006 and after reviewing public comments, it will publish a final version of the rule and stipulate an effective date for homes to come into compliance. One issue that remains unresolved is how much time older homes will be given to install sprinklers. As we reported in 2004, industry officials believe that a transition period must be considered for homes to come into compliance and to determine how to pay for the cost of installing sprinklers. Rather than proposing a phase-in period, the proposed rule will request input on how much time homes should be given to come into compliance with the requirement. According to CMS, a longer phase-in period could help alleviate concerns about the cost of retrofitting homes with sprinklers. Based on our recommendation, CMS collected data on the sprinkler status of homes nationwide and found that about 21 percent of nursing homes are unsprinklered or partially sprinklered. 
Although CMS has not completed its cost analysis, the agency believes that the costs associated with the retrofit will be less than the industry’s $1 billion estimate. The hiring and retention of surveyors, particularly RNs, remains a major, frequently discussed issue among state survey agency directors, according to an official of AHFSA, the association that represents state survey agency directors. In July 2003, we reported that the limited experience level of state surveyors because of a high turnover rate was a contributing factor to (1) variability in citing actual harm or higher-level deficiencies and (2) understatement of such deficiencies. In more than half of the 42 states that responded to our inquiry, from 30 percent to more than 50 percent of surveyors had 2 years’ experience or less, as of July 2002. Twenty-five states responded to our request for updated information on surveyor workforce issues as of July 2005. Of 23 states that provided data in both 2002 and 2005, 13 reported an improvement in 2005 (i.e., a decline in the proportion of inexperienced surveyors); 9 indicated that the situation had worsened (i.e., an increase in the proportion of inexperienced surveyors); and 1 state reported no change (see app. IV). As of July 2005, however, 20 percent or more of surveyors in 20 of the 25 states had 2 years’ experience or less (see table 9). Surveyor vacancy rates in the 25 states ranged from about 3 percent in Tennessee to 31 percent in Alabama and Florida; overall, 15 states had double-digit vacancy rates. Officials in 18 states believed that inexperienced surveyors contributed to interstate variability in the citation of serious deficiencies. One state survey agency indicated that staff attrition resulted in a workforce of less experienced surveyors who demonstrated a hesitance to cite actual harm and contributed to understatement.
State survey agency officials in several states, however, suggested that the problem for less-experienced surveyors was not identifying harm but rather investigating and documenting the circumstances that led to the harm, including facility culpability, a skill that surveyors develop as they gain more experience. Because state survey agency salaries are rarely competitive with the private sector, state survey agencies told us that it is difficult to retain surveyors and to fill vacancies. RNs, a major component of states’ surveyor workforce, are in high demand and short supply, according to AHFSA. Furthermore, 9 states responding to our July 2005 inquiry indicated that state civil service requirements can make it more difficult to fill vacancies. Several of the 9 states characterized the hiring process as cumbersome, time-consuming, or both, and 1 state noted that the process takes close to 9 months. Two states reported that they had to select candidates to interview from a certified list. One of the states indicated that the certified list often contained unqualified applicants, while the other state noted that some of the applicants were not the “best fit.” Of the 25 states, 21 indicated that they had implemented initiatives to help retain surveyors. The most popular retention strategies were to increase starting salaries and to implement flexible surveyor work schedules. For example, New York instituted a locality pay differential for New York City. While 5 of the 25 states indicated that they had a state-imposed hiring freeze, 1 state reported that budget pressures prevented it from taking steps to improve retention rates. A continuing problem cited by AHFSA is that federal funds are distributed late in the fiscal year, which does not tie into state budget cycles for approving additional positions. This problem may be particularly acute in the 5 states that reported having a hiring freeze.
CMS and states have experienced increased survey workloads due to the greater intensity of nursing home oversight, the increasing number of initiatives, and growth in the number of Medicare and Medicaid providers requiring oversight. This workload growth has required the prioritization of initiatives, which in some cases has delayed the implementation of key initiatives. The consensus-building process necessary to bring initiatives to fruition also has contributed to some delays. The initiatives likely will continue to compete for priority with other CMS programs, posing a challenge for efforts to further improve nursing home quality and safety. Greater nursing home oversight has increased demand on both CMS and state survey agency resources, causing delays for some key initiatives. CMS’s increased workload is evident in the labor-intensive state performance reviews. Since their introduction in October 2000, the reviews have been gradually expanded from nursing homes to several other Medicare and Medicaid providers, such as home health agencies and hospitals. CMS also has significantly increased the number of federal quality-of-care and fire safety comparative surveys. Such surveys are more labor-intensive than the alternative type of federal monitoring surveys, known as observational surveys, because they require an entire federal survey team rather than a smaller number of federal surveyors. The agency also has committed considerable resources to developing new data systems for complaints and enforcement actions while simultaneously increasing its use of available data to further improve federal and state oversight. Despite the increased workload, CMS implemented survey staff reductions of 5 percent in regional offices and 3 percent in its central office in January 2004. As of August 2005, these staff reductions have remained in effect.
As their workloads grew with the implementation of the initiatives, state survey agencies also experienced resource pressures. States are now required to conduct on-site revisits to ensure serious deficiencies have been corrected, investigate complaints alleging actual harm on-site and do so more promptly, and initiate off-hour standard surveys. Thus, surveyors’ presence in nursing homes has increased and surveyors’ work hours have effectively been expanded to weekends, evenings, and early mornings. The requirement to impose immediate sanctions on homes that repeatedly harm residents also has had a workload impact because in the past a grace period allowed homes to correct deficiencies before the sanctions went into effect. The imposition of immediate sanctions requires states to track the homes that must be referred for immediate sanctions (a task that some states perform manually) and requires CMS and states to act to impose recommended sanctions that in the past would have been rescinded because the homes could have corrected the deficiencies during a grace period. While states’ budget pressures appear to be easing, many state survey agencies reported hiring freezes, staff vacancies, or high turnover as of July 2002, when all of these initiatives had already been fully implemented. The number of initiatives that CMS has implemented on its own has grown, further increasing its workload. For example, CMS added quality indicator data to its Nursing Home Compare Web site and has involved QIOs in helping nursing homes to improve quality of care. In addition, CMS created a task force to develop guidance intended to improve consistency across states in the imposition of civil money penalties.
The number of nursing home initiatives simultaneously under development or being implemented, as well as other CMS responsibilities, such as preparing to implement the new Medicare prescription drug benefit in January 2006, has necessitated the establishment of priorities and led to delays and queues. CMS assigned some initiatives, such as the development and public reporting of quality indicators, a high priority and implemented them swiftly despite issues related to their validity and the quality of the underlying data—problems that CMS is still working to address. In contrast, the revision of the survey process has encountered delays because of funding shortfalls and has been in process for 7 years. For example, initial testing of the new methodology in 2002 and 2003 was limited, even though CMS had already invested $4.7 million in its development from initiation in 1999 through September 2003. A pilot test of the new methodology is scheduled to begin in the fall of 2005; depending on the results of the testing, implementation could begin in mid-2007. Although CMS attaches a high priority to enhancing the information available to the public on nursing home quality and safety, adding information on past noncompliance and on the fire safety status of nursing homes is in a queue behind the programming required to implement higher-priority projects. There is also a regulatory queue, with other, higher-priority regulations ahead of the notice of proposed rule making to require retrofitting of nursing homes with automatic sprinklers. Delays in implementing the nursing home initiatives are also attributable to CMS’s need to be responsive to stakeholder input. Appropriately, CMS seeks input from various stakeholders such as states, regional offices, the nursing home industry, and resident advocates. For example, CMS sought input from experts in developing investigative protocols for surveyors.
Due to this lengthy consultative process, combined with the prolonged delays stemming from internal disagreement over the structure of the process during the initial stages, CMS has implemented only two investigative protocols since 2001. Likewise, implementation of the ASPEN Complaint Tracking System was delayed because during the system’s pilot test, several states indicated their belief that their existing systems were superior and opposed the idea of either abandoning these systems or maintaining separate systems. Both the overall growth in providers and the anticipated growth in nursing homes pose additional workload challenges for CMS and states. In addition to nursing homes, CMS and states are responsible for surveys of other Medicare and Medicaid providers, such as home health agencies and hospitals. The number of these providers grew from 39,651 in October 2000 to 45,375 in January 2005, an increase of approximately 14 percent. While the number of nursing homes decreased slightly during the same period, from 17,012 to 16,146, the rate of decline has slowed. As the baby boom generation ages, increasing the number of elderly needing long-term care services, the number of nursing homes is expected to grow to meet the demand. In 2000, 35.1 million people were aged 65 or older. This number is expected to grow to about 54.7 million by 2020. Nursing home survey activities consume the majority of state survey budgets and resources. Nursing homes make up about 31 percent of Medicare and Medicaid providers, but account for 73 percent of the federal budget for oversight of such providers. The funding for nursing home surveys is disproportionate because the time frames for standard nursing home surveys are statutory. For those survey requirements not in statute, CMS determines the survey time frames; these surveys are therefore a lower priority.
Even among nursing home survey activities, however, annual standard surveys are considered a higher priority than complaint surveys or initial surveys, for which the statute does not dictate specific time frames. CMS and state survey agency officials recognize that CMS may have shifted its focus and resources to nursing homes at the expense of adequate oversight of other providers serving Medicare and Medicaid beneficiaries. In addition, some states contend that the focus on nursing home standard surveys has hampered their ability to investigate nursing home complaints within mandated time frames. For example, according to a California state survey agency official, California law mandates that all nursing home resident complaints, not just complaints alleging actual harm, be investigated within 10 days. Likewise, an official from the Pennsylvania state survey agency stated that in Pennsylvania, all complaints must be investigated within 48 hours. California survey agency officials have told us that a complaint alleging a care problem deserves a higher priority than a standard survey, which may or may not identify deficiencies. According to CMS officials, key nursing home initiatives continue to compete for priority with other CMS projects. Examples of nursing home initiatives that have been affected include revision and testing of the new survey methodology, continued development of the investigative protocols that surveyors use to investigate care problems, and an increase in the number of quality-of-care comparative surveys. Revised survey methodology. CMS officials have indicated that nationwide implementation of the revised survey methodology could be affected if its use requires additional survey time or a greater number of surveyors to conduct each survey. The pilot test of the new methodology, scheduled for 2005 and 2006, includes an examination of steps to streamline the revised process, if necessary.
Cost considerations limited the pilot of the new methodology to fewer states than the 20 that volunteered. Investigative protocols for quality-of-care problems. Only three sets of investigative protocols had been implemented as of November 2005, and it is unclear whether the contractor’s assessment of the protocols’ effectiveness can be completed before the contract ends in 2006. Furthermore, unless the contract for the investigative protocols is re-bid, CMS expects to return to the traditional revision process even though agency staff believe that the expert panel process used under the contract produced a high-quality product. Federal comparative surveys. CMS hired a contractor in 2003 to further increase the number of federal quality-of-care comparative surveys, but dropped funding for quality-of-care comparative surveys from the fiscal year 2006 contract. The agency reallocated the funds to help state survey agencies meet the increased survey workload resulting from growth in the number of other Medicare providers. CMS has focused considerable attention since 1998 on addressing weaknesses in state and federal oversight activities in order to better care for and protect nursing home residents. The agency has implemented many important improvements in the areas of surveys, complaints, enforcement, and oversight, such as taking steps to address survey predictability, issuing additional guidance to ensure timely on-site investigations of complaints alleging harm to residents, implementing an immediate sanctions policy to eliminate grace periods for homes cited for repeat serious violations, and strengthening oversight by conducting assessments of state survey activities. However, some key activities are still in process. For example, CMS’s effort to revise the survey methodology has been underway for 7 years. 
Given the pivotal role played by surveys in helping to ensure that nursing home residents receive high-quality care, the development and implementation of a more rigorous survey methodology is one of the most important contributions CMS can make to addressing oversight weaknesses. Certain other initiatives, such as sharing data with the public in an effort to use market forces to drive quality improvement, also remain in process. Since launching Nursing Home Compare in 1998, CMS has been aware of accuracy and reliability issues with the underlying data and began changing its approach to data integrity in 2005. The agency is working to address issues concerning data on nursing home staffing that compelled it to temporarily exclude questionable data from its Web site in July 2005 until its accuracy can be verified. Because consumers use these data to make decisions about nursing home care, ensuring the accuracy, reliability, and timeliness of nursing home quality data is critical. Even with CMS's increased efforts to improve nursing home quality, the agency's continued attention and commitment to these efforts is essential in order to maintain and build upon the momentum of its accomplishments to date. We provided CMS a draft of this report for review. CMS generally concurred with our findings, noting that progress has been made in many areas such as surveys and complaint investigations, oversight activities, and citation of serious deficiencies, but that challenges remain. (CMS's comments are reproduced in app. V.) CMS also provided technical comments, which we included in the report as appropriate. We also provided the five states we contacted an opportunity to review the portion of the draft focused on trends in nursing home quality. California, Florida, Ohio, New York, and Texas provided written comments.
California’s comments focused on clarifying its experience seeking CMS guidance on the definition of actual harm, but did not state whether it agreed with our findings. Ohio commented that our report’s findings related to continued inconsistency and understatement of serious deficiencies by state surveyors did not apply to its state survey agency. New York stated that including a more detailed description of states’ efforts to improve nursing home quality would provide a more balanced view of the reasons for the decline in serious deficiencies. Florida and Texas generally concurred, but Texas did not provide specific comments. CMS and states’ specific comments focused primarily on four issues: understatement of serious deficiencies, the definition of actual harm, data availability, and challenges to conducting nursing home survey and oversight activities. CMS commented that it remains concerned about the possible understatement or omission of serious deficiencies, but that it did not believe that understatement caused the decline in serious nursing home deficiencies or that understatement was worsening. CMS noted its efforts to work with states that fail to improve their ability to identify deficiencies, such as withholding funding increases until corrective action plans are developed. Florida, New York, and Ohio similarly commented that efforts such as their states’ quality improvement initiatives, regulatory changes to improve nursing home operations, and engagement of the provider community have contributed to the decline. CMS suggested that including the results of observational surveys in our analysis of the percentage of federal surveys that found serious deficiencies missed by states would show that the percentage remained relatively constant from 2002 to 2004 rather than increasing.
As we noted in our 1999 report, however, comparative surveys are more effective than observational surveys in identifying serious deficiencies missed by state surveyors because they are the only oversight tool that provides an independent federal survey whose results can be compared with those of the state. Observational surveys can serve as an effective training tool for state surveyors but, in our view, they do not accurately represent typical state surveyor performance due to the likelihood that state surveyors modify their performance when they are aware that they are being observed by federal surveyors. Florida and Ohio noted that in addition to comparative surveys, CMS conducted many observational surveys during the time period studied. Ohio disagreed that our analysis of federal comparative surveys suggests that nursing home surveyors in Ohio missed serious deficiencies, citing its combined performance ratings for observational and comparative surveys. New York commented that federal comparative surveys often do not include the same resident sample used in the state survey and that only looking at comparative surveys provides a narrow analysis of state survey quality. New York suggested a more detailed analysis of comparative survey data and consideration of state performance review results. We note that, in 2002, CMS directed federal surveyors to include at least 50 percent of the residents included in the state survey sample. We also acknowledge that CMS is conducting state performance reviews as part of its oversight of state survey activities, but note that the reviews have shortcomings as described in our July 2003 report. Florida noted that our analysis of federal comparative surveys that identified missed serious deficiencies is based on limited data.
We acknowledge that our analysis is based on a small number of surveys, but note that it includes the full universe of comparative surveys conducted from March 2002 through December 2004 in the five states we reviewed. The range of comments from states reinforces the need for CMS to clarify the definition of actual harm, as it plans to do. California noted that while some of its state surveyors were confused about the definition of actual harm, after discussions with CMS from 1998 through 2004, the survey agency and CMS are now in agreement on the definition of actual harm. New York stated that confusion about the definition of actual harm has been reduced. Ohio noted that its state surveyors are not confused by the definition of actual harm, but that states have not received clear and specific guidance from CMS. Florida agreed that clearer guidance would be useful. CMS indicated that it is taking steps to improve the reliability and accuracy of publicly reported data by identifying suspect data and posting more detailed information about past noncompliance. As we state in our report, we believe that consumers should have timely and accurate data to inform their decisions regarding nursing home care. CMS commented that the workload issues described in this report present challenges beyond those we have previously reported. CMS stated that continued constraint of resources could “likely cause some erosion of the gains already made” in the survey and oversight activities to date. To address the challenges it faces, CMS plans to increase efforts to improve productivity, determine the cost and value of policies, focus state performance standards on substantive issues, prioritize survey activities, coordinate with stakeholders, address increasing fuel costs, and enhance emergency preparedness. 
California, Florida, New York, and Ohio reiterated the staffing challenges they have experienced and the steps they have taken to address them, some of which are described in this report. Despite these efforts, California indicated that its staffing challenges have negatively impacted the investigative process. While we recognize the challenges CMS and states face, we continue to believe that maintaining the momentum developed over the last several years on key CMS initiatives, such as the development of the revised survey methodology (i.e., Quality Indicator Survey), is critical to addressing nursing home survey and oversight weaknesses. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We also will make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7118 or allenk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Table 10 summarizes our recommendations from 14 reports on nursing home quality and safety, issued from July 1998 through November 2004; CMS’s actions to address weaknesses we identified; and the implementation status of CMS’s initiatives. The recommendations are grouped into four categories—surveys, complaints, enforcement, and oversight. If a report contained recommendations related to more than one category, the report appears more than once in the table. For each report, the first two numbers identify the year in which the report was issued. For example, HEHS-98-202 was released in 1998. 
The Related GAO Products section at the end of this report contains the full citation for each report. Of our 36 recommendations, CMS has fully implemented 13, implemented only parts of 3, is taking steps to implement 13, and declined to implement 7. In order to identify trends in the proportion of nursing homes cited with actual harm or immediate jeopardy deficiencies, we analyzed data from CMS’s OSCAR database for four time periods: (1) January 1, 1999, through July 10, 2000; (2) July 11, 2000, through January 31, 2002; (3) February 1, 2002, through July 10, 2003; and (4) July 11, 2003, through January 31, 2005. Because surveys are conducted at least every 15 months (with a required 12-month statewide average), it is possible that a home was surveyed twice in any time period. To avoid double counting of homes, we included only homes’ most recent survey from each time period. In order to determine the predictability of nursing home surveys, we analyzed data from CMS’s OSCAR database for a home’s current survey as of April 9, 2002, and as of July 8, 2005 (see table 12). We considered surveys to be predictable if homes were surveyed within 15 days of the 1-year anniversary of their prior survey.

Appendix IV: Percentage of State Nursing Home Surveyors with 2 Years’ Experience or Less, 2002 and 2005

In addition to the contact named above, Walter Ochinko, Assistant Director; Jack Brennan; Joanne Jee; Elizabeth T. Morrison; and Christal Stone made key contributions to this report.

Related GAO Products

Nursing Home Deaths: Arkansas Coroner Referrals Confirm Weaknesses in State and Federal Oversight of Quality of Care. GAO-05-78. Washington, D.C.: November 12, 2004.

Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington, D.C.: July 16, 2004.

Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight.
GAO-03-561. Washington, D.C.: July 15, 2003.

Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002.

Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002.

Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002.

Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002.

Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000.

Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999.

Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999.

Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999.

Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999.

Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999.

California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.

Since 1998, GAO has issued numerous reports on nursing home quality and safety that identified significant weaknesses in federal and state oversight. Under contract with the Centers for Medicare & Medicaid Services (CMS), states conduct annual nursing home inspections, known as surveys, to assess compliance with federal quality and safety requirements.
States also investigate complaints filed by family members or others in between annual surveys. When state surveys find serious deficiencies, CMS may impose sanctions to encourage compliance with federal requirements. GAO was asked to assess CMS's progress since 1998 in addressing oversight weaknesses. GAO (1) reviewed the trends in nursing home quality from 1999 through January 2005, (2) evaluated the extent to which CMS's initiatives have addressed survey and oversight problems identified by GAO and CMS, and (3) identified key challenges to continued progress in ensuring resident health and safety. GAO reviewed federal data on the results of state nursing home surveys and federal surveys assessing state performance; conducted additional analyses in five states with large numbers of nursing homes; reviewed the status of its prior recommendations; and identified key workforce and workload issues confronting CMS and states. CMS's nursing home survey data show a significant decline in the proportion of nursing homes with serious quality problems since 1999, but this trend masks two important and continuing issues: inconsistency in how states conduct surveys and understatement of serious quality problems. Inconsistency in states' surveys is demonstrated by wide interstate variability in the proportion of homes found to have serious deficiencies--for example, about 6 percent in one state and about 54 percent in another. Continued understatement of serious deficiencies is shown by the increase in discrepancies between federal and state surveys of the same homes from 2002 through 2004, despite an overall decline in such discrepancies from October 1998 through December 2004. In five large states that had a significant decline in serious deficiencies, federal surveyors concluded that from 8 percent to 33 percent of the comparative surveys identified serious deficiencies that state surveyors had missed. 
This finding is consistent with earlier GAO work showing that state surveyors missed serious care problems. These two issues underscore the importance of CMS initiatives to improve the consistency and rigor of nursing home surveys. CMS has addressed many survey and oversight shortcomings, but it is still developing or has not yet implemented several key initiatives, particularly those intended to improve the consistency of the survey process. Key steps CMS has taken include (1) revising the survey methodology, (2) issuing states additional guidance to strengthen complaint investigations, (3) implementing immediate sanctions for homes cited for repeat serious violations, and (4) strengthening oversight by conducting assessments of state survey activities. Some CMS initiatives, however, either have shortcomings impairing their effectiveness or have not effectively targeted problems GAO and CMS identified. For example, CMS has not fully addressed issues with the accuracy and reliability of the data underlying consumer information published on its Web site. The key challenges CMS, states, and nursing homes face in their efforts to further improve nursing home quality and safety include (1) the cost to older homes of being retrofitted with automatic sprinklers to help reduce the loss of life in the event of a fire, (2) continuing problems with hiring and retaining qualified surveyors, and (3) an expanded workload due to increased oversight, identification of additional initiatives that compete for staff and financial resources, and growth in the number of Medicare and Medicaid providers. Despite CMS's increased nursing home oversight, its continued attention and commitment are warranted in order to maintain the momentum of its efforts to date and to better ensure high-quality care and safety for nursing home residents. CMS generally concurred with the report's findings.
CMS noted several areas of progress in nursing home quality and identified remaining challenges to conducting nursing home survey and oversight activities.
SEC and MSRB are the primary entities that have authority at the federal level with respect to disclosure to investors in municipal securities. In the context of municipal securities, SEC interprets and enforces the federal securities laws, including by adopting rules and enforcing antifraud provisions; maintaining regimes for the registration, compliance, inspection, and education of broker-dealers, investment advisers, and municipal advisors; and providing general educational materials for investors about investing in municipal securities. SEC also oversees MSRB and FINRA. MSRB maintains an online repository of information (that is, EMMA) to promote market transparency, for example, by providing access to primary market and continuing disclosures and information about trade pricing. MSRB also provides educational materials to investors and issuers of municipal securities. MSRB regulates brokers, dealers, municipal securities dealers, and municipal advisors in the municipal securities market by adopting rules governing their conduct, which are subject to SEC approval. However, MSRB does not have authority to enforce these rules or examine entities for compliance with these rules. Rather, SEC, FINRA, and bank regulators enforce MSRB rules and conduct compliance examinations. The Tower Amendment prohibits SEC and MSRB from directly requiring state and local governments to submit information to them prior to sale. Specifically, SEC and MSRB cannot require issuers to file any information with them prior to any sale, and MSRB also cannot require issuers to provide them or investors with any information either pre- or postsale. Furthermore, the Securities Act of 1933 exempts the securities that state and local governments issue from registration with SEC. Securities of state and local governments also are exempt from the periodic disclosure requirements of the Exchange Act of 1934. 
According to a 1975 Senate report, congressional reasons for the decision to continue to limit direct regulation of issuers when the 1975 amendments to the securities acts were enacted included respect for the rights of state governments to access the capital markets, concerns about the costs of regulation for state and local government issuers, and the perceived lack of abuses in the municipal market that would justify such an incursion on the states’ prerogatives. While federal regulators are prohibited from directly requiring issuers to file presale information on municipal securities, SEC has adopted rules—applicable to broker-dealers acting as underwriters—that relate to primary market and continuing disclosures. More specifically, using its authority over broker-dealers and its broad authority to prevent fraud in connection with the offer, purchase, or sale of securities, SEC adopted Rule 15c2-12 in 1989. The rule established disclosure requirements related to municipal securities in response to the need SEC found for increased transparency. This rule and accompanying guidance obligates underwriters of municipal securities to obtain and review issuers’ official statements (typically prepared by issuers or their advisors) and provide them to investors. The rule also obligates underwriters to reasonably determine that issuers have entered into a written continuing disclosure agreement for the benefit of municipal securities holders to provide (1) annual financial information and operating data of the type included in the official statement and, when and if available, audited financial statements, and (2) notices of certain material events. Beginning in 2009, issuers have been obligated by the agreement to provide the continuing disclosure information and data to EMMA (either directly or by engaging a third-party dissemination agent to submit such information and data on their behalf).
For a comparison of key federal disclosure requirements for publicly offered municipal and corporate securities, see appendix II. EMMA may be accessed at http://emma.msrb.org. Issuers also may voluntarily submit to EMMA on a continuing basis any other types of financial, operating, or event-based information, including (but not limited to) information about bank loans, quarterly or monthly financial information, consultant reports, and capital or other financing plans. According to MSRB, users may conduct searches of issuances on EMMA using one or more of the following parameters: Committee on Uniform Security Identification Procedures (CUSIP) number, issuer name, issue description, obligated person name, state, maturity date, date of issuance, interest rate, and ratings. EMMA provides additional parameters to search for continuing disclosure documents and other information available through EMMA, including trade information.
Many market participants told us that primary market disclosure is generally useful for investors and that EMMA has improved investor and market participants’ ability to access disclosure documents, but investors and market participants have identified limitations to disclosure. According to MSRB officials, participants in the municipal securities market generally acknowledge that primary market disclosure—official statements—contains the material facts an investor needs to know about an issuer and a security. Investors and market participants such as groups representing investment companies, bond lawyers, and broker-dealers also have indicated that the information provided at the time of issuance is comprehensive. Additionally, most market participant groups with whom we spoke said that MSRB’s EMMA website has greatly improved public access to disclosure compared with the prior system for obtaining disclosure documents. Several market participants said that having a central repository for disclosure documents has made finding information easier and more efficient. In addition, one market participant noted that EMMA increased access to information by allowing investors to receive disclosures for free. However, investors and other market participants cited limitations to the information provided in continuing disclosures. The most frequently cited limitations to the usefulness of this information were the timeliness of annual financial information, the frequency with which issuers and other obligated persons provided information, and the completeness of the information provided in accordance with the continuing disclosure agreement. Individual investors also frequently cited the readability of disclosures as a limitation. Timeliness—According to investors and other market participants, issuers release annual financial information too long after the end of their fiscal years for the information to be useful in making investment decisions.
A Governmental Accounting Standards Board (GASB) study of audited annual financial reports prepared in accordance with generally accepted accounting principles (GAAP) for state and local governments provided for fiscal years ending in 2006, 2007, and 2008 found that the average time frame for issuing the reports varied by type and size of government. For example, issuance time frames averaged from 126 days after the end of the fiscal year for large special districts to 244 days for small counties. Similarly, according to National Association of State Comptrollers data, in fiscal year 2010 states took an average of 198 days to complete their comprehensive annual financial reports. Also, in our case study of disclosures for 14 securities in EMMA that were issued in 2009, our analysis found that the number of days after the end of the fiscal year in which issuers or obligated persons provided annual financial information to EMMA varied. Annual financial information for a nonprofit hospital system was provided 55 days after the end of a fiscal year, while annual financial information for a general obligation security issued by a school district was provided 257 days after the end of a fiscal year. Several investors and market participants said that filings provided well after the end of the fiscal year limit the information’s usefulness. The GASB study found similar sentiments: less than 9 percent of survey respondents—who represented a range of users of financial information—considered information received 6 months after the end of the fiscal year to be very useful. Three market participants further indicated that untimely information was particularly worrisome for investors at a time when state and local governments have been facing credit stress. Moreover, evidence indicates that issuers have not always met the time frames by which they agreed to provide annual information. 
For example, a study by the California Debt and Investment Advisory Commission of certain securities issued in California from 2005 to 2009 found 11 percent filed more than 30 days after the agreed-upon date. Frequency—Investors and other market participants also said that receiving financial information annually was not sufficient to monitor the financial condition of an issuer. An individual investor with whom we spoke and a professional analyst both noted that because disclosures generally are provided annually, investors often must learn important information about the current financial condition of an issuer from external sources such as newspaper articles. For instance, the individual investor told us that he may read in the newspaper that a town that issued securities was having budget problems; however, this information would not be disclosed to investors until the financial statements for the fiscal year were released at some point during the following year. A few market participants indicated that obtaining information more frequently was especially important for investors during times of economic stress. Information that investors and other market participants indicated would be useful to receive between annual financial statements included unaudited quarterly financial reports, cash-flow reports, year- to-date budget updates, and tax revenue information. This type of information is not routinely submitted to EMMA on a voluntary basis. According to MSRB data, in 2011 EMMA received 8,290 submissions categorized as quarterly or monthly financial information (which represents 6 percent of all continuing disclosure documents submitted) and 358 submissions categorized as interim, additional financial information, or operating data. Similarly, quarterly or monthly financial information was available in EMMA for 5 of the 14 securities in our case study. 
Completeness—Investors and other market participants said that issuers do not always provide all of the financial information, event notices, or other information they agreed to provide in a continuing disclosure agreement for the lifetime of a security. For example, three market participants told us that issuers and obligated persons did not always file the annual financial information that they agreed to provide. Our case study of disclosures for 14 securities in EMMA identified similar issues in a few cases. In particular, our analysis found that 2 securities—a 2009 general obligation security for a small issuer and a 2009 conduit offering with a publicly traded corporation as the obligated person—had no financial or other continuing disclosures as of May 2012. In addition, a few market participants indicated that filings of event notices can be delayed significantly or that event notices may never be filed. For instance, one market participant said that he had noted cases in which issuers failed to report unscheduled draws on debt service or adverse tax opinions. Such lapses in reporting may also go undetected. As one market participant and a FINRA official noted, an investor or regulator may not be able to ascertain if a reportable event had occurred unless an event notice was filed with EMMA. Finally, a few market participants told us that issuers may provide all of the information they agreed to provide for the first few years after a security was issued, but afterwards fail to provide some of the information—for instance, tax data or operating information. Readability—Individual investors commonly cited concerns about the readability of disclosure documents. Four of the 12 individual investors with whom we spoke said that disclosure documents were not easy to read or understand. 
Furthermore, most individual investors with whom we spoke said that they generally had limited time to decide whether to buy a security, leaving them little time to research a security. Several individual investors and market participants said that disclosures were difficult to understand because they contained extensive legal or technical terminology and complex information. In addition, two of the individual investors with whom we spoke noted that the information they were looking for was “buried” in the disclosure documents and not easy to find. To a lesser extent, investors and other market participants identified additional limitations—including the lack of standardization and limitations of EMMA—to the usefulness of disclosure. Several market participants said that the lack of standardization of disclosure across different issuers impeded their ability to compare issuances. MSRB also stated in a comment letter to SEC in 2011 that many investors have told MSRB that the lack of standardization in disclosure is a problem. MSRB indicated that elements that market participants would like to see standardized included the use of bond proceeds and some basic information, such as the manner of reporting the name of the issuer or other obligated person and information on the source of repayment. Two market participants also told us standardization of accounting methods and the format of disclosures would make disclosures more useful. Also, investors and other market participants described several aspects of EMMA that limited its usefulness. Several market participants noted that the ability to search for issuances was limited, especially if users did not have the CUSIP number for the issuance in which they were interested. Two market participants noted that continuing disclosures sometimes were categorized incorrectly by the issuer at submission to EMMA, and one said that issuers may not submit disclosures for all of the issuances to which the disclosures applied.
We compared requirements for continuing disclosure in SEC Rule 15c2-12 and SEC’s antifraud authorities with principles for effective disclosure that were developed by an international organization of securities commissions, which included SEC, and certain plain English principles developed by SEC. These principles include allocation of accountability, continuing disclosure obligation, disclosure criteria, dissemination of information, equal treatment of disclosure, timeliness, and use of plain English in official statements (see app. III for the comparison of the principles and municipal securities disclosure requirements). We found that current regulations broadly reflect the seven principles of effective disclosure. However, regulators and market participants have indicated that, in practice, limitations exist in the current regulatory scheme, including in the areas of enforceability, content, and efficiency. In particular, they noted the following: Allocation of accountability—Although security holders may enforce continuing disclosure agreements by bringing suit against the issuer or obligated person, SEC staff told us that they were not aware of any public statements about any such lawsuits having occurred. Also, two market participants noted that market participants other than investors directly holding a security have no remedy should an issuer or obligated person not provide disclosure. SEC and MSRB cannot enforce continuing disclosure agreements. In a comment letter to SEC in 2011, MSRB stated that because Rule 15c2-12 does not impose penalties for noncompliance with continuing disclosure agreements, there is limited accountability for those issuers or obligated persons that do not provide the information. In addition, various regulatory incentives that could encourage issuers to comply with their agreements have limitations, according to regulators and market participants.
For example, regulatory requirements to disclose at issuance any failure to comply with prior continuing disclosure agreements may work as an incentive to encourage issuers to make required disclosures only if they anticipate issuing a new security in the future. Moreover, SEC staff and two market participants indicated that even if issuers anticipated issuing a future security they might not be sufficiently incentivized to keep up with their disclosure obligations between issuances, as some issuers may go to market with a new issuance only once every 3 to 5 years. There are other reasons why issuers may not keep up with continuing disclosure responsibilities between issuances. A few market participants who work with issuers to help prepare disclosures told us that some issuers face challenges in complying with their continuing disclosure agreements because of a lack of awareness or understanding of their disclosure responsibilities, which members of one market participant group said can be due to staff turnover and competing priorities in times of budgetary challenges. Two small issuers with whom we spoke said that they do not have staff dedicated to issuing and monitoring debt, which presents a challenge in preparing disclosures. Continuing disclosure obligation—As a condition of an underwriting, an underwriter must reasonably determine that issuers or obligated persons have agreed to provide certain information on a continuing basis. However, this requirement is placed on the underwriter, not directly on the issuer of the security. Regulators and market participants have noted that this requirement on the underwriter is inefficient for several reasons.
For example, representatives of two market participant groups told us that an underwriter often does not have an opportunity to influence the content of a continuing disclosure agreement before an issuance, although it is the underwriter’s responsibility to ensure the agreement specifies that all required information will be provided. Also, FINRA staff noted that should an issuer or obligated person not provide continuing disclosure information after a security is issued, the underwriter has no means to compel them to do so. Furthermore, representatives of two market participant groups and a former regulator have indicated that a disproportionate amount of the regulatory burden for municipal disclosure falls on underwriters. In addition, Rule 15c2-12 does not expressly require underwriters to document how they comply with requirements to reasonably determine that the issuer or obligated person agreed to provide continuing disclosures and that they are likely to comply with their continuing disclosure agreement. SEC staff said that without this documentation, it may be difficult for an underwriter to demonstrate that it met its obligations. Timeliness—Although Rule 15c2-12 requires, as a condition of an underwriting, that an underwriter must reasonably determine that issuers or obligated persons have specified a date by which they agreed to provide the annual financial report, it does not specify what that date should be. Therefore, issuers and obligated persons may provide annual financial information months after the close of the fiscal year. Use of plain English—Regulations do not require the use of plain English in municipal securities’ official statements or other disclosure documents. SEC recognizes some of these limitations and has taken recent actions to improve the timeliness and completeness of continuing disclosures. 
Specifically, in June 2010, SEC amended Rule 15c2-12 and issued interpretive guidance to (1) specify that event notices be submitted to EMMA in a timely manner not in excess of 10 business days after the occurrence of the underlying event, rather than merely “in a timely manner” as was previously required; (2) remove the general materiality condition for determining whether notice of an event is to be submitted to EMMA—thereby requiring that notification be provided for certain events when they occur regardless of whether they are determined to be material (including principal and interest payment delinquencies, and unscheduled draws on debt service reserves reflecting financial difficulties, among others), while separately adding a materiality condition to select events (including nonpayment-related defaults and bond calls); (3) increase the number of events for which notice must be provided; (4) remove an exemption from reporting disclosure information for certain variable-rate securities; and (5) reaffirm its previous interpretation that underwriters must form a reasonable belief in the accuracy and completeness of representations made by issuers or other obligated persons in disclosures as a basis for recommending the securities, including making a reasonable determination that the issuer will likely provide the continuing disclosure information it agreed to provide. The risk posed to investors by the limitations of disclosure regulations cited by market participants is largely unknown because (1) there is limited information about the extent to which investors use disclosures to make investment decisions, (2) there is limited information about the extent to which the disclosure limitations about which investors were concerned actually have occurred, and (3) the low incidence of defaults and other characteristics of the municipal market mitigate investor risk. Nevertheless, SEC and MSRB have continuing concerns about disclosure in the municipal market.
There is limited information about the extent to which individual investors in municipal securities use disclosures to make investment decisions. Regulators and market participants with whom we spoke did not have overall information on the extent to which individual investors in municipal securities rely on disclosures in making their investment decisions. However, anecdotal evidence suggests that individual investors’ reliance on disclosures could be limited. For example, 5 of the 12 individual investors with whom we spoke said that they relied solely on their broker-dealers’ advice when making an investment decision, while the others said that they conducted their own research into securities. Of the 5 investors who said they relied solely on their broker-dealers’ advice, 2 indicated that they did not rely on disclosures because of the difficulty of understanding them. Similarly, a 2008 SEC study of investor usage of disclosure documents for stocks, bonds, and mutual funds found that only 2 percent of investors surveyed cited SEC-mandated disclosure documents—prospectuses and annual reports—as the most important source of investment information. Rather, surveyed investors most frequently cited financial advisors or brokers as the most important source of investment information. Although the SEC study did not focus on municipal securities, some of the investors surveyed also may have invested in these types of securities. More importantly, the study provides a general indication of investors’ usage of disclosure documents and other sources of information. Many individual investors with whom we spoke also said that a security’s credit rating has been a main factor in making investment decisions. However, several of these investors told us that they have less faith in credit ratings than they did before the financial crisis, potentially making disclosure information a more important factor in their future investment decisions.
There is also limited information on the extent to which events relating to limitations to disclosure cited by investors—such as issuers and other obligated persons failing to submit information or submitting information late—have occurred. MSRB has limited ability to track issuances with missing or late disclosure for several reasons. For example, MSRB reported that in 2011 it received 1,879 required notices of failure to provide annual financial information; however, MSRB staff told us that they could not reliably determine the universe of issuances in EMMA for which annual financial information was required. This is because EMMA did not have the capability to easily or systematically differentiate between securities that should have disclosure submissions and those that are exempt from SEC Rule 15c2-12, according to MSRB staff. In addition, it is difficult for MSRB or others to develop reliable information about issuer compliance with their continuing disclosure agreements for the universe of outstanding issuances because the structure of continuing disclosure obligations can vary by issuance, making compliance with continuing disclosure agreements difficult to systematically identify and track. As a result, there is limited information on the extent of the problems. The low levels of defaults on municipal securities and other characteristics of the municipal securities market also make it difficult to determine the importance of disclosure documents as a means of investor protection. Long-term default rates associated with rated municipal securities have been less than 1 percent, which is significantly lower than long-term default rates for rated corporate debt securities (see table 1). In addition, municipal bankruptcy filings historically have been rare compared with bankruptcy filings by businesses. For 1991 through 2009, 177 municipalities filed for bankruptcy. 
In contrast, more than 49,000 businesses filed for bankruptcy in the 12-month period ending March 31, 2009. In addition, state and local government issuers have a strong incentive to meet their payment obligations because issuances of municipal securities constitute an important tool to finance critical projects, and defaults may hinder their ability to issue future securities and may adversely affect other issuers of municipal securities in the surrounding area. Defaulted municipal securities also have a relatively high recovery rate for investors compared to corporate securities, according to two rating agencies—with one reporting a recovery rate of 67 percent for municipal securities compared with 40 percent for corporate securities. Some states also have mechanisms intended to address financial crises, allowing for state intervention into a local government’s finances. Finally, 30 states have laws that give holders of general-obligation and certain other securities issued by municipalities within their states first rights to repayment from certain revenue streams, even during bankruptcy. Nevertheless, SEC and MSRB have expressed continuing concerns about municipal securities disclosure due to individual investors constituting a significant portion of the market, the size of the market, default risk, and incomplete disclosure. SEC staff told us that disclosure by municipal issuers should be improved in general as it relates to the primary market and continuing disclosure. In rulemakings, SEC staff have noted concern about the size of the municipal securities market and that, while defaults of municipal securities are rare, they do occur. Furthermore, the significant pressure on state and local government budgets and the diminishment of bond insurance since the recent financial crisis have increased focus on disclosure issues, according to SEC and MSRB staff. 
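One rough way to see why low default rates and high recovery rates together mitigate investor risk is to approximate expected loss per dollar of par as the default probability times one minus the recovery rate. The sketch below is purely illustrative: the 67 and 40 percent recovery rates are the figures cited above, but the default probabilities are hypothetical placeholders rather than figures from this report.

```python
# Illustrative expected-loss comparison for municipal versus corporate debt.
# Expected loss per dollar of par is approximated as:
#     P(default) * (1 - recovery rate)
# The recovery rates below (67% municipal, 40% corporate) are the figures
# cited above; the default probabilities are hypothetical placeholders.

def expected_loss(default_prob: float, recovery_rate: float) -> float:
    """Approximate expected loss per dollar of par value."""
    return default_prob * (1.0 - recovery_rate)

# Hypothetical default probabilities for illustration only.
muni = expected_loss(default_prob=0.005, recovery_rate=0.67)
corp = expected_loss(default_prob=0.10, recovery_rate=0.40)

print(f"municipal: {muni:.5f} per dollar of par")
print(f"corporate: {corp:.5f} per dollar of par")
```

Under these assumed inputs, the municipal expected loss is a small fraction of the corporate figure, which is the intuition behind the point that both lower default rates and higher recovery rates reduce investor risk.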
MSRB staff noted in 2010 that although the municipal securities market largely weathered the 2008 financial crisis, economic conditions and financial liabilities continued to stress municipal bond issuers. SEC staff also told us that recent SEC enforcement actions highlight risks posed by pension funding obligations. Furthermore, SEC and MSRB leadership have publicly voiced concerns about various aspects of the municipal securities market. Examples of their concerns include the following: An SEC Commissioner remarked that investors may have trouble understanding the risks associated with increasingly complex structures used by large and small municipalities. A former SEC Chairman stated that the opacity of the municipal market was unrivaled and presented a significant threat to the U.S. economy. An SEC Commissioner was concerned that investors might not have access to the information they needed to accurately calculate their risks when making investment decisions. She stated at an SEC field hearing on the municipal market in 2010 that municipal market investors were afforded second-class treatment compared with that afforded to investors in other securities because they could not count on receiving accurate and timely financial and other material information about their investments. MSRB stated in a comment letter to SEC in 2011 that MSRB received complaints about some issuers’ disregard for their continuing disclosure agreements and failure to provide information on time or at all. Experts and market participant groups we surveyed suggested a number of options for improving municipal securities disclosure. Some of the options would require statutory changes while others could be achieved within existing statutory authority. Each of the suggested options involves trade-offs, and market participants and the regulators’ views on the efficacy of the options varied. 
Our discussion of potential benefits provided to investors and potential costs of implementing these options is limited to the views of survey and interview participants. Experts and market participant groups we surveyed suggested some options to improve disclosure that would require statutory changes. While many suggested repealing the Tower Amendment, regulators said it would have no effect on what they could require issuers to disclose. SEC staff said the Securities Act exempts municipal issuers from SEC registration requirements. MSRB does not otherwise have affirmative authority to regulate issuers. They said additional statutory changes would be needed for regulators to implement other options we identified for improving disclosure that included prescribing accounting standards, requiring time frames for annual reporting or more frequent disclosure, and requiring certain conduit borrowers to comply with corporate disclosure requirements. Seven of 21 experts we surveyed suggested that Congress repeal the Tower Amendment—provisions that prohibit SEC and MSRB from requiring issuers to file any information with them prior to any sale, and MSRB from requiring issuers to provide them or investors with any information pre- or postsale. Some experts believe that repealing these provisions would allow federal regulators to directly require issuers to provide continuing disclosures, and thereby address concerns about incomplete submissions or failures to meet obligations under continuing disclosure agreements, but SEC and MSRB staff did not agree and said additional changes would be needed for them to directly regulate issuers. As noted previously, the Tower Amendment prohibits SEC and MSRB from requiring state and local governments to file presale information with them in connection with the issuance, sale, or distribution of municipal securities. 
MSRB is further limited by a prohibition against requiring any issuer to furnish it or any purchaser or prospective purchaser with any document or report about the issuer, except for documents and information that generally are available from a source other than the issuer. Some industry participants believe the Tower Amendment prohibits any regulation of municipal issuers, while others believe its scope is narrower and addresses only prefiling requirements. SEC staff noted that repealing the Tower Amendment would have no real effect on disclosure because of exemptions under the Securities Act. SEC and MSRB staff agreed that repealing the Tower Amendment would remove a prohibition on requiring issuers to file presale information. However, they said such repeal would have no effect on their ability to establish disclosure requirements for issuers with respect to primary or continuing disclosures. SEC staff told us that the Securities Act provision that broadly exempts municipal securities from SEC’s registration requirements means that the registration requirements applicable to corporate issuers do not apply to municipal securities offerings. In addition, the periodic reporting requirements of the Exchange Act do not apply to issuers of such municipal securities. MSRB does not otherwise have affirmative authority to regulate municipal issuers. As a result, SEC and MSRB staff told us that Congress may need to provide SEC or MSRB with affirmative authority or amend exemptions under federal securities laws to establish disclosure requirements directly on municipal securities issuers. Four market participant groups we surveyed and others (including issuers) with whom we spoke discussed potential challenges that expanding regulators’ authority could pose to issuers. They expressed concern over the costs of federal regulation as well as the potential infringement on state and local government rights.
According to a market participant group with whom we spoke, an increase in the costs of accessing the market could prevent some issuers from raising capital in the public market and lead some issuers to pursue other options for raising capital, such as through private bank loans. In addition, a market participant group representing issuers said the basic tenets of federalism and the importance of federal-state comity behind the Tower Amendment were important considerations in weighing potential options for improving municipal disclosure. While neither SEC nor MSRB had indicated to us they were seeking additional authority to regulate issuers, SEC staff indicated that additional authority would be helpful to improve disclosure by municipal issuers. Staff of each regulator had similar views on how to most appropriately use any additional authority that could be granted to regulate disclosure by municipal issuers. First, staff generally agreed that the securities registration regime for public companies would be inappropriate for the municipal securities market. With approximately 50,000 issuers and 1.3 million separate outstanding securities, SEC staff said the additional resources potentially needed to review and declare effective registration statements would be extensive, and an MSRB official said regulating municipal issuers would be beyond MSRB’s current resource capabilities. Second, SEC and MSRB staff recognize that potential continuing disclosure requirements could have costs for issuers, such as small or infrequent issuers, although limited information exists on the universe of issuers and on which issuers might be affected. Third, SEC and MSRB staff told us broad-based or marketwide standardized disclosure would not be favorable for the municipal market. Rather, SEC staff told us disclosure requirements could be principles-based.
Principles-based disclosure is an approach that would involve establishing key objectives of good reporting and providing guidance and examples to explain each objective. MSRB staff agreed that disclosure requirements should be tailored, noting that the market is highly diverse in terms of the structure of financings and the issuing community. Staff from both regulators said any disclosure requirements for municipal securities issuers would need to reflect the diversity of issuers as well as the federal interest in investor protection. Fourth, SEC and MSRB staff stated that regulation of municipal securities must balance investor protection and intergovernmental comity. For example, SEC staff told us any federal regulation of municipal securities disclosure should be flexible and adaptable, so that regulators could account for issues of comity and other political realities present in the municipal market. In addition to repealing the Tower Amendment, many of the experts and market participants we surveyed identified additional options that would require statutory changes. These include prescribing accounting standards, requiring time frames for annual reporting, requiring more frequent disclosure, and requiring certain conduit borrowers to comply with corporate disclosure provisions. According to SEC and MSRB staff, Congress would need to provide SEC or MSRB with authority to implement any of these options. Five of 21 experts we surveyed and a market participant group with whom we spoke suggested federal regulators should prescribe accounting standards for the financial information issuers disclose in EMMA. These suggestions included that SEC should be provided authority to prescribe accounting standards or that regulators should require issuers to comply with GAAP for state and local governments. Some market participant groups suggested regulators should have authority to simplify GAAP standards to more efficiently meet investor needs and reduce compliance costs for issuers.
According to MSRB staff, Congress could provide MSRB authority to regulate issuers and authorize accounting standards without needing to repeal the Tower Amendment. According to MSRB staff, without statutory changes, MSRB could use existing authority to prohibit broker-dealers from underwriting new securities without an issuer of such securities committing to follow GAAP or other accounting standards. However, an MSRB official also told us that this approach would be less effective than directly regulating issuers, would place an unreasonable burden on broker-dealers, would be difficult to comply with and enforce, and could be viewed as an indirect obligation for issuers. According to an expert, a market participant group, and SEC staff with whom we spoke, standardized accounting requirements could benefit investors by facilitating comparability of financial information across different issuers and securities, and make annual financial information easier to understand, particularly for individual investors. We previously reported that many industry participants think GAAP-basis financial statements provide a fuller, more transparent picture of a government’s financial position than those prepared in accordance with other bases of accounting. Reporting of pension liability is one of the areas that market participants and experts we surveyed said should be improved. Appendix IV provides information on several industry-driven efforts to improve pension liability reporting in municipal securities disclosure documents. (See GAO, Dodd-Frank Wall Street Reform Act: Role of Governmental Accounting Standards Board in the Municipal Securities Markets and Its Past Funding, GAO-11-267R (Washington, D.C.: Jan. 18, 2011).) Some issuers told us that a requirement to follow GAAP would be an unfunded mandate, particularly for small or infrequent borrowers because they would be required to invest in the staff time and expertise to prepare financial statements they would not otherwise prepare.
Some issuers also questioned the potential benefits to investors of mandated GAAP compliance, saying that statements that comply with GAAP provide too much irrelevant information to investors. SEC has statutory authority to establish financial accounting and reporting standards for publicly held companies, but has looked to private-sector standard-setting bodies to develop these accounting principles and standards. For example, SEC had recognized the Accounting Principles Board as the authoritative source for GAAP until 1973. Since the formation of the Financial Accounting Standards Board (FASB) in 1973, SEC has designated FASB as the private-sector standard-setter whose accounting principles are recognized as “generally accepted” for purposes of federal laws for public companies. The Sarbanes-Oxley Act of 2002 established criteria that must be satisfied for the work product of an accounting standard-setting body to be recognized as “generally accepted.” (Sarbanes-Oxley Act of 2002, Pub. L. No. 107-204, § 108, 116 Stat. 745 (2002).) In 2003, SEC reaffirmed FASB as the private-sector standard setter. (Commission Statement of Policy: Reaffirming the Status of the FASB as a Designated Private-Sector Standard Setter, Securities Act Release No. 8221, Exchange Act Release No. 47,743 (Apr. 25, 2003).) Under this framework, SEC could recognize the work product of another standard-setting body, such as GASB, as “generally accepted.” In a 2010 speech at a securities regulation seminar, an SEC Commissioner identified options for improving municipal securities disclosure that SEC would examine in an ongoing review of the municipal securities market, including mandating the use of uniform accounting standards, such as GAAP standards. Four of 21 experts and 4 of 21 market participant groups we surveyed, and 2 market participant groups we interviewed, suggested that federal regulators should require issuers to submit annual financial statements and operating information on a timely basis.
Suggestions included that state and local government issuers should meet a standard of 120 or 180 days, or adhere to the same standard as corporate issuers. Improving timeliness could benefit the market by helping build investor confidence in a particular security or issuer and thereby increase investor demand for municipal securities, according to 3 of the market participant groups and 1 of the experts. In turn, increased demand theoretically could improve pricing and increase liquidity, but to what extent this would be the case is unknown. While SEC requires in Rule 15c2-12, as a condition of an underwriting, that an underwriter must reasonably determine that the issuer or obligated person has agreed in a continuing disclosure agreement to specify the date on which annual financial information will be provided, SEC does not have authority to enforce this aspect of the agreement. Issuers discussed potential challenges of meeting shorter annual reporting time frames. Large issuers (including states, cities, and a county) told us a dependence on other entities—including component units of government—for information could prevent entities that satisfy their annual reporting obligation by submitting audited financial statements from completing those statements in shorter time frames. A state conduit issuer said some issuers might rely on a state to reconcile Medicare payments after the close of the fiscal year before they could report GAAP-compliant financial statements. In addition, some issuers said a limited availability of auditors of governmental entities could impede issuers from complying with a mandated annual reporting time frame. Some states require their local governments to use state auditors, in which case the local government might have little to no control over the timing of the audit. Other states use private-sector auditors, and several issuers told us that there is a shortage of these auditors. For example, an issuer from Wisconsin noted that major accounting firms have reduced staff resources supporting public-sector audits, and smaller auditing firms also have moved away from government audits. MSRB staff discussed with us their perspectives on possibly requiring issuers to provide annual financial information on a timelier basis. They said the diversity of the issuer community and significant impediments to implementing such an option would need to be evaluated before putting in place such a requirement. In response to market participant concerns that information needed to make informed investment decisions is stale in many cases, MSRB recently developed features in EMMA that allow issuers, obligated persons, and parties providing disclosure upon their behalf, to voluntarily specify a time frame of 120 or 150 days for submitting annual financial information. Three market participant groups we interviewed and 1 of 21 market participant groups we surveyed told us federal regulators should require issuers to disclose unaudited financial information on a quarterly basis in EMMA, similar to requirements for corporate issuers. According to 3 market participant groups, more frequent reporting could help provide investors with more timely, relevant information. Four of seven large issuers and three of six small issuers told us they posted interim financial information to their websites, including unaudited quarterly financial statements and budget reports. Three other small issuers that produced interim financial reports told us they did not post such information on their websites, but could provide it to investors or others on request. Issuers that already produce interim financial information could face minimal cost to submit it to EMMA.
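The voluntary time frames described above amount to a simple deadline calculation: the fiscal-year-end date plus the committed number of days. The sketch below is a hypothetical illustration of that arithmetic, not MSRB or EMMA code; the function name, the example dates, and the default 120-day window are assumptions made for this example.

```python
from datetime import date, timedelta

def annual_filing_status(fiscal_year_end: date, filed_on: date,
                         committed_days: int = 120):
    """Return (met_commitment, days_elapsed) for an annual filing measured
    against a voluntarily committed time frame (e.g., 120 or 150 days)."""
    deadline = fiscal_year_end + timedelta(days=committed_days)
    days_elapsed = (filed_on - fiscal_year_end).days
    return filed_on <= deadline, days_elapsed

# Hypothetical issuer with a June 30 fiscal year end and a 120-day commitment.
on_time, days = annual_filing_status(date(2011, 6, 30), date(2011, 10, 15))
print(on_time, days)  # True 107: filed 107 days after year end
```

An analyst or regulator could apply the same check across many filings to estimate how often committed time frames are met, which is the kind of systematic tracking the report notes is currently difficult.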
Additionally, members of a market participant group representing issuers said unaudited interim financial information might be easier for issuers to prepare than annual audited financial reports (for issuers that fulfill their annual reporting obligation as agreed in their continuing disclosure agreement by providing audited annual financial statements) and provide investors with more current and relevant information. One market participant group representing issuers told us smaller issuers could have a greater incentive to disclose interim financial information for the benefit of investors out of competitive pressure as more issuers adopted the practice. However, issuers and a market participant group indicated that a requirement for issuers to provide quarterly information to EMMA could be costly, would involve liability concerns, and could result in a limited presentation of financial information that excludes information on accrued assets and liabilities. Some large, small, and conduit issuers with whom we spoke said preparing interim financial information for EMMA would require additional staff resources and, with governments’ limited resources, likely would result in issuers reallocating staff resources from other areas. For example, several large issuers and conduit issuers told us they and others would need to hire additional accounting staff if required to provide standardized quarterly financial reports. Large and small issuers also cited concerns about their liability under SEC’s antifraud authority for posting unaudited financial information to EMMA, and a market participant group suggested that issuers would be more willing to disclose interim information if they could disclaim liability from the antifraud provisions of the federal securities laws. SEC staff told us that issuers and others cannot disclaim liability or responsibility for their disclosures under the antifraud provisions.
Finally, several large and small issuers with whom we spoke and 1 of 21 market participant groups we surveyed said significant adjustments that some government entities make only at year-end to meet GAAP requirements would make it infeasible to determine an issuer’s financial condition from interim financial reports, as interim information would provide an incomplete picture of an issuer’s financial condition. In a 2010 speech to the Investment Company Institute, the SEC Chairman stated that requiring periodic disclosure of financial information—such as tax revenues, expenditures, tax base changes, or pension obligations—could help improve municipal securities disclosure. Further, MSRB staff told us quarterly disclosure could enable investors to better compare different types of securities, as more information would be available for comparative analyses. In addition, they said more frequent disclosure in theory could increase liquidity and improve pricing, but it would be difficult to determine whether, and to what extent, it would do so. MSRB staff told us variation among the issuer community also constitutes a significant barrier to mandating more frequent disclosures. They said a tailored approach would be more effective than a one-size-fits-all requirement for issuers to provide more frequent disclosure information. Four of 21 experts and 1 of 21 market participant groups we surveyed and several issuers we interviewed said SEC should require corporate borrowers that issue debt in the municipal market to comply with disclosure requirements for corporate issuers because this sector has been responsible for most payment defaults. Although a few sectors of the municipal market with corporate borrower participation provide disclosure beyond that required by Rule 15c2-12, not all do.
According to SEC staff with whom we spoke, requiring corporate disclosure of conduit borrowers would require certain statutory action to repeal Securities Act exemptions for certain types of securities; however, the Tower Amendment could remain in place. While the Tower Amendment restricts SEC from requiring prefiling information from municipal securities issuers, the restriction does not apply to conduit borrowers, as they are not municipal securities issuers. Whether a corporate conduit borrower is subject to registration and reporting requirements for public companies would depend on whether the corporate conduit borrower qualified for a specific exemption under the Securities Act. Two market participant groups and an expert with whom we spoke suggested that applying corporate disclosure requirements to conduit borrowers would provide a risk-based approach to improving disclosure. They said focusing changes of disclosure rules on the highest-risk sectors of the market would improve investor protection in the areas of greatest need. One expert we surveyed said conduit borrowers should be required to provide investors with more information because conduit borrowers benefit financially from reduced interest rates on tax-exempt municipal bonds. The expert said eliminating exemptions for corporate borrowers could provide clarity to investors on what entities issue debt in the municipal market, and could provide investors with access to the same registration and disclosure information that otherwise would be available on the same entities if issuing securities in the corporate market. While eliminating exemptions for conduit borrowers could improve transparency, one small issuer told us there could be some costs to government issuers, as some local governments may be required to assume development costs.
Conduit issuers agreed that eliminating exemptions could increase costs to conduit borrowers and cause some to leave the market—in theory, leading to lost economic development opportunities. SEC staff have recommended that the exemption provided by Section 3(a)(2) of the Securities Act be eliminated for corporate conduit borrowers. In 1994, SEC supported this option, but the current commission has not taken a position on this issue. SEC staff have been examining this issue as part of their ongoing study of the municipal securities market. MSRB staff said market participants reported that municipal securities with conduit borrowers in some sectors have been less compliant with continuing disclosure agreements than other types of municipal securities. They said planned improvements to EMMA could help users identify and track conduit issuances, which could aid conduit borrowers in managing their continuing disclosure obligations or help regulators and investors track securities with conduit borrowers. Experts and market participant groups we surveyed and others with whom we spoke suggested other options for improving disclosure that could be implemented within the existing regulatory framework. These included further improving the functionality of EMMA and strengthening efforts to promote EMMA to issuers and investors. Other options included expanding SEC enforcement activities and improving the readability and usefulness of disclosure information by providing guidance or requiring use of plain English in disclosures. Six market participant groups with whom we spoke and 4 of 21 experts and 3 of 21 market participant groups we surveyed told us EMMA was a significant improvement from the former system for distributing disclosure, and 2 market participant groups said its usefulness had improved since it was first implemented. 
Members of a group representing issuers said the system provided issuers greater certainty about their compliance with continuing disclosure agreements, as EMMA allows issuers to verify what information they submitted to the system and where it was posted online. When using the former system, issuers mailed in paper documents and lacked the ability to see whether information was filed or if it had been categorized correctly. They said EMMA had made it easier to more accurately file disclosure information, as it was easier to associate disclosure information with appropriate identifiers (CUSIP numbers). Nevertheless, 6 market participant groups we interviewed and 3 of 21 market participant groups and 2 of 21 experts we surveyed suggested further improvements to EMMA could benefit disclosure. Suggestions for improving EMMA included making it easier for investors to find specific securities, making it easier for investors to determine whether financial information had been submitted, and ensuring that information was properly coded to appropriate categories and securities. According to one market participant group with whom we spoke, further improving EMMA’s functionality would reduce the time and level of effort required of EMMA users to understand the significance of the information provided. Two market participant groups representing issuers told us that while EMMA has made it easier for them to manage their investor disclosures and determine whether disclosures are publicly available, additional improvements would further increase the functionality and usefulness of the system. MSRB staff agreed that further improving EMMA would encourage greater issuer discipline in complying with continuing disclosure agreements, because functionality improvements to EMMA could provide investors better access to disclosure information and, in turn, increase investor demand for disclosure in EMMA. 
Four of 21 market participant groups we surveyed and 3 others with whom we spoke said regulators could strengthen efforts to educate issuers on their disclosure responsibilities. For instance, three market participant groups we interviewed told us some issuers have not yet submitted information to EMMA because they might not be aware of their disclosure obligations under their continuing disclosure agreements. Regulators discussed with us their efforts to educate issuers about EMMA. MSRB’s primary education focus for the first year after launching EMMA was to inform and train issuers on their new obligations to file disclosure information to EMMA. According to MSRB staff, these efforts included providing industry conference presentations, developing webinars, creating a call center to provide support to issuers submitting information to EMMA, and posting a list of frequently asked questions on the MSRB website. While these efforts have continued, MSRB has shifted its issuer education focus from introducing EMMA to showing issuers how to use EMMA to communicate directly with investors. In November 2011, MSRB launched a toolkit for state and local government issuers on its website, which included information on making continuing disclosure submissions to EMMA and how issuers can better use the information available on EMMA. While MSRB staff view MSRB’s initial issuer education efforts as successful because frequent issuers are aware of EMMA and MSRB received positive feedback from the issuer community, they said additional work was needed to educate infrequent, small issuers. Four of 21 market participant groups and 1 of 21 experts we surveyed also suggested that regulators could strengthen efforts to improve investor awareness of EMMA, as the extent to which individual investors use EMMA is difficult to ascertain.
More specifically, 7 of the 12 individual investors with whom we spoke did not use EMMA to obtain disclosure information because a few said they were not aware of EMMA and several said they relied on advisors for investment advice and information instead of conducting their own research. Regulators said they expected investor awareness of EMMA to improve over time, and described the extent of their efforts to make investors aware of EMMA. In 2009, MSRB initiated an education and outreach effort to raise awareness of EMMA among investors and others who act on their behalf, and to promote use of the site by market participants. MSRB has used websites, social media, search engines, print and broadcast media, and public speaking engagements, among other things, to communicate to investors, issuers, and the broker-dealer community about EMMA. MSRB also requires that trade confirmations or other documentation associated with primary market transactions provide notice that primary offering disclosure information (official statements) is available through EMMA. Further, MSRB developed an online education center and in May 2012 launched an investor toolkit on its website. MSRB staff told us they plan to develop focus groups of investors to explore ways to improve EMMA, which could include how to improve investor education efforts. Additionally, SEC, FINRA, and others have promoted EMMA on various websites relevant to investors interested in purchasing municipal securities. Three of 21 market participant groups and 1 of 21 experts we surveyed, and 3 market participant groups with whom we spoke suggested SEC could expand its enforcement activities using its existing antifraud authorities as leverage to improve issuers’ adherence with continuing disclosure agreements. As discussed previously, SEC does not have the authority to directly require issuers to submit continuing disclosure information to EMMA. 
SEC enforcement actions using its antifraud authority could encourage issuers to comply with their continuing disclosure obligations. For example, one issuer we interviewed said he was careful to comply with the continuing disclosure agreement, as he did not want to risk the city becoming the subject of an SEC enforcement action. An expert and a market participant group we surveyed, and two market participant groups we interviewed discussed the potential benefits of increased enforcement activity. They said a few high-profile enforcement actions could improve disclosure compliance. For example, representatives of a national group that advises issuers on their disclosures said enforcement actions and interpretive releases were their main sources of guidance for preparing or advising issuers about disclosure information. To be held liable under the antifraud provisions, issuers must make a material misstatement or omission in their disclosures or public statements (such as to EMMA or in a speech). SEC has initiated enforcement actions against state and local governments for materially false and misleading disclosures they provided to investors in connection with publicly offered municipal securities. For example, SEC found that the State of New Jersey and the City of San Diego violated antifraud provisions by misstating or omitting material information about the annual funding of their pension obligations, which SEC alleged to be material information on which investors would rely. To strengthen enforcement efforts in the municipal securities market, SEC created a municipal securities and public pensions unit in its Division of Enforcement in January 2010. Initial efforts by the division include identifying market activities that pose the greatest risk to investors and identifying potential violations. 
Six of 21 experts and 8 of 21 market participant groups we surveyed suggested efforts by regulators to standardize disclosure information could benefit investors by improving the content and readability of disclosure. Their suggestions included that SEC establish disclosure guidance on ways to standardize the organization of information or highlight what information could be important according to the type of security or credit sector. Additionally, an investor suggested regulators develop a one-page template issuers could use to provide information most pertinent to investors, in an easily understood format. Three of 21 market participant groups we surveyed and a market participant group and an investor with whom we spoke said additional guidance or templates could improve the readability and comparability of information disclosed in EMMA, improving investors’ understanding of the information. Additionally, an expert said such guidance, outlining broad categories of basic information all issuers should provide, would be particularly helpful to small or infrequent issuers that lack the resources needed to maintain an awareness of industry changes in disclosure standards. However, many large, small, and conduit issuers with whom we spoke identified potential challenges to providing standardized information. They said standardized formats could require different information from what is collected and maintained now, requiring changes that could impose additional costs on issuers through increased staff time, hiring additional expertise, and associated opportunity costs. Also, 1 of 21 market participant groups we surveyed was concerned that direct regulation of disclosure content and format by SEC or MSRB could have an adverse effect on the quality of disclosure information. That is, standardized information might provide investors with information that was too general to be useful. 
SEC staff said they have been exploring different ideas to assist municipal issuers in improving disclosure as part of the staff’s ongoing review of the municipal securities market. For example, SEC staff told us SEC could consider having a role in helping issuers determine what types of information would be useful for investors’ decision making. While MSRB had not specifically discussed developing templates for disclosure in its long-range plan for EMMA, staff told us MSRB could consider possible options to help standardize disclosure using its authority over how information gets submitted to EMMA. MSRB staff told us examples could include creating a template for baseline disclosure such as an online form for submitting information to EMMA, or providing guidance or best practices to show patterns of good disclosure and highlight good disclosure practices. Staff also suggested MSRB could consider developing an online library of links to websites with guidance and best practices developed by industry groups and regulators. Three of 21 market participant groups and 1 of 21 experts we surveyed suggested federal regulators should require issuers to use plain English when preparing information for submission to EMMA. For example, they suggested issuers use plain language to describe financial information or the implications of event notices. To some extent, issuers already have been following these practices. Municipal issuers that satisfy their annual reporting obligation (agreed on in their continuing disclosure agreement) by submitting an annual financial report prepared in accordance with GASB rules provide a management discussion and analysis. Additionally, four of seven large issuers and three of seven conduit issuers with whom we spoke had made efforts to incorporate plain language into their annual financial reports and other financial information they posted to their websites. 
For example, three of seven large issuers said using plain language was a long-time goal. However, large, small, and conduit issuers with whom we spoke said cost factors, including potential liability under SEC antifraud authority, lack of internal expertise, and complex accounting standards, have made it challenging to summarize or interpret the disclosure information they provide in plain language. Because of these factors, one large issuer told us investors should seek assistance from brokers or financial advisors on interpreting financial disclosure information, rather than relying on issuers to provide plain language or summary information. MSRB staff told us it would be difficult to enforce a requirement that information provided to EMMA use plain English, given the number and diversity of municipal issuers. They said it would be easier to mandate use of plain English if SEC had direct authority over municipal issuers. MSRB has taken recent actions to improve the timeliness of disclosure of financial information, the frequency of disclosure, and the completeness of disclosure filings through improvements to EMMA with a focus on the system’s functionality. Examples of recent improvements include the following: Filing date information—MSRB expanded the information underwriters report at the time of an offering to include the date by which issuers agree to provide annual financial information. This information is displayed in EMMA, making lapses in annual disclosure more transparent to users. Voluntary information—MSRB also developed features in EMMA that allow issuers to submit different types of information on a voluntary basis, including monthly budget updates. 
Additional changes, which became effective May 2011, permit issuers, obligated persons, and parties providing disclosure on their behalf to provide to EMMA additional categories of information, including specifying a time frame of 120 or 150 days for submitting annual financial information, indicating use of GAAP as established by GASB or the Financial Accounting Standards Board (FASB), or providing a web address (URL) where additional financial information is available. An issuer’s agreement to participate in any of these voluntary undertakings would be prominently displayed in EMMA. (Issuers can indicate voluntary plans to submit to EMMA annual financial information within 120 calendar days after the end of their fiscal year or, as a transitional alternative through 2013, within 150 calendar days after the end of the applicable fiscal year.) Rating information—MSRB implemented a direct feed to EMMA of ratings information from two of the rating agencies that currently provide ratings on municipal securities. According to MSRB, the rating agencies voluntarily provide ratings information to EMMA, which is updated automatically. Consequently, EMMA users who previously might not have been aware of rating changes affecting their securities could obtain timely and accurate information. MSRB also solicits user feedback through the EMMA website. MSRB’s long-range plan includes improving search capabilities to make it easier for investors to find securities. (See MSRB, Long-Range Plan for Market Transparency Products (Jan. 27, 2012), available at http://www.msrb.org/About-MSRB/Programs/Long-Range-Plan-for-Market-Transparency-Products.aspx.) Planned changes would allow users to find specific securities information using information other than CUSIP numbers, such as keywords, map-based information, or hierarchy-based searches (for example, securities within a given state), and would allow users to conduct advanced searches within disclosure documents.
The plan also includes ongoing work with the issuer community to develop additional tools and utilities to help issuers manage their debt portfolios and to promote more comprehensive and timely disclosure. For example, MSRB plans to develop more flexibility for issuers to manage disclosure submissions and their appearance in EMMA. These new EMMA capabilities could enable issuers to compare the disclosure and performance of their securities with their peers’ securities. MSRB also plans to continue promoting awareness of EMMA and provide additional online education information for investors, including how to work with advisors, access pricing information, and use EMMA. SEC’s ongoing study of the state of the municipal securities market has focused on a range of issues such as primary and secondary market disclosure practices, financial reporting and accounting, investor protection and education, and market structure (including pretrade price transparency). SEC staff told us one purpose of the study is to identify risks in the market and what types of changes, if any, might be needed, including changes in the quality and timeliness of disclosure information provided to the market. SEC staff expect to release their report in 2012 and to include legislative and regulatory options, industry best practices, and recommendations to SEC Commissioners for measures to improve primary and secondary market disclosure practices, other market practices, and associated regulation. In addition, the Dodd-Frank Act required SEC to create an Office of Municipal Securities to administer SEC rules for municipal securities brokers and dealers, advisors, investors, and issuers, and to coordinate with MSRB on rulemaking and enforcement actions. SEC has been in the process of hiring an Office of Municipal Securities director and staff. SEC’s fiscal year 2012 budget provides for five full-time staff; however, as of April 2012, the office had three employees.
We provided a draft of this report to SEC, MSRB, and FINRA for comment. SEC and MSRB provided technical comments, which we incorporated, as appropriate. FINRA did not provide comments on the draft report. We are sending copies of this report to the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Financial Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The objectives of this report were to (1) examine the extent to which information currently provided on municipal securities is useful and the extent to which existing regulation reflects principles for effective disclosure; and (2) identify options for improving the information issuers disclose to investors of municipal securities, and the related benefits and challenges of these options for investors and issuers. To describe the extent to which information is useful for investors, we reviewed documents from the Securities and Exchange Commission (SEC) and the Municipal Securities Rulemaking Board (MSRB), including rulemakings, studies, statistical reports, staff reports, and a plan that described various issues concerning municipal securities disclosure. We reviewed the transcripts of SEC hearings on the state of the municipal securities market held in San Francisco, California; Washington, D.C.; and Birmingham, Alabama, on various dates in 2010 and 2011. We also reviewed 45 comment letters submitted to SEC as of January 25, 2012, regarding its study of the municipal securities market. 
We conducted a case study of disclosure information that we obtained from MSRB’s Electronic Municipal Market Access (EMMA) system and made observations about EMMA’s ease of use, the completeness of the disclosed information, and the ease with which certain information could be found in the disclosures. As EMMA became the central repository for primary market and continuing disclosures in July 2009, we reviewed 14 issuances that were offered between July and December 2009. The issuances reviewed included 2 issued by small governmental entities (a school district and a fire district), 2 issued by medium-sized governmental entities (a utility district and a city), and 2 issued by large governmental entities (a state and a city education board) for general-obligation debt issuances. One issuance was a general-obligation bond issued for a public hospital. The remaining 7 consisted of conduit issuances for a variety of projects, including an airport, solid waste facility, multifamily housing complex, stand-alone hospital, hospital system, continuing care retirement community, and nursing home. We also reviewed independent and academic studies on the usefulness of disclosure information and default studies from the three largest rating agencies as well as data on municipal securities defaults from an independent research firm to understand the risks to investors of municipal securities. We used data from Standard & Poor’s and Moody’s Investors Service to compare default rates for U.S. municipal issuers and global corporate issuers rated by each rating agency. We assessed the reliability of these data and found them to be reliable for this purpose. In addition, we worked through the American Association of Individual Investors to identify and interview 12 retail investors with diverse investment experience with municipal securities.
We also interviewed institutional investors (including representatives for eight investment companies), professional analysts, a rating agency, an independent research firm, and groups representing market participants, including broker-dealers, bond lawyers, and municipal advisors. Finally, we interviewed staff of federal and state regulators, including SEC, MSRB, the Financial Industry Regulatory Authority (FINRA), and the North American Securities Administrators Association. To describe the extent to which the information that issuers must provide reflects principles for effective disclosure, we reviewed federal laws and rules, agency regulations, and interpretive guidance that set forth disclosure requirements related to municipal securities. We reviewed SEC Rule 15c2-12, the primary SEC rule relating to underwriters of municipal securities. We reviewed information on SEC regulations that govern insider trading and establish fair disclosure requirements for corporate securities to determine their applicability to municipal securities. We also reviewed SEC’s antifraud authorities in the Securities Act of 1933 and the Securities Exchange Act of 1934, as well as provisions of these acts that exempt municipal securities from SEC registration and periodic reporting requirements. In addition, we reviewed MSRB’s facility filing on EMMA, which establishes requirements for submitting disclosure information to the system. We compared these requirements with principles for effective disclosure and had two analysts review and come to independent judgments to determine the extent to which disclosure regulations reflected the principles. We used two sources for criteria. First, we used Principles for Ongoing Disclosure and Material Development Reporting by Listed Entities from the International Organization of Securities Commissions.
We believed these principles to be appropriate criteria for use in this context because our data collection indicated that continuing disclosure was a key issue for municipal securities disclosure. Although trading of municipal securities in the secondary market is infrequent, trading volume is substantial, indicating the importance of continuing disclosure. We did not use one of the principles—simultaneous and identical disclosure—in our analysis because the principle referred to making disclosures across borders, which is not important for municipal securities because the market is largely domestic and the securities do not trade on exchanges. Second, we used SEC’s A Plain English Handbook: How to Create Clear SEC Disclosure Documents, which sets forth principles for preparing disclosure documents in easy-to-understand language. We believed these principles to be appropriate criteria for municipal securities disclosure because our data collection indicated that readability was an issue for investors and a national organization representing state and local governments had suggested the principles to its members for producing municipal securities disclosure documents. To identify options for improving the information issuers disclose to investors, we reviewed compliance and enforcement information from SEC and FINRA, including examination manuals for Rule 15c2-12 and MSRB Rule G-32, which set forth broker-dealer requirements for disclosures in connection with primary offerings. We reviewed data on examinations that found violations of the rule, and in certain cases, reviewed examination reports. Furthermore, we surveyed experts and groups representing issuers and other market participants, such as municipal advisors, broker-dealers, and professional analysts. The questions we asked experts focused on the regulation of municipal securities disclosure, whereas the questions we asked market participant groups focused on disclosure practices. 
This is because our initial interviews with market participant groups illuminated conflicts of interest that made it challenging to discuss options for regulating municipal securities disclosure. We recruited experts with career expertise in the municipal securities market and without obvious conflicts of interest, which we defined as the potential to benefit personally or professionally from the outcomes of our study or the presence of a constituent they might feel the need to satisfy. Surveys for both groups asked for options to improve disclosure. We used a nonprobability sampling method to identify and select experts by obtaining referrals from other market participants, experts, and regulators. Although our results are not generalizable, our survey covered a diverse group of experts and market participant groups with broad and differing perspectives. We administered the survey to 26 experts and 29 market participant groups and received responses from 21 experts and 21 groups. We analyzed options according to what was mentioned most frequently and excluded suggestions that were not based on a correct understanding of the existing disclosure regime or were beyond the scope of our review. To control for small variations across the suggestions, three analysts reviewed and came to independent judgments to assign suggestions into various categories. To identify the types of benefits and challenges related to suggested options, we interviewed 20 issuers in three groups representing (1) large and frequent issuers, (2) small and infrequent issuers, and (3) conduit issuers. The group consisting of large and frequent issuers included representatives of three states, three large cities, and a county. The group of small and infrequent issuers included representatives of five cities and a county with populations of 500,000 or fewer.
The group of conduit issuers included representatives of three state housing finance agencies, three state health and educational facilities agencies, and a state bond bank. The issuers were geographically diverse and represented entities from California, Colorado, Florida, Georgia, Kansas, Maine, Maryland, Minnesota, New York, Oregon, Pennsylvania, Rhode Island, South Carolina, Tennessee, Texas, Washington, and Wisconsin. We also drew on information obtained from our survey and other interviews we conducted with investors, regulators, and market participants. We conducted this performance audit from June 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 2 compares key federal disclosure requirements for municipal and corporate securities. The information in the table is organized according to whether requirements apply to primary or secondary market disclosure, with requirements that apply to both presented first. We compared requirements for continuing disclosure in the Securities and Exchange Commission's (SEC) Rule 15c2-12 and SEC's antifraud authorities with principles for effective disclosure that were developed by an international organization of securities commissions, which included SEC, and certain plain English principles developed by SEC. We found that the current municipal securities disclosure requirements broadly reflect the seven principles for effective disclosure.
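The categorization step in the methodology above, in which three analysts independently assigned survey suggestions to categories, can be reconciled mechanically once the independent judgments are recorded. The sketch below is a generic illustration of one way such multi-analyst categorization can be reconciled, not GAO's actual procedure; the category names and suggestions are hypothetical.

```python
# Hypothetical sketch of reconciling three analysts' independent category
# assignments: keep the majority label, flag any disagreement for review.
# (Illustrative only -- not GAO's actual procedure.)
from collections import Counter

def reconcile(labels_by_suggestion):
    agreed, flagged = {}, []
    for suggestion, labels in labels_by_suggestion.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes >= 2:                # at least two of three analysts agree
            agreed[suggestion] = label
        if votes < len(labels):       # any split judgment gets revisited
            flagged.append(suggestion)
    return agreed, flagged

# Hypothetical data: each suggestion mapped to three analysts' labels.
agreed, flagged = reconcile({
    "repeal the Tower Amendment": ["statutory", "statutory", "statutory"],
    "enhance the EMMA system": ["regulatory", "regulatory", "out of scope"],
})
```

Flagging disagreements rather than silently taking the majority preserves the analysts' opportunity to discuss and resolve split judgments.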
Investors, a market participant group, and an expert told us that the reporting of pension liability in municipal securities disclosure documents could be improved with changes to accounting standards and financial reporting requirements. For example, several investors told us that Governmental Accounting Standards Board (GASB) standards, to the extent issuers adopted them, did not provide for reporting enough information, such as financial projections, that could help users determine whether an entity will have sufficient resources to cover future financial obligations. Consequently, there are concerns that future pension costs could crowd out an entity's ability to meet scheduled principal and interest payments on its municipal securities. GASB and the National Association of Bond Lawyers have undertaken various efforts that could increase the amount of information reported on pension and other long-term liabilities, but the viability of certain proposals already has been questioned, and it is too early to determine how issuers will react to recent guidance produced by an industry coalition. In November 2011, GASB issued suggestions for broadening the information governmental entities report in annual financial reports to include projections of cash flow and long-term financial obligations (that is, bonds, pensions, other postemployment benefits, and long-term contracts) for a minimum of 5 years beyond the reporting period. GASB issued these suggestions for public comment and anticipated that respondents could be sharply divided on the issue. Two of the seven board members did not agree with the suggestions. They said forward-looking financial information would be subjective, and they questioned the potential costs and benefits to governmental entities of preparing the information.
They had concerns that the proposed suggestions could affect the timeliness of audited financial statements and some entities' willingness to report statements compliant with generally accepted accounting principles, or GAAP (for those entities that can choose whether to comply with GAAP). National Association of Bond Lawyers, Considerations in Preparing Disclosure in Official Statements Regarding an Issuer's Pension Funding Obligations (May 15, 2012), available at http://www.nabl.org/uploads/cms/documents/pension_funding_obligations_document_5-18-12_b.pdf, as of May 18, 2012. The revised standards, approved as Statements 67 and 68, replaced previous pension plan reporting standards (Statements 25, 27, and 50). Specifically, Statement 67 revised guidance for the financial reports of most pension plans, effective for periods beginning after June 15, 2013, and Statement 68 established new financial reporting requirements for most governments that provide their employees with pension benefits, effective for fiscal years beginning after June 15, 2014. In addition to the contact named above, Karen Tremba, Assistant Director; Heather Chartier; William R. Chatlos; Rachel DeMarcus; Melissa Kornblau; Courtney LaFountain; G. Michael Mikota; Patricia Moye; Alise Nacson; Barbara Roesmann; and Kathryn Supinski made key contributions to this report.

Municipal securities are debt instruments that state and local governments issue to finance diverse public projects. As of March 31, 2012, individual investors held up to 75 percent of the total value of municipal securities outstanding. These securities are exempt from certain federal disclosure requirements applicable to other securities sold publicly. Disclosure provided in the primary market, where these securities are issued, generally consists of official statements.
Continuing disclosure is information provided in the secondary market, where these securities are bought and sold after issuance. The Dodd-Frank Wall Street Reform and Consumer Protection Act required GAO to review the information issuers of municipal securities must disclose for the benefit of investors. This report addresses (1) the extent to which information currently provided on municipal securities is useful for investors and the extent to which existing regulations reflect principles for effective disclosure, and (2) options for improving the information issuers disclose to investors of municipal securities. To conduct this work, GAO reviewed disclosure rules and compared them with principles for effective disclosure cited by SEC and the International Organization of Securities Commissions, surveyed selected experts and market participants, and interviewed issuers. GAO provided a draft of this report to SEC, MSRB, and the Financial Industry Regulatory Authority (FINRA). SEC and MSRB provided technical comments, which GAO incorporated, as appropriate. FINRA did not provide comments. Market participants indicated that primary market disclosure for municipal securities (official statements) generally provides useful information, but investors and market participants cited a number of limitations to continuing disclosures. The most frequently cited limitations were timeliness, frequency, and completeness. For example, investors and other market participants said that issuers do not always provide all the financial information, event notices, or other information they pledged to provide for the lifetime of a security. While GAO's analysis of current regulatory requirements for municipal securities disclosure found that they largely reflected the seven principles of effective disclosure, regulators and market participants said that there are some limitations on the enforceability and efficiency of the regulations.
However, the effect of these limitations on individual investors largely is unknown because limited information exists about the extent to which individual investors use disclosures to make investment decisions. Nevertheless, regulators remain concerned about this market, in part due to its size and the participation of individual investors. As discussed below, the Securities and Exchange Commission (SEC) and Municipal Securities Rulemaking Board (MSRB) have been taking or plan to take actions to improve disclosure. Experts and market participant groups GAO surveyed suggested options for improving disclosure, some of which would require statutory changes while others could be achieved within the existing regulatory framework. One suggested statutory change was the repeal of the Tower Amendment, which some experts believed would allow federal regulators to directly require issuers to make disclosures, but SEC and MSRB staff did not agree. The Tower Amendment prohibits SEC and MSRB from requiring issuers of municipal securities to file certain materials with them. While MSRB and SEC staff said that repealing the Tower Amendment would remove the prohibitions on requiring issuers to file certain materials with them, they noted that it would have no real effect on what they can require issuers to disclose because municipal issuers are exempt from SEC registration and MSRB does not otherwise have affirmative authority to regulate municipal issuers. Other suggestions from experts and market participant groups requiring statutory changes included mandating accounting standards and requiring the submission of financial information at intervals more frequent than annually. 
Experts and market participant groups suggested other options to improve disclosure that could be achieved within the existing regulatory framework, including further improving and promoting MSRB's Electronic Municipal Market Access (EMMA) system, which since July 2009 has served as the official central repository for disclosures about municipal securities. While experts and market participants said that EMMA had greatly improved their access to information on municipal securities, many suggested that further enhancements to EMMA would increase the usefulness of the system to investors and issuers. MSRB issued a plan in January 2012 to improve EMMA and recently has taken steps to enhance EMMA's functionality. Further, SEC staff indicated their plan to release a staff report in 2012 to include recommendations on measures to improve primary and secondary market disclosure practices, market practices, and associated regulation.
To develop information on the basic differences between the master file and the NMF, we reviewed relevant IRS documentation, including information on the number and types of accounts processed on the NMF; interviewed IRS officials at the National Office, at the Executive Office for Service Center Operations in Cincinnati, OH, and at 3 of IRS’ 10 service centers—Atlanta, GA; Cincinnati; and Fresno, CA; and obtained information from NMF managers at all 10 service centers through a questionnaire. We also observed NMF transactions being processed at the Atlanta and Cincinnati Service Centers. We visited Atlanta and Fresno because they were the two centers with the most NMF accounts at the time we did our work; the Assistant Director of the Fresno Service Center also headed a task force that reviewed the NMF. We visited Cincinnati because of its proximity to the Executive Office for Service Center Operations. To identify problems that IRS and taxpayers have experienced with the NMF, we obtained data through the questionnaire and interviews described in the preceding paragraph as well as interviews at two of IRS’ four regional offices—Western and Midstates—and at two IRS district offices—the Georgia District, headquartered in Atlanta, and the Ohio District, headquartered in Cincinnati. We visited the Western Region to interview the problem resolution analyst who had reviewed the NMF case that was discussed at the Senate Finance Committee hearings. We visited the Midstates Region to meet with Internal Audit staff who were doing related work. We visited the district offices in Atlanta and Cincinnati because of their proximity to other IRS locations at which we were doing work. We also reviewed several IRS reports on the NMF, including a December 1997 report on the results of an internal IRS review of the NMF case that was discussed at the September hearings and a February 1998 report on the NMF by a task group from the Fresno and Ogden Service Centers. 
We discussed the NMF with problem resolution officers, who are responsible for resolving taxpayer complaints, at the Atlanta, Fresno, and Cincinnati Service Centers. Although we identified several limitations of the NMF, we were unable to determine the extent to which taxpayer problems could be traced back to those limitations because IRS had no data that would allow such an analysis. To determine what IRS has done and plans to do to address the problems caused by the NMF, we identified corrective actions that were recommended as a result of the various IRS reviews and discussed the status of those actions with responsible IRS officials. IRS identified several actions that it said it implemented in early 1999, as we were completing our audit work. We did not verify that those actions were taken or assess their effectiveness in correcting past problems. We did our work from January 1998 to February 1999 in accordance with generally accepted government auditing standards. Before automated data processing, IRS maintained all tax accounts on ledger cards. In 1962, this system was replaced with the master file. Although the master file was an improvement over the manual system, it could not process certain accounts because of system limitations. These accounts, referred to as NMF accounts, were kept on ledger cards until 1991, when the NMF was automated. The automated NMF consists of 10 stand-alone databases—one in each of IRS’ 10 service centers. Each service center has an NMF unit with 5 to 21 staff who enter account information into that center’s database and otherwise manage the system. As of September 8, 1998, according to IRS, there were a total of about 122,000 accounts on the NMF. (See app. I for the number of staff in each service center’s NMF unit and the number of accounts in each center’s NMF database.) There are two general reasons why IRS puts accounts on the NMF. 
Some accounts have features that do not fit with the master file’s configuration; other accounts have to be processed more quickly than the master file processing procedures allow. Because it is smaller and decentralized, the NMF can handle accounts that the master file cannot and can process transactions faster than the master file. According to IRS, of the 122,000 accounts on the NMF as of September 8, 1998, about 82 percent involved either split assessments or employee plans. Split assessments are accounts that were originally on the master file for a joint entity (e.g., a husband and wife who filed a joint income tax return) but later had to be split into separate accounts. For example, application of the innocent spouse provisions of the tax law can relieve one spouse of all or some of the total tax liability assessed against a married couple who filed a joint return. That would require IRS to separately assess each of the spouses for the amounts that they legally owe. These accounts have to be set up on the NMF because the master file is not configured in a way that allows accounts to be linked to one another and does not allow IRS to separately assess and bill the filers of a joint return. According to IRS, of the 122,000 accounts on the NMF as of September 8, 1998, about 71,000 (58 percent) involved split assessments. Employee plan accounts are on the NMF, according to IRS, so that IRS can assess excise taxes related to the plans. IRS officials told us that there is no place on the master file to enter the employee plan number, which is needed to assess the excise taxes. Thus, the accounts are put on the NMF, which is configured to accept employee plan numbers. IRS data indicate that about 29,000 (24 percent) of the 122,000 NMF accounts as of September 8, 1998, involved employee plans. According to IRS, most of the remaining NMF accounts as of September 8, 1998, fell into one of the following five categories. 
IRS had no data on the number of accounts in each of these categories.

New legislation: The NMF permits rapid implementation of new tax laws that may require extensive, and thus time-consuming, modifications to the master file. Because it is much smaller than the master file (as discussed later), the NMF can be quickly changed to handle these new laws.

Overflow accounts: These are accounts that have more transactions than the master file is configured to handle. When the physical size of an account exceeds the master file configuration, IRS is to transfer the account to the NMF, which has no constraint on an account's size.

Large dollar amounts: These are accounts with dollar balances that exceed the space allotted in the master file for an account's dollar amount. In that regard, the master file is not configured to handle accounts with balances of $100 million or more. The NMF has no such limitation.

Immediate assessments: These are accounts that must be assessed more quickly than the master file process allows. Generally, these assessments involve situations where IRS has determined that the assessment or collection of a deficiency will be jeopardized by delay. In such cases, IRS is authorized to immediately assess such deficiency. These accounts are processed on the NMF because, according to IRS, the NMF can process assessments in 24 to 36 hours, while the master file takes several weeks.

Reversal of erroneous abatements: These are accounts in which an assessment is needed to correct some clerical action that had erroneously reduced (abated) a taxpayer's tax liability, and the statute of limitations for assessments had expired. As configured, the master file prevents the reversal of abatements after expiration of the statute of limitations.
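The split-assessment case described earlier can be pictured with a hypothetical record layout: one joint liability becomes two separately assessed, cross-referenced accounts, which the master file's configuration could not represent. The field names, identifiers, and amounts below are illustrative assumptions, not IRS's actual data structures.

```python
# Hypothetical record layout, for illustration only: splitting one joint
# assessment into two separately billable, cross-referenced accounts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NMFAccount:
    tin: str                    # taxpayer identification number
    assessed: float             # amount separately assessed to this taxpayer
    linked: List[str] = field(default_factory=list)  # cross-referenced TINs

def split_assessment(tin_a, tin_b, joint_total, share_a):
    """E.g., innocent-spouse relief can leave one spouse liable for only
    part of the joint assessment, requiring separate linked accounts."""
    a = NMFAccount(tin_a, share_a, [tin_b])
    b = NMFAccount(tin_b, joint_total - share_a, [tin_a])
    return a, b

a, b = split_assessment("123-45-6789N", "987-65-4321N", 10_000.0, 3_000.0)
# The two separate assessments still sum to the original joint liability.
```

The cross-references are the piece the master file lacked: without a way to link accounts, IRS could not separately assess and bill the filers of a joint return.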
As indicated by the information in the previous section, most of the accounts on the NMF are there because of limitations in the master file’s configuration (e.g., the master file’s inability to handle split assessments, large accounts, and employee plan numbers). Other accounts (e.g., those involving new legislation and immediate assessments) are on the NMF because they have to be processed more quickly than is possible on the master file. There are two basic reasons why accounts can be processed more quickly on the NMF than on the master file. First, the NMF is much smaller than the master file and, thus, easier to work with. The NMF had 122,000 accounts as of September 8, 1998, and those accounts were spread among stand-alone systems in each of IRS’ 10 service centers. By comparison, the master file is one large system, housed in Martinsburg, WV, that has an account for every taxpayer that files a return—about 200 million in 1998. Second, the process IRS follows to enter account data into the master file and make the updated information available for researching taxpayer accounts is much more time-consuming than the NMF process. For master file purposes, account data flow from the service centers, where the data are initially received and validated, to IRS’ computing center in Martinsburg, where the data are posted to the master file. Data coming into Martinsburg from the individual service centers are not posted to the master file upon receipt. Instead, data are accumulated during the week for posting on weekends. Martinsburg sends output from the posting process back to the service centers for their use in updating the Integrated Data Retrieval System (IDRS). IDRS is the primary system that IRS employees use to research and update accounts. For example, IRS’ customer service representatives use IDRS to access accounts in responding to taxpayer inquiries. 
According to IRS, the process from the time data are sent to Martinsburg until updated account information is available on IDRS can take from 4 to 6 weeks. The NMF process is more streamlined and thus quicker. After receipt and validation by the service center, NMF account data are sent to the NMF unit in that same service center for immediate input to the NMF. According to IRS, that process generally takes about 1 day. There is no movement of data between the service center and Martinsburg, and NMF data are generally not input to IDRS (the one exception to this rule, delinquent accounts, will be discussed later). Although the NMF enables IRS to process accounts that cannot be processed on the master file, the NMF also had limitations, at the time of our review, that caused problems for IRS employees and taxpayers. While the NMF’s decentralization allowed employees to quickly enter account data and make more timely assessments than would be possible with the master file, it also limited the ability of employees to research NMF accounts. The ability to research NMF accounts was further limited by the absence of any meaningful link between the NMF and IDRS. The decentralized NMF system also involved many manual procedures and computations that increased the risk of error and delayed some processing. These problems could adversely affect the ability of IRS staff to do their jobs, including their ability to provide accurate service to taxpayers with accounts on the NMF. IRS staff need the ability to research account data. Customer service representatives, for example, need that capability so they can respond to taxpayer inquiries about their accounts and any related correspondence they may have received from IRS. Revenue agents and revenue officers need research capability in conjunction with their audit and collection case work. 
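The two posting paths described above can be contrasted with a rough timing sketch. The weekend-batch modeling below is a simplification built on the report's figures (weekly posting at Martinsburg and roughly 4 to 6 weeks until an update reaches IDRS, versus about 1 day for direct NMF input); it is not a model of IRS's actual schedule.

```python
# Rough timing sketch of the two posting paths; a simplification built on
# the figures in the report, not a model of IRS's actual schedule.
import datetime as dt

def master_file_available(sent: dt.date, weeks: int = 6) -> dt.date:
    """Data accumulate during the week, post on the weekend, and can take
    several more weeks before the update is visible on IDRS."""
    days_to_saturday = (5 - sent.weekday()) % 7   # next weekend posting run
    posted = sent + dt.timedelta(days=days_to_saturday)
    return posted + dt.timedelta(weeks=weeks)

def nmf_available(sent: dt.date) -> dt.date:
    """NMF data are keyed directly into the local database, about 1 day."""
    return sent + dt.timedelta(days=1)

sent = dt.date(1998, 9, 8)      # a Tuesday
# nmf_available(sent) is the next day; master_file_available(sent) falls
# roughly six weeks after the following weekend's posting run.
```

The sketch makes the operational trade-off concrete: the centralized batch path delays visibility by weeks, while the decentralized path makes data available almost immediately.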
At the time of our review, it was difficult for IRS staff to research NMF accounts because, as described by an IRS task force, IRS staff often had “difficulty identifying that the account is, in fact, in the NMF and then in determining which of the ten service centers has control of the account.” A significant barrier to any research effort involving NMF accounts was the absence of a central repository of all such accounts. Each of the 10 service centers maintains its own NMF system on a stand-alone database. Even though these systems are basically the same, they are not linked in a way that facilitates easy access. A unique password is needed to process and research accounts on each of the 10 NMF databases. IRS staff in one service center cannot access other centers’ NMF accounts without going through the time-consuming process of obtaining a password from the system administrator at each center. NMF account research was also hampered by the fact that only delinquent NMF accounts were on IDRS. As noted earlier, IDRS (1) facilitates research and, ultimately, the resolution of account questions by giving IRS staff instantaneous access to accounts that are on IDRS and (2) allows IRS staff to adjust accounts on-line. These advantages were not available to IRS staff dealing with the NMF, because NMF accounts were only put on IDRS after they were classified as delinquent and because, even with delinquent NMF accounts, IRS staff were limited in how they could use IDRS. According to IRS officials, delinquent NMF accounts, unlike delinquent master file accounts, were put on IDRS for reference purposes only. Any transactions involving delinquent NMF accounts still had to take place on the NMF. The lack of a central repository of NMF accounts created a situation in which customer service representatives and other IRS staff were either not aware of an NMF account or had to contact the NMF units in as many as 10 service centers to see if such an account existed. 
As users of the system, NMF staff told us of their frustration in not having universal access to all NMF accounts. They contended that the absence of universal access made researching accounts difficult and that, as a result, some staff were likely to forgo this process and not learn of the existence of an NMF account. In its report on the NMF, an IRS task force said that this research limitation resulted in numerous instances in which NMF accounts were not identified and had even resulted in erroneous refunds to taxpayers. Even if IRS staff had universal access to all NMF accounts, there still would have been no assurance that they could more effectively respond to inquiries from NMF taxpayers. That is because IRS staff, at the time of our review, often did not realize that the account in question was an NMF account. When a taxpayer with an NMF account receives a notice from IRS, the taxpayer’s Social Security number on that notice is to end with an “N.” However, according to IRS, taxpayers are typically unaware that their accounts are on the NMF and that the “N” after their Social Security number indicates an NMF account. As a result, when they call IRS about their account and are asked for their Social Security number, they typically do not include the “N.” Thus, at the time of our review, customer service representatives who infrequently came into contact with NMF accounts may not have known to search the NMF databases and, as a result, may have given the taxpayer incorrect information. The problems IRS staff encountered in trying to identify and access NMF accounts could have resulted in problems for taxpayers. The NMF-related case that was discussed in the 1997 Senate Finance Committee hearings involved a taxpayer who encountered several problems over several years in trying to get accurate information from IRS about the status of her account. 
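The notice convention described above suggests a simple routing rule a lookup tool could apply: if the caller supplies the trailing "N," search the NMF; if not, the account could still be on the NMF, so both systems should be checked rather than the master file alone. This is an illustrative sketch, not an actual IRS system; the function and system names are hypothetical.

```python
# Illustrative routing rule based on the "N" suffix convention described
# in the report; the function and system names are hypothetical.
def systems_to_search(taxpayer_id: str) -> list:
    """Decide which systems a customer service lookup should query."""
    if taxpayer_id.strip().upper().endswith("N"):
        return ["NMF"]                      # caller supplied the suffix
    # Callers typically omit the "N", so an account with no suffix could
    # still be on the NMF: search both rather than the master file alone.
    return ["master file", "NMF"]

print(systems_to_search("123-45-6789"))     # both systems must be checked
```

The point of the sketch is the second branch: because taxpayers typically do not know to include the "N," the absence of a suffix cannot be treated as evidence that no NMF account exists.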
Among other things, according to an IRS review of the case, a customer service representative had overlooked the “N” after the taxpayer’s Social Security number and did not search the NMF for the taxpayer’s account. Even if the account had been identified as an NMF account, the customer service representative would not have had access unless the account happened to be in that particular service center’s NMF. We were unable to determine the extent to which other taxpayers encountered problems in trying to get information about NMF accounts because IRS had no data that would allow such an analysis. According to IRS officials, they have recently taken some corrective actions to resolve the problems with identifying and accessing NMF accounts. These actions are discussed later in this report. As discussed earlier, one of the benefits of the NMF is the ability to do things more quickly on it than on the master file. That benefit derives from the fact that each service center can directly enter data into its own NMF database instead of having to spend time sending data to a centralized location and waiting for the data to be processed and available for use. Although a service center’s ability to enter data directly into its own database has advantages, having 10 such databases can cause significant problems when, for example, taxpayers move from one service center’s jurisdiction to another. In these circumstances, IRS would have to manually transfer the account from one center to the other. To accomplish this, staff at the former center would manually prepare an account transfer-in form, attach an account transcript, and mail both to the latter center, whose staff would have to manually key the account data into that center’s NMF. According to IRS, this manual transfer process could take from 4 to 6 weeks. 
In discussing this process, an IRS task force noted that “many cases are transferred from one service center to another, resulting in temporary loss of visibility, delayed actions, and lost paperwork.” Not only are the NMF databases not linked, but they also do not interface with the master file. As with transfers between service centers, account transfers between the master file and the NMF require manual intervention, which can take several weeks. The need for manual intervention to transfer accounts from the master file to the NMF is problematic because (1) most of the accounts on the NMF (such as those involving split assessments) were originally posted to the master file and (2) there may be many more such occurrences as a result of the innocent spouse provisions in the IRS Restructuring and Reform Act of 1998. In commenting on a draft of this report, IRS noted that a programming change has been scheduled that should reduce the time it takes to transfer accounts from the master file to the NMF. According to IRS, that change is scheduled for implementation in January 2000. The time-consuming manual transfer processes increase the risk that information will not be readily available to respond to taxpayer inquiries or that taxpayers will be given incorrect information in response to their inquiries. For example, account activity may be taking place while the account is being moved from the master file to the NMF or from one service center’s NMF to another. According to IRS, while the account is being manually transferred and not visible, IRS staff could mistakenly make refunds to the taxpayer when, in fact, an outstanding balance remains on the NMF account. Also, if payments are received or assessments are made in the former center after the account has been transferred, the related documents are to be mailed to the latter center for entering into the system, thereby resulting in additional delays. 
Another significant part of the NMF process that staff have had to handle manually involves the computation of penalties and interest. Before automation of the NMF in 1991, penalties and interest for all NMF accounts were computed manually. Shortly after automation of the NMF, IRS discovered that the system was incorrectly calculating penalties and interest in some cases. NMF staff then had to manually compute penalties and interest and enter the results into the system. According to NMF staff responsible for manually computing penalties and interest, the process is laborious, and it takes a long time to develop the full range of technical skills needed to make the computations. Staff at one of the service centers told us that the computation of penalties and interest was the biggest problem they had with the NMF.

After the September 1997 Senate Finance Committee hearings, IRS undertook the following reviews of the NMF:

In November 1997, a problem resolution analyst in IRS' Western Region was tasked with identifying and reviewing all problems and mistakes that occurred in IRS' handling of the NMF case that was discussed at the hearings. She reported on the results of her review in a December 1997 report that was submitted to IRS' Deputy Commissioner.

In December 1997, a group from the Fresno and Ogden Service Centers was tasked with studying the NMF process and recommending corrective actions that could be implemented in the short term. That group issued its report in February 1998.

In December 1997, another group, chaired by IRS' National Director of Submission Processing, was formed to address longer term solutions.

In November 1997, IRS' Office of Internal Audit began a review directed at determining whether NMF transactions were recorded accurately and timely. As of February 28, 1999, Internal Audit was finalizing a report on its results.
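To give a sense of why the penalty and interest computations described above were laborious, the sketch below pairs a simple monthly late-payment penalty with daily-compounded interest. The rates, cap, and day counts are illustrative assumptions only, not the actual IRS rules, which involve many more interacting cases; that complexity is precisely what made the manual work difficult.

```python
# Heavily simplified illustration of a penalty-and-interest computation.
# The 0.5%/month penalty, 25% cap, 8% annual rate, and 30-day months are
# illustrative assumptions, not the actual IRS rules.
def penalty_and_interest(balance: float, months_late: int,
                         annual_rate: float = 0.08):
    penalty = balance * min(0.005 * months_late, 0.25)   # capped monthly penalty
    days = months_late * 30                              # crude day count
    interest = balance * ((1 + annual_rate / 365) ** days - 1)  # daily compounding
    return round(penalty, 2), round(interest, 2)

penalty, interest = penalty_and_interest(10_000.0, months_late=12)
```

Even this toy version mixes a capped monthly accrual with daily compounding; the real computation adds abatements, rate changes, and account-specific rules, each of which staff had to track by hand.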
Those reviews identified the many systemic and procedural problems previously discussed and generated many recommendations for corrective action. (See app. II for a list of the recommended corrective actions and information on their status as of Feb. 17, 1999.) Included among those recommended actions were some that, if they were to be implemented immediately, would require a significant amount of computer reprogramming at a time when IRS has higher priority programming work associated with making its systems Year 2000 compliant. As a result, IRS deferred until at least 2001 the implementation of two recommendations that called for (1) moving many NMF accounts to the master file and (2) consolidating NMF accounts in one service center. IRS also adopted an alternative to two other recommendations that called for (1) loading all NMF accounts on IDRS and (2) changing the command code used by IRS staff to search the IDRS database. There were several other recommendations, such as adding a unique toll-free telephone number on NMF notices and enhancing the technical expertise available to IRS staff working on NMF accounts, that required little or no reprogramming. According to IRS, those recommendations were generally implemented in January and February 1999. Although IRS' corrective actions address the major problems identified with the NMF, they do not include any steps directed at (1) monitoring the NMF to identify any problems that arise in the future and (2) ensuring that timely action is taken to address any such problems. Probably the most significant corrective action proposed by IRS would move accounts involving split assessments and employee plans from the NMF to the master file. As noted earlier, about 82 percent of all the accounts on the NMF in September 1998 fell into one of those two categories. Implementation of this action, which would reduce the number of NMF accounts considerably, would require extensive reprogramming of the master file.
According to a cost estimate prepared by IRS’ Office of Information Services, which is responsible for any computer programming needed to implement the various recommendations, reprogramming the master file to accept split assessment accounts would require about 680 days of staff time at a cost of about $185,000. (An estimate for reprogramming the master file to accept employee plan accounts was not available at the time we did our work.) This effort was put on hold because, according to IRS officials, the Commissioner of Internal Revenue requested that any efforts requiring extensive reprogramming be reconsidered given the need to give priority to Year 2000 compliance efforts. IRS’ current schedule calls for implementing this recommendation in 2001. Moving accounts involving split assessments and employee plans to the master file should improve customer service not only to the taxpayers whose accounts are moved to the master file but also to the remaining smaller number of NMF taxpayers. Movement of these accounts to the master file is a critical first step that, as discussed later, could have implications for other proposed corrective actions. However, based on our past work on the challenges facing IRS in trying to meet its Year 2000 requirements, we believe it was reasonable for IRS to delay this action until after Year 2000 changes are made. Consolidation of all NMF accounts at one service center would improve service to NMF taxpayers, enable IRS to provide more consistent treatment of NMF taxpayers, and facilitate the correct resolution of taxpayer problems by IRS staff. Specifically, with all accounts located at one center, accessibility and identification of accounts would be less of an issue. NMF staff would not have to research databases in up to 10 centers to locate an account. Accounts would less likely be overlooked, and consolidation would eliminate the need for different passwords. 
There would no longer be a need to transfer documents between NMF databases and to delay account updates during the time-consuming transfer process. In discussing the possibility of consolidating all NMF accounts at one service center, an IRS task force said that IRS would need to secure software and a new computer to house the consolidated database at an estimated cost of about $250,000. Also, according to the task force, there would be additional costs for salary, benefits, and training. IRS officials told us that consolidation has been put on hold until after some of the other corrective actions are implemented. Specifically, officials said that they would like to consider consolidation after they are more certain which accounts are going to be moved to the master file. At that time, there will be a better sense of how many NMF accounts will be left. In that regard, even if IRS proceeds with its plans to move accounts from the NMF to the master file, it believes, as do we, that there will continue to be a need for a system, such as the NMF, for the remaining accounts—at least until future systems modernization efforts produce a different form of master file. We agree that a decision about consolidation would best be made after deciding which, if any, accounts will be moved to the master file. At that time, IRS should have a better idea of the number of accounts that will have to be consolidated and a better basis for determining whether consolidation is necessary. For example, if accounts involving split assessments are moved to the master file, as is the current plan, the need for consolidation may be less compelling because, according to IRS, those are the NMF accounts that are most likely to involve transfers between service centers.
If IRS should decide to proceed with consolidation after moving certain accounts to the master file, the cost might be much less than estimated by the task force because existing equipment and staffing may be sufficient to handle the smaller number of NMF accounts. Two corrective actions proposed by an IRS task force called for loading all NMF accounts on IDRS and modifying a command code used by customer service representatives to search the IDRS database. These two actions, in concert, were intended to make it easier for IRS staff to identify and access an NMF account. The first action would eliminate the current NMF access problem by allowing anyone with access to IDRS to research information on all NMF accounts. The second action would cause all of a taxpayer’s accounts on IDRS (including any NMF account) to be reflected when a customer service representative enters the taxpayer’s nine-digit Social Security number. At the time of our review, if an “N” was not entered after the Social Security number, the customer service representative did not receive any prompt identifying the existence of an NMF account. Because of concerns about the amount of resources that would be required to implement these two recommendations, IRS’ Office of Information Services developed an alternative that, according to IRS, was implemented in January 1999. Under that alternative, a specific transaction code (130) is to be generated automatically on the master file when an account is opened on the NMF, certain identifying information from that account, such as the taxpayer’s name and Social Security number, is to be entered into IDRS, and a flashing “N” is to be added to IDRS to denote the existence of an NMF account. Information Services’ alternative also included the establishment of an automated NMF National Account Index. 
As proposed by Information Services, a one-time extract of all open NMF accounts would be used to assemble the NMF National Account Index, and the file would be updated weekly to add new NMF accounts. Thus, the NMF National Account Index would be a central compilation of all NMF accounts. Information Services told us that because of the extensive reprogramming that would be required to modify IDRS to handle all NMF accounts, the establishment of the NMF National Account Index and the use of a specific transaction code to identify an NMF account would provide an effective short-term option to modifying command codes and loading all cases on IDRS. According to Information Services, the two originally proposed corrective actions were intended to alert IRS staff to the existence of NMF accounts and to expedite the research of those accounts. Information Services’ alternative would address both concerns. Specifically, automatic generation of transaction code 130 should better ensure that anyone accessing a master file account is alerted to the existence of a related NMF account. In the past, that transaction code was to be entered manually, which left open the possibility that it would mistakenly not get entered. Also, the flashing “N” should increase visibility of the existence of NMF accounts, and the NMF National Account Index should help expedite research by listing all NMF accounts and their service center location. Although the Executive Officer for Service Center Operations, who is responsible for day-to-day NMF operations, agreed to the above alternative proposed by Information Services, he would eventually like to see all NMF accounts on IDRS. This is primarily because the National Account Index would contain only certain account information, such as the taxpayer’s name and Social Security number, for identification purposes. 
Thus, the National Account Index would not give IRS staff the same research capabilities as would be available if all account information were available on IDRS—similar to what is now available on IDRS for delinquent NMF accounts. We recognize that modifying IDRS to accept all NMF accounts would require extensive reprogramming, and we agree that the option proposed by Information Services would help to alleviate some problems with the NMF. We are concerned, however, that the proposed option would add to an already complex system another stand-alone database that would not have enough information on NMF accounts to help staff quickly resolve taxpayers’ problems. The need for IRS to modify IDRS to accept all NMF accounts may be less critical, however, if IRS proceeds with its plan to significantly reduce the size of the NMF by moving split assessments and employee plans to the master file. The cost of loading the remaining accounts on IDRS after such a move might exceed any potential benefits. In addition to the corrective actions discussed above, IRS developed a plan that called for implementing a number of other corrective actions in January 1999 that would require little or no reprogramming. Those actions included (1) adding a unique toll-free telephone number to NMF notices, (2) enhancing the technical expertise available to IRS staff who are working with NMF accounts, and (3) improving penalty and interest computations. In its February 1998 report, a task force of Fresno and Ogden Service Center staff commented on the need for a unique toll-free telephone number on NMF notices. 
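The tradeoff described above, between a lightweight central index and full account data on IDRS, can be illustrated with a minimal sketch. The data structures, field names, and account data below are hypothetical, invented for illustration; they are not IRS's actual design.

```python
# Hypothetical sketch of the NMF National Account Index concept: a
# central compilation of identifying information only, which tells
# staff where the full account is held but carries no account detail.

national_account_index = {}  # SSN -> (taxpayer name, service center)

def weekly_update(new_accounts):
    """Add newly opened NMF accounts to the central index."""
    for ssn, name, center in new_accounts:
        national_account_index[ssn] = (name, center)

def locate_account(ssn):
    """Return (name, service center) for an NMF account, or None."""
    return national_account_index.get(ssn)

# A one-time extract of open accounts assembles the index; weekly
# updates then add newly opened NMF accounts.
weekly_update([("123-45-6789", "A. Taxpayer", "Atlanta"),
               ("987-65-4321", "B. Taxpayer", "Fresno")])

# The index says where the account lives, but it holds no balance,
# penalty, or transaction detail; that research still requires the
# holding service center's own NMF database.
print(locate_account("123-45-6789"))
```

The sketch makes the limitation concrete: a lookup resolves a taxpayer to a service center location, which expedites research, but any substantive question about the account still requires a second step at that center.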
The task force said that if recipients of an NMF notice called one of the general toll-free numbers that taxpayers were told to call if they had a question, there was a great possibility that they would reach an IRS call site that did not have direct access to the service center where the NMF account was located, thus necessitating another telephone call for the taxpayer or a delay in making contact with the correct NMF site. Additionally, if the taxpayer failed to identify the “N” after the Social Security number, the customer service representative might not search for an NMF account. This could result in a search of only the master file, which could lead to misinformation being given to callers about the status of their accounts. According to IRS officials, IRS started including a unique toll-free telephone number on NMF-related notices in February 1999. Each call to that number is to be routed to a particular service center based on the area code of the incoming call and is to be answered by specially trained customer service representatives. This change should help NMF taxpayers who have questions about their accounts reach someone at IRS with access to and knowledge about NMF accounts. In response to recommendations in one of the internal NMF studies, IRS has taken steps to enhance the technical expertise available to IRS staff who are working with NMF accounts. That enhancement involves the (1) identification of district office staff who will function as NMF coordinators in addition to their normal responsibilities and (2) establishment of a new position (NMF account specialist) in the service centers. The district office NMF coordinators are to provide technical assistance to district staff, disseminate NMF listings and reports for coordination with district officials and service center staff, coordinate responses back to the service center NMF units, and provide continuing education for district staff.
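The area-code routing of the unique toll-free number described above can be sketched as follows. The routing table and fallback center are assumptions made for illustration; the report does not give the actual area-code assignments.

```python
# Hypothetical routing table mapping a caller's area code to the
# service center whose specially trained representatives answer.
# These assignments and the fallback are invented, not IRS's actual
# configuration.
ROUTING = {"404": "Atlanta", "559": "Fresno", "801": "Ogden"}
DEFAULT_CENTER = "Atlanta"  # assumed fallback for unmapped area codes

def route_call(caller_number):
    """Pick the service center for an incoming NMF toll-free call."""
    area_code = caller_number[:3]
    return ROUTING.get(area_code, DEFAULT_CENTER)

print(route_call("404-555-0100"))  # routed to Atlanta
```

The point of the design is that the caller dials one number and the routing logic, not the taxpayer, bears the burden of finding a center with NMF access.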
The service center account specialists are to provide technical assistance on NMF issues and individual accounts and coordinate with the district office coordinator. On the basis of our work at the NMF unit in the Atlanta Service Center and at the Georgia District Office, IRS’ actions, when implemented, should improve IRS employees’ basic understanding of the NMF. The technical assistance to be provided by the service center account specialists is important because we found that while staff in the NMF units understood the NMF assessment process, they were limited in their overall understanding of the NMF. Similarly, the availability of help from district office coordinators should help district office staff who have to work with NMF accounts because, as noted by IRS officials, those staff are exposed to NMF accounts so infrequently that they have difficulty gaining expertise. As of January 1999, according to IRS, it had made programming changes to correct the automated NMF penalty and interest computations. Even with this new programming, however, NMF staff still have to manually compute penalties and interest when the data needed to correctly make the computations are not on the NMF. For example, according to IRS documentation, NMF staff manually compute penalties and interest when (1) the Tax Court or Bankruptcy Court has ordered interest charged at a different rate than normally charged by IRS or (2) an adjustment is made to a taxpayer’s tax liability with no adjustment to penalties and interest. In those situations, according to IRS officials, the NMF has been modified to flag the account so that the system does not attempt to compute penalties and interest. NMF staff are to make the computations and enter the results into the NMF system. 
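The flag-and-skip behavior described above, where the NMF system bypasses automatic computation on flagged accounts so that staff can enter manually computed amounts, can be sketched as follows. The daily-compounding formula and the dollar figures are simplified illustrations, not IRS's actual computation rules.

```python
# Sketch of the flag-and-skip behavior: when an account is flagged
# (for example, a court-ordered interest rate), the system does not
# attempt the computation, and a staff-entered amount is used instead.

def interest_due(balance, annual_rate, days,
                 manual_flag=False, manual_amount=None):
    """Return interest owed, deferring to staff on flagged accounts."""
    if manual_flag:
        return manual_amount  # manually computed figure, entered by staff
    # Simplified daily-compounding illustration.
    daily_rate = annual_rate / 365
    return balance * ((1 + daily_rate) ** days - 1)

auto = interest_due(10_000, 0.08, 90)  # system-computed (about 199.20)
court = interest_due(10_000, 0.08, 90,
                     manual_flag=True, manual_amount=150.0)
print(round(auto, 2), court)
```

The sketch shows why accuracy of the flag matters: a flagged account produces whatever figure staff enter, so an unflagged account with court-ordered terms would silently get the wrong, system-computed amount.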
In an attempt to alleviate the need for manual computation and reduce the risk of error, the National Office, in August 1998, circulated a commercial off-the-shelf software package for use in computing penalties and interest when the NMF system could not compute them. However, NMF staff told us, and the National Office confirmed, that the service centers stopped using the software because it did not compute penalties and interest accurately. As a result, there remain a number of situations in which penalties and interest must be manually computed and entered into the NMF system. The various corrective actions discussed earlier and listed in appendix II seem to address all of the major problems identified with the NMF and, if effectively implemented, should result in a dramatic drop in the number of NMF accounts. There is nothing in IRS’ plan, however, that seeks to prevent the NMF from growing again and eventually causing other problems for other taxpayers. For example, there is nothing in the plan about IRS’ (1) monitoring the NMF to identify any problems that arise in the future and (2) ensuring that timely action is taken to address any such problems. After the Senate hearings, IRS made a concerted effort to identify NMF problems and potential solutions. IRS identified major deficiencies with the NMF and compiled a list of corrective actions that, if effectively implemented, should go a long way toward correcting those deficiencies. Understandably, those actions that are expected to require substantial computer programming (including perhaps the most significant action— moving numerous NMF accounts to the master file) have been deferred until after higher priority programming work is done. In the meantime, IRS has taken some significant actions, such as making NMF accounts more easily identifiable and researchable and putting a unique toll-free telephone number on NMF notices. 
Although it is too soon to assess their effectiveness, those actions should help IRS provide better service to taxpayers with accounts on the NMF. IRS has populated the NMF with accounts, such as those involving split assessments, that could have been kept off the NMF or moved from the NMF if IRS had made the necessary programming changes to the master file. While we recognize that the number of NMF accounts is small in relation to the total number of accounts IRS has to process and maintain, that is little comfort to taxpayers whose accounts happen to be among that small number. With that in mind, we believe that IRS’ action plan lacks a key component. We saw nothing in the plan about (1) monitoring the NMF to identify any problems that arise in the future and (2) ensuring that timely action is taken to address any such problems. We recommend that the Commissioner of Internal Revenue direct appropriate officials to institute procedures to (1) monitor future activity in the NMF to identify any problems that arise in the future and (2) ensure that timely action is taken to address any such problems. We obtained written comments on a draft of this report in a March 25, 1999, letter from the Commissioner of Internal Revenue (see app. III). IRS said that it agreed with the findings and recommendation in the draft report. IRS also emphasized that it had taken immediate action to address the problems encountered by taxpayers with accounts on the NMF and the resulting issues that surfaced during the Senate Finance Committee hearings in September 1997. 
IRS noted, however, that “the ultimate solution is the fundamental replacement of the entire master file system and until such time as this occurs, we will continue to be at risk for additional deficiencies to be identified.” IRS pointed out that this ultimate solution is scheduled to be implemented over the next several years and that, until then, it “will continue to monitor the NMF process to identify any problems and take immediate steps to mitigate them.” With respect to our recommendation, IRS said that it had included provisions to monitor future activity on the NMF in its January 1999 revision of that part of the Internal Revenue Manual dealing with the NMF. According to IRS, these procedures, in addition to increased monitoring by National Office staff, will ensure that timely action is taken once a problem is identified. However, the procedures referred to by IRS are directed at improving controls over individual accounts on the NMF. The intent of our recommendation was more global. We believe that IRS needs to institute some mechanism that will enable it to proactively identify and correct situations, such as the existence of a large number of accounts on the NMF that could and should be moved to the master file. The presence of such a mechanism, for example, might have triggered action by IRS to do something about the large number of split assessment accounts in the NMF before being prodded in that direction by congressional hearings. IRS also provided updated information on certain issues discussed in the report as well as updates on the status of its various corrective actions. We revised the body of this report and appendix II to reflect those updates. We are sending copies of this report to Representative Charles B. Rangel, the Committee’s Ranking Minority Member; Representative Amo Houghton, Chairman, and Representative William J. Coyne, Ranking Minority Member, of the Committee’s Subcommittee on Oversight; and Senator William V. 
Roth, Jr., Chairman, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance. We are also sending copies to The Honorable Robert E. Rubin, Secretary of the Treasury; The Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and The Honorable Jacob Lew, Director, Office of Management and Budget. Copies will be made available to others on request. Major contributors to this report are listed in appendix IV. Please contact me on (202) 512-9110 if you have any questions.

Appendix II: Status of recommended corrective actions (as of February 17, 1999):
- Deferred until 2001.
- To be considered after IRS is certain which accounts will be moved to the master file.
- Actions 3 and 4 have been combined, and an alternative was developed that was implemented in January 1999.
- The alternative to this action is the development of the NMF National Account Index, which was developed and implemented in January 1999.
- Implemented February 8, 1999.
- Positions created but not filled as of February 17, 1999.
- Programming to correct penalty and interest computations implemented in January 1999; manual computations still required in some cases.
- Completed in January 1999.
- Completed in January 1999.
- Rather than establish a separate district coordinator position, the intended duties of that position are to be assigned as collateral duties to existing staff.
- Implemented January 1999. Oversight group established.
- Completed in January 1999.
- Completed in January 1999.

Appendix IV: Major Contributors to This Report
Catherine H. Myrick, Evaluator-in-Charge
John M. Gates, Senior Evaluator
Carrie M. Watkins, Evaluator

Pursuant to a congressional request, GAO provided information on the Internal Revenue Service's (IRS) non-master file (NMF), which IRS established to process taxpayer accounts that cannot be processed on its master file, focusing on: (1) the basic differences between the master file and the NMF; (2) known problems that IRS and taxpayers have been experiencing with the NMF, including the sources of such problems; and (3) recent IRS proposals and actions intended to address these problems. GAO noted that: (1) IRS uses the NMF for accounts that either the master file is not configured to process or that must be processed more quickly than can be done through the master file; (2) compared to the master file, the NMF is newer and smaller (about 122,000 NMF accounts scattered among 10 decentralized databases vs. millions of master file accounts in one large centralized system); (3) the NMF is more flexible than the master file, and IRS' procedures for entering data into and processing accounts on the NMF are more streamlined and thus quicker than those for the master file; (4) although the NMF enables IRS to process certain accounts that cannot be handled by the master file, the NMF also had limitations, at the time of GAO's review, that caused problems for IRS staff and taxpayers; (5) GAO's review and IRS' studies revealed that the most significant limitations were: (a) the lack of a central repository of all NMF accounts; (b) the absence of any meaningful link to the automated system that IRS staff use to obtain information about taxpayers' accounts; and (c) the fact that the NMF processing procedures were predominately manual; (6) these limitations made it difficult for IRS staff to identify and access accounts and could cause delays in processing account information in some situations; (7) these access problems and processing delays could cause taxpayers whose accounts were processed on the NMF to receive incorrect information and experience poor customer service; (8) after the September 1997 Senate Finance Committee hearings, IRS undertook several reviews of the NMF and developed a plan that included numerous proposed corrective actions; (9) implementation of some significant proposed actions has been deferred until at least 2001 because those actions involve extensive computer reprogramming that could interfere with IRS' efforts to make sure its computer systems are year 2000 compliant; (10) recognizing the need to make improvements in the near term, however, IRS recently implemented other actions that required fewer resources and little or no reprogramming; (11) if effectively implemented, IRS' near-term actions, in conjunction with the actions that have been deferred, should go a long way toward correcting identified NMF problems; (12) however, IRS' action plan lacks a key component; and (13) there is nothing in the plan about IRS': (a) monitoring the NMF to identify any problems that arise in the future; and (b) ensuring that timely action is taken to address any such problems.
A VTS system is one of several methods for improving navigational safety and protecting the marine environment. It helps determine the presence of vessels in and around ports, and it provides information to vessels on such matters as traffic, tides, weather conditions, and port emergencies. Other safety measures include training vessel operators, improving navigational aids (such as buoys and markers), dredging wider and deeper channels, and inspecting vessels. Under the authority of the Ports and Waterways Safety Act of 1972, as amended, the Coast Guard operates VTS systems in eight ports around the United States. Operations and maintenance costs for these systems, which totaled about $19 million in fiscal year 1995, are borne by the Coast Guard and are not passed on to the ports or the shipping industry. Two other ports, Los Angeles/Long Beach and Philadelphia/Delaware Bay, have radar-based systems funded by their users. These systems are sometimes called “VTS-like” systems to distinguish them from the Coast Guard’s systems, but for consistency, we refer to them as VTS systems in this report. In 1995, operations and maintenance costs were about $1.4 million for the Los Angeles/Long Beach system and about $345,000 for the Philadelphia/Delaware Bay system. Study of VTS systems was prompted by the Oil Pollution Act of 1990 (P.L. 101-380), passed after the 1989 Exxon Valdez oil spill and subsequent spills in the coastal waters of Rhode Island, the Delaware River, and the Houston ship channel. The act directed the Secretary of Transportation to prioritize the need for a new, expanded, or improved VTS system at U.S. ports and channels.
Under criteria for this evaluation, the act specified that in assessing the need for a VTS system, the Secretary consider (1) the nature, volume, and frequency of vessel traffic; (2) the risk of collisions, spills, and damages associated with that traffic; (3) the impact of installing, expanding, or improving a VTS system; and (4) all other relevant costs and data. The resulting report, called the Port Needs Study, was submitted to the Congress in March 1992. Although the Coast Guard’s VTS 2000 proposal is the result of several years of study, the development of VTS 2000 itself is in its early phases. The Coast Guard is just entering those phases of its planning schedule in which the Coast Guard will (1) finalize the list of ports where it believes a VTS 2000 system should be built and (2) determine the specific mix and number of VTS 2000 components for these ports. At six of the eight ports we reviewed, most key stakeholders we interviewed said they had little or no involvement in VTS 2000. The following is a brief summary of what has occurred to date. The Port Needs Study identified two sets of locations as possible candidates for a VTS system. Both sets were identified on the basis of an estimate of the net benefits of installing a new VTS system at each location. The first set, which included seven locations, was recommended for initial consideration. For these locations, the study’s methodology showed that the benefit of a new or improved VTS system would consistently be higher than costs even when different assumptions were considered, such as decreasing benefit estimates by 50 percent or increasing cost estimates by 50 percent. The second set, comprising eight other locations, was identified as the next best candidate for consideration. These locations were not as consistent in showing positive net benefits when the methodological assumptions were changed. Table 1 shows the 15 locations and the estimated net benefits calculated for each one. 
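The study's robustness test, checking whether a location's net benefits remain positive both when benefit estimates are cut by 50 percent and when cost estimates are raised by 50 percent, amounts to a simple screening rule. The sketch below uses invented dollar figures, not the actual Port Needs Study estimates.

```python
def robust_net_benefit(benefit, cost):
    """True if net benefit stays positive under the study's stress
    assumptions: benefits reduced 50 percent, or costs raised 50
    percent."""
    return benefit * 0.5 - cost > 0 and benefit - cost * 1.5 > 0

# Illustrative figures in $ millions (not actual study estimates).
# A location passing both stress tests would fall in the first set
# recommended for initial consideration; one that fails would be a
# second-set candidate.
print(robust_net_benefit(benefit=120, cost=40))  # passes both tests
print(robust_net_benefit(benefit=50, cost=40))   # fails when stressed
```

This captures the distinction the study drew: first-set locations showed positive net benefits consistently across assumptions, while second-set locations did not.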
In addition to the 15 ports in table 1, the Coast Guard added San Francisco, California, and Valdez, Alaska, because both locations currently have Coast Guard-operated VTS systems and because the Coast Guard wants to upgrade the equipment at these ports with VTS 2000 technology. Many of the ports have existing VTS systems or other nonradar, radio-based information systems to assist vessel operators, and when the estimated benefits of these systems are taken into account, the marginal net benefits of a new system decrease substantially in some instances. The study’s data indicate that over the first 15 years after a switch to a new system, there may be little marginal net benefit in making the conversion at any of the ports with existing radar-based VTS systems. The five Coast Guard-operated systems have either recently been upgraded or enhanced or are scheduled to receive upgrades in the near future regardless of any decision on VTS 2000. These upgrades or enhancements will expand VTS coverage and cost about $39 million for improved software and equipment. According to Coast Guard officials, these ports are included in VTS 2000 so that existing VTS equipment can be replaced when it becomes obsolete. Officials indicated that they will address the timing and affordability of this approach in fiscal year 2000. The Coast Guard is conducting follow-on studies at a number of the locations to verify whether the benefits of a new VTS system outweigh its costs. So far, five such studies have been completed for Boston, Corpus Christi, Mobile/Pascagoula, Philadelphia/Delaware Bay, and Tampa. The follow-on study for Mobile/Pascagoula was consistent with the results of the Port Needs Study. However, for Boston, the marginal net benefits no longer outweigh the costs, and for Corpus Christi, Philadelphia/Delaware Bay, and Tampa, the marginal net benefits are higher. (See app. II for more information on the Port Needs Study and follow-on studies.) 
The Coast Guard developed an initial proposal in fiscal year 1993 to address the Port Needs Study. The Coast Guard said that the expanded or enhanced use of VTS systems would reduce the risk of maritime accidents and support other Coast Guard activities, including national defense and law enforcement. Through greater automation of vessel traffic data under VTS 2000, the Coast Guard also expected to more efficiently carry out its waterway management responsibilities. In fiscal year 2000, the Coast Guard will decide how many ports will be included under VTS 2000. In all, 17 ports are under consideration. Seven of the ports have existing radar-based VTS systems—two operated privately (Los Angeles/Long Beach and Philadelphia/Delaware Bay) and the remaining five operated by the Coast Guard. In addition, three ports have privately funded radio-based information systems (Baltimore, Corpus Christi, and Port Arthur/Lake Charles). The estimated cost of VTS 2000—$260 million to $310 million—is based on the cost of (1) developing the system and (2) installing it in all 17 locations. The system’s development—including activities such as developing the software, designing the system, testing, contracting, constructing the land-based support facility, and developing the system engineering of VTS 2000—is being pursued in four phases. The estimated cost of the initial development phase is $69 million, including costs incurred since the program’s inception. This phase is scheduled for completion in fiscal year 1999 and, according to Coast Guard officials, will result in operational capability similar to that of the upgraded VTS systems currently operated by the Coast Guard. The development of all phases will cost an estimated $145 million if the systems are installed in all 17 ports. 
If all phases are implemented, they are scheduled for completion in fiscal year 2004 and will include activities such as developing software that interfaces with external databases and establishing a facility to test and diagnose software to support a national VTS system (land-based support facility). According to Coast Guard officials, a decision on whether to proceed with all four development phases depends, in part, on the number of sites that receive VTS 2000. The additional cost of equipment and installation at specific ports ranges from about $5 million to $30 million per port area. The Coast Guard, which is in the early phase of the acquisition process, plans to select a single systems integration contractor for the project by the first quarter of fiscal year 1997. The contractor will develop computer software, procure hardware (radar, closed circuit television, and radios), integrate these components of the system, and determine what type of VTS 2000 equipment will be installed at each port. The Coast Guard estimates that the contractor will be needed through 2006 if systems are installed in all 17 locations. In the next few years, as it moves to acquire and install VTS 2000 systems at specific locations, the Coast Guard plans to increase the size of its funding requests for the program. The Coast Guard has received about $25 million to develop VTS 2000 through fiscal year 1996. For fiscal year 1997, the Coast Guard plans to request $6 million. For fiscal years 1998-2004, the Coast Guard estimates that it will need about $30 million a year to support both the development and installation of VTS 2000 systems in ports. The contractor for VTS 2000 is scheduled to complete the systems’ development in 2004 as it upgrades sensors, develops software, and establishes interface capability with up to 10 different databases. Starting in 1998, the Coast Guard plans to install the first systems in New Orleans and Los Angeles/Long Beach. 
Starting in 2000, it plans to install systems in Port Arthur/Lake Charles, Houston/Galveston, and Corpus Christi. After systems are installed at the initial sites, the Coast Guard will enhance and upgrade the systems as necessary. In June 1995, several federal agencies, including the Coast Guard, commissioned a study by the Marine Board of the National Research Council to assess the implementation of advanced information systems for maritime commerce. Among other things, the Marine Board will address the role of the public and private sectors in developing and operating VTS systems and will examine user fees and trust funds as possible funding sources. The Marine Board expects to issue an interim report in June 1996, and the Coast Guard plans to use the report in decisions on the VTS 2000 project. Given that the Coast Guard is not yet at the point of determining what VTS 2000 equipment will be installed at each port, it is perhaps not surprising that many key stakeholders we interviewed said they had little or no involvement in VTS 2000. At six of the ports we reviewed, most stakeholders we interviewed said they had little or no involvement in the VTS 2000 system at their port in matters such as the system’s needs, design, and cost. Coast Guard officials said that as more specific plans emerge regarding which ports will be included under VTS 2000, they will work more extensively with stakeholders to determine what VTS 2000 components to install at each location. For example, they stated that VTS 2000 systems can be adapted to the needs of stakeholders in each port. Notwithstanding this lack of specific involvement in VTS 2000, most stakeholders we interviewed believed they knew enough to provide their opinions about the system. Their level of knowledge was based, in part, on briefings about VTS 2000 conducted by the Coast Guard in six of the eight ports. 
At three of the locations (Philadelphia/Delaware Bay, Mobile/Pascagoula and Tampa), follow-on studies included interview sessions with port and industry officials on VTS-related issues. San Francisco was the only port among the eight we reviewed where a majority of the stakeholders interviewed did not think they knew enough about the system to provide an opinion about whether it was needed at their location. Widespread support was lacking for VTS 2000 among the shipping industry, pilots’ association, and port authority stakeholders we interviewed. The opinions about the need for a VTS 2000 system were predominantly negative at five ports, were about evenly split at two others, and were predominantly uncertain at one. (See table 2.) Many who opposed VTS 2000 perceived the proposed system as being more expensive than needed. The level of support for VTS 2000 was even lower when key stakeholders were asked if they would be willing to pay for the system, perhaps through fees levied on vessels. At six of the eight ports, a clear majority of stakeholders was not willing to fund VTS 2000. At the remaining two—Houston and San Francisco—support was mixed among the stakeholders we interviewed. However, among those who supported VTS 2000, many said their support was conditional. For example, some stakeholders in San Francisco said that they would be willing to fund the system if the alternative were to have no VTS system at all. One concern expressed by some stakeholders about funding a system was that a user fee could affect the competitiveness of their port. Many port and industry stakeholders commented that a user fee could cause some vessel owners to divert cargo to other ports. Other stakeholders indicated that a fee would probably not precipitate such a decision if the amount were reasonable. Although most of the stakeholders we interviewed voiced little support for VTS 2000, they did express stronger support for a more limited form of VTS at most of the eight ports. 
(See table 3.) Support for some form of VTS was generally present at six ports, mixed at one, and completely absent at one (Mobile/Pascagoula). Opinions about paying for such a system were generally supportive at five ports (two were already doing so), mixed at two, and negative at one. At the four ports with existing VTS systems (Houston/Galveston, Los Angeles/Long Beach, Philadelphia/Delaware Bay, and San Francisco), interviewed stakeholders thought the systems were important to vessel safety. At Los Angeles and Philadelphia, where privately funded systems are in place, most stakeholders said they regarded the existing systems as sufficient. In a January 1996 memo, the Commander of the local Coast Guard district stated that the Los Angeles system is a highly professional waterway management tool effectively meeting the needs of the port and the Coast Guard. He noted that in broad terms, the Los Angeles system is entirely consistent with the vast majority of technical specifications identified in VTS 2000 operational documents; he favors admitting the system into the Coast Guard’s national VTS network. At Houston/Galveston and San Francisco, where the Coast Guard’s VTS systems are in place, stakeholders were generally pleased with the safety and service information provided by the current system but had concerns about the cost of a VTS 2000 system. At two of the four ports where no form of VTS currently exists (New Orleans and Tampa), most of the stakeholders said some form of VTS, which they perceived to be less expensive than VTS 2000, was needed. At Tampa, for example, many stakeholders believed that a radar-based system would not be the most cost-effective alternative, and some preferred a system based on satellite technology (called a dependent surveillance system) that allows operators to determine the position of their vessel. 
At New Orleans, proposals from stakeholders included setting up manned watchtowers to monitor traffic in key areas of the Mississippi River. At Port Arthur, views were about evenly mixed as to whether a more limited VTS system was needed. Some stakeholders thought that VTS would be valuable in certain areas, but not in the entire Port Arthur/Lake Charles area identified in the Port Needs Study. Of the four ports, Mobile/Pascagoula was the only one where stakeholders thought no VTS system was needed. Most of the stakeholders said they did not believe a VTS system was needed because of the low volume of deep-draft traffic in the Mobile area. As a result, these stakeholders generally regarded the current procedures as adequate. These procedures include such measures as permitting only one-way traffic in certain areas and maintaining communications with other vessel operators in the region. As table 3 showed, views on funding such a system were mixed. In general, because stakeholders we interviewed perceived that other VTS alternatives could be less costly than VTS 2000, they were somewhat more disposed to consider paying for a VTS alternative. However, others were not willing to pay for a system. At New Orleans, for example, some stakeholders objected to funding a service that would benefit users passing through the port to other destinations because these stakeholders believed the users might be difficult to identify and charge for the service. As with VTS 2000, some stakeholders were concerned about whether charging user fees would affect the competitiveness of their port. Most stakeholders at most of the ports we visited raised concerns that could affect the establishment of privately funded VTS systems. These concerns include the private sector’s ability to fund the initial start-up costs of such a system, the private sector’s exposure to liability, and the Coast Guard’s role in planning and overseeing a privately funded system. 
Most key stakeholders we interviewed at three of the six ports that do not have a privately funded VTS system were concerned that if local VTS systems are to be funded by the user community rather than through tax dollars, lack of adequate financing may pose a barrier. The start-up costs depend on the size and complexity of the system, but buying radar equipment, computer hardware and software, and operations space could cost $1 million or more for a system. Financing the systems at Los Angeles/Long Beach and Philadelphia/Delaware Bay posed similar concerns, and both projects received federal or state financial assistance. The state of California provided a low-interest loan of $464,550 to help pay capital costs, and the ports of Los Angeles and Long Beach each provided $250,000 in grants for VTS equipment. The Marine Exchange of Los Angeles/Long Beach, which operates the system, uses Coast Guard property at no cost. For operators of the Philadelphia/Delaware Bay system, the Commonwealth of Pennsylvania provided a $100,000 grant to help upgrade radar equipment in 1986, and Pennsylvania and Delaware authorized pilotage fee increases in 1995 to pay for further upgrades costing more than $1 million. To provide you with additional information on this issue, we contacted representatives from five foreign locations with VTS systems that charge port fees or user fees to pay for VTS operations. At four of the five locations, the central government paid for all or part of the cost of developing and installing the VTS system. For example, the Port of Rotterdam’s VTS capital costs of $180 million were paid both by the central government (66 percent) and by the local government (34 percent). At the Port of Marseilles, France, capital costs totaled about $3.5 million, of which the port paid 66 percent and the central government paid the remaining 34 percent. 
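The two foreign cost-sharing examples above are simple percentage splits of total capital cost. As an illustrative check (the dollar figures and percentages come from the report; the helper function and its name are our own), the splits work out as follows:

```python
# Illustrative check of the capital-cost splits cited for Rotterdam and
# Marseilles. Figures are in millions of dollars; the percentages and totals
# come from the report, but the computation itself is only a sketch.
def split_capital_cost(total, major_share):
    """Return (major, minor) shares of a capital cost, given the larger party's fraction."""
    major = total * major_share
    return round(major, 2), round(total - major, 2)

# Rotterdam: $180 million total; central government 66%, local government 34%.
rotterdam_central, rotterdam_local = split_capital_cost(180.0, 0.66)
# about $118.8 million central, $61.2 million local

# Marseilles: about $3.5 million total; port 66%, central government 34%.
marseilles_port, marseilles_central = split_capital_cost(3.5, 0.66)
# about $2.31 million port, $1.19 million central

print(rotterdam_central, rotterdam_local)
print(marseilles_port, marseilles_central)
```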
The central governments of these two countries agreed to pay the development and installation costs as part of their oversight role and their recognition of the need for VTS systems in their country. The Port of London was the only port where capital costs were paid entirely by the port authority. Most funding for this system comes from harbor fees. Liability protection for private operators of a VTS system was a widespread concern among those we interviewed. Coast Guard and privately funded VTS systems generally supply only advisory information, such as vessel traffic or environmental conditions; control of the vessel remains with the master of the vessel. However, most port and industry stakeholders we interviewed at the six ports that do not have a privately funded system were concerned that private VTS operators would be liable if inaccurate information given by the VTS operations center led to an accident. Privately funded VTS systems in both Los Angeles/Long Beach and Philadelphia/Delaware Bay receive liability protection under state laws except in cases of intentional misconduct or gross negligence. At the foreign locations we contacted, officials said that exposure to liability from operating VTS systems had not been raised as an issue because the master or captain of the vessel has ultimate responsibility for the safe navigation of the vessel. Directives from the VTS operator generally come only when a mechanical failure in the ship occurs or when a situation requires immediate safe traffic management. However, all ports noted that since the area of VTS operator liability has yet to be tested in a court of law, a precedent has not yet been set. At one port, an official noted that the port authority carries third-party insurance ($75 million per incident) as protection from accidents occurring under VTS guidance. 
At locations such as Tampa and San Francisco, where the possibility of operating privately funded systems has been discussed, stakeholders we interviewed believe that securing liability protection is a key issue that must be resolved before they would move forward to establish a VTS system. The Coast Guard’s legal counsel has said that the Coast Guard’s exposure to liability in jointly operated systems does not differ appreciably from that in other, more formally established, Coast Guard-operated vessel traffic services. If there is no Coast Guard involvement with the privately funded VTS, no federal liability would stem from the actions of Coast Guard personnel. The Ports and Waterways Safety Act of 1972, as amended, provided that the Coast Guard may “construct, operate, maintain, improve or expand” VTS systems; however, the act does not address what role, if any, the Coast Guard should play in privately funded systems. At seven of the eight ports we reviewed, most stakeholders said the Coast Guard should play a role with the private sector in developing privately funded VTS systems, including establishing operating standards. Among the reasons for the Coast Guard to be involved, the stakeholders cited the Coast Guard’s regulatory authority to require mandatory participation, the need for consistent and unbiased operations, and the Coast Guard’s expertise in and experience with other VTS systems. For example, the consensus of stakeholders in Tampa was that industry, the state, and the Coast Guard should jointly determine the need for a system. A report produced by the state of Florida states that “any interim [VTS] system should be established in conjunction with the Coast Guard since a system without Coast Guard support will have no real authority and may not conform with other U.S. 
Coast Guard systems.” While support for the Coast Guard’s involvement in privately funded systems was widespread, opinions were somewhat divided over what form this involvement should take. The two ports that currently have privately funded systems tended to differ in how they saw the Coast Guard’s role. At Los Angeles/Long Beach, where the Coast Guard provides personnel for helping to run the system, the executive director of the marine exchange said this arrangement gives the system greater viability in performing its operations. Local Coast Guard officials said they also benefit from the system, since it can assist them with other duties, such as waterway management, search and rescue operations, and law enforcement activities. Private operators of the Philadelphia/Delaware Bay system believed that the Coast Guard had a role in private systems but in a more limited capacity. For example, with the Philadelphia/Delaware Bay VTS, Coast Guard personnel do not participate as VTS operators, but frequent communication on issues of mutual concern occurs between the private operators and the Coast Guard’s Marine Safety Office. For example, the VTS operators would notify the Coast Guard if a navigation buoy were reported to them as being missing or in the wrong location. However, operators of the system also said that the Coast Guard should have the authority to approve and set the standards for operating a system. At the foreign locations we contacted, the central government played a role in most of the locally or privately operated systems. At three of the four locations where the local government or port authority operates the system, the central government established the operating regulations. Officials said that the role of the central government was to provide regulatory control and oversight to ensure standard procedures for operating the VTS systems in their country. 
“Statutory and/or regulatory changes are needed to support the development of public-private partnerships for VTS systems. The Coast Guard would need either broad authority to accept reimbursement for personnel it provided, or the authority to approve or sanction non-federal VTSs. Formal certification of VTS-like facilities and development of standard operating procedures would also make sense. They are both good business practices and would enhance the safety and quality of VTS operations.” Difficult choices need to be made about installing and improving VTS systems in the nation’s ports. Important questions about the VTS program currently remain unanswered, including how many ports need the system, how much it will cost, and whether other cost-effective solutions are available. At the same time, there is an acknowledged need to improve waterway safety. The available information indicates that several ports under consideration are likely to realize substantial benefits from the installation of VTS systems, and at many ports we visited, stakeholders appeared interested in making improvements—and, in some locations, perhaps paying for them—if the economic soundness of such improvements can be demonstrated. An immediate and essential next step is for the Coast Guard to more aggressively open lines of communication with key stakeholders at ports under consideration for VTS 2000. This communication is essential in either securing support for VTS 2000 or in developing possible alternatives. Such alternatives could include Coast Guard-operated systems or upgrades that are less extensive than VTS 2000 systems or systems built and operated by the private sector. To encourage more private-sector participation in VTS operations, however, several other issues would need to be resolved, including ways to provide financial assistance, liability protection, and an overseer role for the Coast Guard. 
We recommend that the Secretary of Transportation direct the Commandant of the Coast Guard to take the following steps regarding the VTS 2000 program: To help ensure that the user community has adequate opportunity to provide its views, interact more closely with key stakeholders before making a final decision on the number of ports that will receive VTS 2000 systems. This interaction could be achieved by discussing the need for the system in each location, allowing local officials to participate in designing the system’s configuration, or discussing other waterway safety measures that may obviate the need for a VTS 2000 system in their port. Discussions should also include the level of support that exists for privately funded systems and factors (such as financial assistance and liability indemnification) needed to facilitate their establishment. The Coast Guard should report to the Congress on the potential for privatization and the actions needed to develop privately funded systems. Given (1) the high development costs for the program (estimated at up to $145 million) and (2) the large number of proposed sites that show relatively low net benefits from acquiring new VTS 2000 systems, determine whether the safety benefits of VTS 2000 can be achieved more inexpensively by installing other VTS systems, perhaps patterned after existing, recently upgraded Coast Guard systems. To ensure that the operation of privately funded systems is consistent with the Coast Guard’s responsibility for marine safety and the marine environment, determine, with input from industry and other stakeholders, the Coast Guard’s appropriate role in overseeing privately funded systems and seek authorization from the Congress to implement this role. We provided a draft of this report to officials from the Department of Transportation and the Coast Guard for their review and comment. 
We discussed the report with these officials, including the Coast Guard’s VTS 2000 Project Manager, Office of Acquisition, and the Chief of the Vessel Traffic Management Division, who generally agreed with the report’s findings and said they would consider the report’s recommendations. They provided comments that clarified the cost of developing VTS 2000, which we have incorporated into the report. We performed our work from August 1995 through March 1996 in accordance with generally accepted government auditing standards. A detailed description of our scope and methodology appears in appendix III. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the Secretary of Transportation; the Commandant of the Coast Guard; and the Director, Office of Management and Budget. We will make copies available to others on request. Please contact me at (202) 512-2834 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. Described below is information on the type of vessel traffic, the navigational difficulty, and the current vessel traffic service (VTS) system at each port we visited. For ports with Coast Guard-operated systems, we also supply information on the upgraded or enhanced systems. Galveston Bay marks the entrance from the Gulf of Mexico that leads to ports such as Houston, Galveston, and Texas City. This large, irregularly shaped, shallow body of water is about 30 miles long and 17 miles wide at its widest part. Because the bay is generally only 7 to 9 feet deep, deeper-draft vessels must use a 400-foot-wide, 40-foot-deep dredged channel to reach their inland port destinations. 
Vessels destined for the Port of Houston travel a total of 53 miles up the bay and ship channel to reach their destination, while Galveston- and Texas City-bound vessels transit only 11 miles and 16 miles, respectively. Other factors that affect navigation in this region include fog conditions and tidal changes, which can be exacerbated by wind conditions. The volume and type of traffic transiting this region add to the navigation challenges noted above. The Houston/Galveston Bay area ranks third among U.S. ports for its handling of crude oil and second for its handling of other petroleum products. This area is one of the busiest ports in the U.S. as well. For example, according to a Coast Guard official, over 17,000 deep-draft and 97,000 barge transits operated under VTS Houston in 1994. Under the authority of the Ports and Waterways Safety Act, the U.S. Coast Guard established a VTS system for the Houston/Galveston area in 1975. The Coast Guard staffs the VTS system with at least one supervisor and four vessel traffic controllers for each watch 24 hours a day, 7 days a week. The Coast Guard’s operating costs for the VTS system were about $3.2 million for 1995. Participation in the VTS system is mandatory for all power-driven vessels over 131 feet long, vessels greater than 26 feet long engaged in towing, and vessels certified to carry 50 or more passengers. On average, about 340 vessels use the Coast Guard’s VTS services on a daily basis. In 1995, the Coast Guard completed a $700,000 enhancement that provided the Houston VTS system with one additional radar site. According to a Coast Guard official, this addition filled a gap in the VTS system’s area coverage that had previously affected the Coast Guard’s ability to monitor vessel traffic in the upper Galveston Bay/Redfish Bar area. The Coast Guard also plans to develop a VTS 2000 system for the Houston/Galveston area by 2000. 
The Coast Guard’s estimated costs for a VTS 2000 system in Houston/Galveston include about $8.8 million in acquisition, construction, and improvement costs and about $3 million in annual operating costs. The ports of Los Angeles and Long Beach are located within San Pedro Bay, a body of water separated from the open sea by a 7-mile-long breakwater. After entering the bay, maritime traffic travels to one of the many deep-water berths located in this 15,000-acre, man-made harbor. According to Coast Guard officials, despite the relatively high marine traffic volume in the harbor, the area is not considered difficult to navigate, as it is relatively free of navigation hazards and weather problems, except for occasional fog. Together, the ports of Los Angeles and Long Beach are responsible for the highest container tonnage of any port in the nation. In fiscal year 1994, these ports received 7,933 commercial vessels, transporting more than 103 million tons of cargo, including automobiles, petroleum products, and other bulk products. In addition, the Port of Los Angeles supports a cruise ship industry. Since early 1994, the Marine Exchange of Los Angeles and Long Beach has been operating a vessel traffic information service (VTIS) system. This system, initially established as an interim measure until the Coast Guard could build its own VTS, was developed with financial assistance from the ports of Los Angeles and Long Beach and a loan from the state of California. The geographic area covered by the system extends out to 20 miles offshore. On the basis of an agreement with the harbor pilots, VTIS does not advise vessels within the breakwater, although it has that capability. State law requires vessels of a certain size, such as ships over 300 gross tons, to participate in the system. The annual operating costs of VTIS, currently about $1.4 million, are covered by user fees levied on vessels using the system’s services. 
Fees currently range from $180 to $340 per entry into the VTIS area, depending on the size of the vessel. The Coast Guard has played an active role in the Los Angeles/Long Beach VTIS since its inception. Initially developed under the Coast Guard’s guidance, the system operates under many of the same rules and procedures that the Coast Guard uses at its own VTS sites. The system uses Coast Guard watchstanders, who, along with Marine Exchange personnel, monitor traffic and provide mariners with information. The state of California reimburses the Coast Guard for the use of its personnel. VTIS also provides the Coast Guard with valuable assistance during its search and rescue efforts and law enforcement actions, and VTIS disseminates information on Captain of the Port Orders. The Coast Guard currently plans to build a VTS 2000 system that would be fully operational by 1998 in the Los Angeles/Long Beach area. The Coast Guard estimates that acquisition, construction, and improvement costs will be $4.9 million and that annual operating costs will be about $1.7 million. Mobile, Alabama, is about 28 miles inland from the Gulf of Mexico. Deep-draft vessels bound for Mobile from the Gulf use a channel that is at least 400 feet wide for their transit up Mobile Bay. This shipping channel, which runs north and south between the Gulf and Mobile, is dredged to about a 40-foot depth, while the remainder of the bay is generally only 7 to 12 feet deep. Pascagoula, Mississippi, which lies about 24 miles west of Mobile, is also an inland port that requires deep-draft vessels to transit up a narrow channel to reach its harbor area. However, in this location, the transit is only about 10 miles from the Gulf of Mexico. Navigational challenges in this area (in addition to the narrow channel) are presented by two main factors: weather conditions and crossing marine traffic in certain locations. Relatively frequent and strong weather fronts and fog are typical in this region. 
Frontal systems occur about 20 times per year and are usually accompanied by heavy rain and strong winds. Fog is most problematic in the winter and spring, and visibilities can fall below one-half mile 4 to 8 days per month from November through April. Crossing marine traffic presents a navigational challenge in two locations where the Intracoastal Waterway (a major shipping channel for shallow-draft vessels) crosses the main ship channels leading to Mobile and Pascagoula. Because of the large volume of shallow-draft traffic transiting east and west along this waterway, there is a potential for collisions with shipping channel traffic in this area. As a result, in both locations, the Coast Guard advises vessel operators to exercise particular caution and requests that they make a security call prior to crossing the Intracoastal Waterway, particularly during periods of restricted visibility. Vessel operators make a security call to advise other vessels in the vicinity of their current location and their intended route. Deep-draft vessel traffic in the Mobile/Pascagoula area is relatively light compared with that of larger Gulf Coast ports like New Orleans and Houston. According to Coast Guard information, 1,118 deep-draft vessels arrived at the Port of Mobile and 328 deep-draft vessels arrived at the Port of Pascagoula in 1995. In addition, a significant amount of shallow-draft traffic occurs in this region, according to a Coast Guard official. Counting deep- and shallow-draft shipping together, commodities (by tonnage) being moved in and out of the Mobile area include crude or bulk materials (such as forest products, pulp, and iron ore) (38 percent), coal (32 percent), and petroleum and petroleum products (20 percent). At Pascagoula, 85 percent of the tonnage is petroleum and petroleum products. Currently, no radar-based VTS system monitors vessel traffic in this region. 
However, port officials in both locations are in contact by radio or telephone with vessels operating in their port to enforce local rules and regulations (such as speed limits) and assign berths to vessels, among other things. The Coast Guard’s plans for VTS 2000 currently include the installation of a VTS 2000 system in this port by 2001. The Coast Guard estimates that costs for the system will be about $5.3 million in facility and equipment costs and $2 million in annual operating costs. The Port of New Orleans, encompassing a 34-mile stretch of the Mississippi River, is one of the largest ports in the United States. This port area serves vessel traffic from three waterway complexes: ocean traffic entering from the Gulf of Mexico, river traffic moving along the Mississippi and Ohio rivers, and vessel traffic from the Intracoastal Waterway. Vessels coming into this port region from the Gulf of Mexico are typically deep-draft, while river and Intracoastal Waterway traffic tends to be primarily shallow-draft, according to the Coast Guard. Several factors influence the difficulty of navigation in this river port area. The first is geography. For example, blind corners, sharp bends, and strong currents in the Mississippi River make it more difficult for vessel operators to both see each other and avoid collisions. The second is the sheer volume of vessels transiting and mooring in the area. The port region has many miles of warehousing facilities and barge mooring on both banks of the river. The amount of activity occurring along the river banks and the number of vessels going up and down the river pose an increased risk of collisions because maneuvering room decreases. The third is changing river conditions. Because this region is a river environment, it is affected by seasonal changes (such as winter thaw), which can increase the water level and the speed of the river’s currents. 
With faster river currents, vessels must operate at higher speeds to maintain their maneuverability, thereby reducing their time to maneuver and increasing the potential risk of accidents. This condition is exacerbated by spring fog, which can significantly reduce visibility in the region. In 1995, about 41,600 vessels transited through the New Orleans area. Of this total, about 6,400 were deep-draft vessels, and the remainder were shallow-draft vessels. Cargoes carried by vessels transiting this area include iron and steel, metal ores and scrap, and fertilizers. However, according to a Coast Guard official, about half of the shallow-draft vessels carry dangerous cargoes, such as petroleum and petroleum products. The Coast Guard currently operates a limited vessel traffic management system in the New Orleans region. It is a radio-based vessel information system that uses red and green signal lights to direct vessel traffic. The scope of its operation depends in part on the river conditions. For example, when there are high water conditions (which may have been created by winter thaw), strong currents create a “boil” at a particular location in the river that is capable of turning a large vessel 180 degrees off course. Because of the added risk under this type of condition, the operators of the system limit the transits in this area to one vessel at a time to ensure that vessels have adequate maneuvering space to accommodate the effects of the river’s current as they try to correct their course. The Coast Guard’s plans for a VTS 2000 in this region include installation of two phases of a VTS 2000 system by 2001. The Coast Guard estimates that total facility and equipment costs for both phases would be about $29.7 million and total operating expenses would be about $6.6 million annually. Delaware Bay marks the entrance from the Atlantic Ocean that leads inland to ports such as Philadelphia, Pennsylvania; Wilmington, Delaware; and Camden, New Jersey. 
The bay itself is an expansion of the lower part of the Delaware River, and the bay’s entrance is about 10 miles wide between Cape May, New Jersey, and Cape Henlopen, Delaware. Deep-draft vessels entering Delaware Bay approach this entrance between the capes utilizing one of two sea lanes that approach the entrance from either the east or the south. Traffic separation schemes identify inbound and outbound lanes and a zone of separation in each of these sea lanes to help reduce the risk of collision in this area. Because parts of Delaware Bay are shallow, deep-draft vessels transit to their inland destinations via a channel that is 40 or more feet deep throughout its 90-mile length. The ports of Philadelphia and Camden, which lie opposite each other along the Delaware River, are about 87 miles from the capes, while Wilmington is about 63 miles from the capes. The navigational challenges that mariners face when transiting this region include curves with irregular depths; strong currents; shoals, particularly rock shoals in the Marcus Hook, Pennsylvania, region; occasional visibility limitations caused by fog, precipitation, smoke, and haze; and ice conditions in the winter. However, according to a Coast Guard official, the two significant navigational challenges in this region are at the approaches to the Delaware Bay entrance and at the location where the Chesapeake and Delaware Canal enters the Delaware River. In 1995, 2,570 deep-draft vessels arrived in this region. Pennsylvania terminals accounted for 51 percent of these arrivals, while terminals in New Jersey handled 31 percent and Delaware handled 18 percent. While many of these vessels carried a wide variety of products—ranging from fruit, cocoa, and salt to plywood, steel, and asphalt—about one-third of the vessels arriving in this region were carrying petroleum products. 
According to a port official, oil and oil-related products accounted for 85 percent of the total tonnage arriving in this port region in 1994. The Philadelphia Marine Exchange and the Pilot Association for the Delaware Bay and River jointly operate a vessel traffic information system for vessels operating in the Delaware Bay and River. The lower bay area is monitored via radio and radar by the pilots operating out of a watchtower at Cape Henlopen. The upper bay and rivers are monitored by radio via the Maritime Exchange. Vessel traffic is monitored 24 hours a day, 7 days a week, and operating costs for this service are funded through fees paid to the pilots for their piloting services. Unlike the Coast Guard’s VTS systems, vessels are not required by law to participate in this privately funded system, but according to a pilot official, all piloted vessels do participate. However, participation in the VTS system by shallow-draft vessels is mixed, according to a local Coast Guard official. The VTS system underwent a $1.2 million upgrade in late 1995 that improved operators’ ability to monitor an anchorage area and provided for an expansion in their offshore coverage of vessels approaching Delaware Bay, according to a pilot official. The Coast Guard’s current plans for VTS 2000 include the installation of a system in this port by 2002. The Coast Guard currently estimates that acquisition, construction, and improvement costs will be $6.5 million and annual operating costs will be $1.3 million for the proposed system. The Port Arthur region consists of four major ports—Port Arthur, Beaumont, Orange, and Lake Charles—that together had about 2,400 deep-draft-vessel arrivals in 1994. Petroleum products and chemicals are the primary cargoes for these areas.
According to Coast Guard officials, navigation in Port Arthur is considered moderately difficult because vessels must transit up to 8 hours through a relatively narrow 56-mile channel and Sabine Lake with virtually no anchorages along the way. In contrast, navigation for Lake Charles involves a 25-mile transit for vessels from the Gulf of Mexico. Coast Guard officials said the transit is considered moderately easy because large vessels are restricted to one-way traffic, thereby eliminating the potential collision hazard between larger ships. Also, as a further precaution, vessels approaching a ship carrying liquefied natural gas must maintain minimum distances from it (2 miles ahead or 1 mile behind). Neither Port Arthur nor Lake Charles has a radar-based VTS system. Instead, both areas have a radio-based scheduling system that provides certain marine traffic with information on vessel movements. Only deep-draft vessels with marine pilots aboard participate in this system; barges and other Intracoastal Waterway traffic do not usually communicate with the operations center with respect to their locations or other information. The Coast Guard plans to install and operate a VTS 2000 system in the Port Arthur/Lake Charles area by 2000. The Coast Guard estimates that facility and equipment costs to build the system will be about $6 million and annual operating costs will be about $1.3 million. The San Francisco Bay region comprises a series of connecting bays that make up the largest harbor on the Pacific Coast. Maritime traffic enters the area from the Pacific Ocean and can travel through a number of bays including San Francisco Bay, San Pablo Bay, and Suisun Bay. The bay traffic destinations include locations such as Oakland, Richmond, and San Francisco, while traffic transiting beyond the bays can travel about 37 or 43 miles upriver to the ports of Stockton and Sacramento, respectively.
This region is considered a difficult navigation area because of its high traffic density, frequent episodes of fog, and challenging navigational hazards. In 1994, there were 3,502 vessel arrivals in the San Francisco Bay region. Sixty-six percent of these vessels were either full container vessels or tank vessels carrying petroleum products. In addition to vessel arrivals, there is a high volume of ferry traffic in this region, adding to the navigational challenges for vessel operators traveling in the area. The episodes of fog, most frequently experienced in the summer, add to the difficulty of navigating by significantly reducing visibility. According to a Coast Guard official, this region’s large volume of vessel traffic, periods of low visibility, and the navigational hazards presented by narrow channels, shallow depths, prominent shoals, and crossing vessel traffic areas all contribute to the need for mariners transiting in this region to be subject to a number of regulations. One key regulation is a requirement that many of them participate in the VTS system. The Coast Guard established the VTS system in 1972 shortly after the passage of the Ports and Waterways Safety Act of 1972 and following a serious collision between two tank vessels that resulted in extreme environmental damage to San Francisco Bay. The Coast Guard continues to operate the VTS system today and monitors about 250 vessel movements per day. On average, just over two-thirds of these VTS system contacts are with ferries operating in the region. Participation is mandatory for all vessels meeting certain minimum requirements. For example, all power-driven vessels 40 meters or greater in length must participate in the system. Coast Guard personnel monitor approximately 133 miles of waterway, 24 hours per day, 7 days per week using radio, radar, and camera equipment.
According to a Coast Guard official, the geographic area covered by VTS extends from about 38 nautical miles offshore into the central bay area and upriver toward the north and east to the ports of Stockton and Sacramento. Operating costs for the current VTS system are about $2.6 million annually. The VTS system is currently undergoing a $6.1 million upgrade that will provide two additional radar surveillance sites, two additional camera surveillance sites, and digitized radar displays in the Vessel Traffic Center. The upgraded system is expected to be fully operational in the summer of 1996. In 2004, the Coast Guard plans to replace the system again with a VTS 2000 system. The Coast Guard’s estimated costs for the VTS 2000 system are about $6.6 million in acquisition, construction, and improvement costs and about $2.2 million in annual operating costs. The Tampa Bay harbor is a relatively large, shallow body of water containing three major ports—Tampa, St. Petersburg, and Manatee. Maritime traffic, which included about 10,000 commercial vessel arrivals in 1994, enters the bay from the Gulf of Mexico. Vessels transit through dredged ship channels and take up to 6 hours to reach their destinations. A large portion of the vessels transiting the bay are tank vessels that annually carry more than 4 billion gallons of oil, petroleum products, and hazardous materials. In addition, Tampa Bay supports growing cruise ship and tourist industries, with current arrivals averaging three each week. According to Coast Guard officials, navigation in Tampa Bay is considered moderately difficult because of its high marine traffic density, the absence of inner-harbor anchorage areas, swift currents, and narrow channels. Reduced visibility caused by fog and severe thundershowers (which occur, on average, 24 and 91 days each year, respectively) also add to the challenges of navigating in this region. 
A major oil spill resulting from an accident in the bay in 1993 was the impetus for actions currently underway by state and local officials to develop their own VTS system for the bay area. The state of Florida has established a consortium of maritime interests to design and develop an interim system that will serve the area until the Coast Guard builds its own VTS there. The consortium is developing a proposal for a system that is compatible with the Coast Guard’s performance goals for VTS 2000. Under current plans, this privately operated system could be fully operational within the next several years, if funding to build and operate it can be obtained. Currently, the Coast Guard anticipates building and operating a VTS 2000 system in Tampa that would be fully operational by 2001. The Coast Guard estimates that facility and equipment costs to build the system will be $5.6 million and annual operating costs will be about $1.9 million. The Research and Special Programs Administration’s Volpe National Transportation Systems Center conducted the Port Needs Study from February 1990 through July 1991 at a cost of $2.8 million. The scope of the study involved an examination of the need for VTS systems at 23 locations. The study assessed the need for a VTS system by using two methods of cost-benefit analysis. The first method evaluated the full benefits and full costs of installing a VTS system without considering the costs and benefits of existing systems. Ten of the 17 ports under consideration for VTS 2000, however, currently have some form of VTS system or radio-based information system. The second method took these existing systems into account by evaluating their benefits and costs. On the basis of the second method, the study determined the marginal net benefit, if any, that a new system would bring to eight of the ten locations. Cost estimates for each port were based on initial investment costs and annual operation and maintenance costs. 
Investment costs were estimated by developing a “candidate” VTS system for each port zone. The candidate VTS system’s design is a preliminary engineering design made for developing cost estimates. For comparison purposes, initial investment costs were assumed to be committed in fiscal year 1993, and operation and maintenance costs are estimated from fiscal year 1996. All costs are in 1990 constant dollars. Benefit estimates for each port zone were based on the cost of vessel accidents and associated consequences expected to be prevented by the candidate VTS system. The estimates were based on a statistical analysis of historical vessel accidents and traffic and the unique navigational features of each port zone to determine the probability of vessel accidents occurring in each port zone. These probabilities were applied to vessel traffic forecasts to estimate the probable number of future vessel accidents that would occur in the absence of any VTS system. The effectiveness of the candidate VTS system in preventing vessel accidents in each port zone was then estimated, and benefits were calculated as the cost of the losses expected to be avoided by the VTS system. Benefits and costs were calculated over a 15-year period (1996-2010) and discounted to 1993. Starting in fiscal year 1993, Volpe issued a series of follow-on studies for the Coast Guard on selected sites. To date, reports on five of the ports considered for VTS 2000 have been completed. Reports were issued on Mobile and Corpus Christi in 1993, Boston and Tampa in 1994, and Philadelphia in 1995. Among other things, the follow-on studies supplement the Port Needs Study by validating and updating vessel traffic patterns and forecasts, documenting traffic management requirements, and updating the VTS cost-benefit analysis. Table II.1 gives the results of the follow-on studies. The results of the follow-on studies are not comparable with the Port Needs Study for several reasons.
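The study’s discounting of benefits and costs can be sketched with a simple present-value calculation. The sketch below is only illustrative: the accident rate, loss per accident, and cost figures are hypothetical, not the study’s actual estimates, and it compares the 10 percent and 7 percent discount rates used by the original study and the follow-on studies, respectively.

```python
# Sketch of a Port Needs Study-style net-benefit calculation.
# All figures below are hypothetical; they are not the study's estimates.

def present_value(annual_amount, years, rate):
    """Present value of a constant annual amount received over `years`."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

accidents_avoided_per_year = 2.0      # accident probabilities x traffic forecast
loss_per_accident = 1_500_000         # dollars of damage and cleanup avoided
annual_benefit = accidents_avoided_per_year * loss_per_accident

investment_cost = 10_000_000          # initial facility and equipment cost
annual_om_cost = 1_500_000            # annual operation and maintenance
years = 15                            # the 1996-2010 evaluation period

for rate in (0.10, 0.07):             # Port Needs Study vs. follow-on studies
    pv_benefits = present_value(annual_benefit, years, rate)
    pv_costs = investment_cost + present_value(annual_om_cost, years, rate)
    print(f"discount rate {rate:.0%}: net benefit ${pv_benefits - pv_costs:,.0f}")
```

Because the lower rate discounts future benefits less heavily, the 7 percent case yields a larger present value of net benefits for the same cash flows, which is one reason the follow-on results are not directly comparable with the original study.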
For example, the Port Needs Study used a discount rate of 10 percent in calculating costs and benefits, while the follow-on studies used a discount rate of 7 percent. Using a lower discount rate contributes to an increase in the present value of net benefits attributable to a VTS system. In addition, the follow-on studies used more current transit data or adjusted the original data based on input from the local marine community. For example, the follow-on study at Philadelphia/Delaware Bay used 1993 data from the Army Corps of Engineers, while the Port Needs Study was based on 1987 data from the Corps.

Table II.1: Estimated life cycle costs and benefits over 15 years

This work was prepared at the request of the Chairman, Subcommittee on Coast Guard and Maritime Transportation, House Committee on Transportation and Infrastructure, and Representative James A. Traficant, Jr., who was formerly the ranking minority member of the Subcommittee. To assess the status of VTS 2000, we examined the Port Needs Study and updated studies on five ports and interviewed officials who were responsible for the study and updates at the Volpe National Transportation Systems Center. (App. II provides additional information on the Port Needs Study.) We did not conduct an assessment of the accuracy of the data used in the Port Needs Study or the updates. We reviewed program documents and interviewed Coast Guard program managers and acquisition managers for the VTS 2000 program. To determine the interest of industry and ports in acquiring and funding VTS 2000 or other systems, we obtained information from four ports (New Orleans, Port Arthur/Lake Charles, Houston/Galveston, and Mobile) identified by the Port Needs Study as having the greatest benefit from a VTS system. In addition, we obtained information from four other ports that either have privately funded VTS systems or have expressed interest in funding VTS systems with nonfederal funds.
At each port, we obtained information on implementation issues that arise or could arise in privately funded systems. Table III.1 categorizes these ports: the four ports identified as having the greatest benefit from a VTS system are New Orleans, La.; Port Arthur, Tex./Lake Charles, La.; Houston/Galveston, Tex.; and Mobile, Ala./Pascagoula, Miss.; the four ports with privately funded VTS systems or an interest in nonfederal funding are Los Angeles/Long Beach, Calif.; Philadelphia, Pa./Delaware Bay, Del.; San Francisco, Calif.; and Tampa, Fla. The information we obtained at each of the ports we visited was based on multiple data sources. Our work included interviews, using a standard set of questions, with stakeholders from industry, pilots’ associations, and port authorities, as well as reviews of documents. We developed our list of interviewees from the Coast Guard’s Port Safety Advisory Committee in each of the eight ports, or we based our list of interviewees on recommendations from the local Coast Guard office. The committee comprises key users of each port, such as pilots, ship and barge companies doing business at the port, and port officials. We verified with Coast Guard, industry, and port officials that our list of interviewees represented the key stakeholders that had an interest in operations of the port. (See table III.2 for a breakdown of key stakeholders interviewed in each location.) In addition, we reviewed documents on the VTS 2000 program and local correspondence with the Coast Guard. We also reviewed available documents on waterway safety. In addition to obtaining information from ports in the United States, we obtained information from six foreign countries to determine how they have implemented user fees. We judgmentally selected five foreign ports that charge port fees or user fees to fund VTS systems. The selected ports are Rotterdam, the Netherlands; Marseilles, France; Antwerp, Belgium; London, England; and Hong Kong. We also collected information from Canada because it is examining user fees as one means to pay for VTS systems in the future.
Using a standard set of questions to obtain information, we conducted telephone interviews with central government officials and operational managers in these countries. These officials were identified to us by representatives of the International Association of Lighthouse Authorities and the European Commission as the most knowledgeable about VTS issues in their respective countries. We conducted legal analysis of pertinent laws and regulations governing the Coast Guard’s responsibilities in operating VTS systems and the implementation of user fees to pay for such systems. Among other things, we reviewed the Ports and Waterways Safety Act of 1972, as amended, and the Oil Pollution Act of 1990. Also, we interviewed the Coast Guard’s legal counsel on legal issues related to VTS 2000. We reviewed numerous budget and program documents. We also interviewed key stakeholders at the national level, including the American Waterways Operators, the American Association of Port Authorities, and the American Institute of Merchant Shipping. Also, we discussed our approach with the Marine Board of the National Research Council. Major contributors to this report: Neil Asaba, Gerald Dillingham, Dawn Hoff, David Hooper, Luann Moy, Mehrzad Nadji, Elizabeth Reid, Stan Stenersen, and Randy Williamson.
Tribal officials we interviewed for our January 2016 report said they place a high priority on institutional and personal Internet access because of the numerous benefits, including the following:

Economic Development: Officials from most tribes said high-speed Internet is essential for economic development such as finding employment or establishing online businesses. FCC also found that community access to Internet services is critical in facilitating job placement, career advancement, and other uses that help to stimulate economic activity.

Education: Officials from many tribes stated that high-speed Internet access at schools supports educational success. For example, access can allow students to conduct online testing or to watch online lectures.

Health: About half of the tribes said that high-speed Internet access to support telemedicine was important to the tribe, particularly in rural or remote areas.

Officials from all 21 tribes we interviewed said that Internet service existed on at least some of their lands at varying connection speeds, ranging from less than 1 Mbps to over 25 Mbps. Some of the tribes we interviewed had at least some fiber optic high-speed Internet connections while others had slower copper lines, only mobile service, or only satellite service. Many of the tribal lands where we held interviews had some level of mobile Internet service but only a few had 4G mobile high-speed Internet services. Others had no mobile service. Further, officials from about half of the tribes we interviewed described important limitations to their Internet services, including higher than usual costs, small data allocations, slow download speeds, and unreliable connections. In January 2016, we found that the barriers to improvements in high-speed Internet service on tribal lands are interrelated.
The rugged terrain and rural location as well as tribal members’ limited ability to pay for high-speed Internet service were tribes’ and private providers’ most commonly cited impediments. Many tribal officials and all six providers we interviewed said these barriers can deter private investment in infrastructure needed to connect remote towns and villages to a service provider’s core network—known as the middle-mile. Middle-mile infrastructure may include burying fiber optic or copper cables, stringing cable on existing poles, or erecting towers for wireless microwave links, which relay wireless Internet connections from tower to tower through radio spectrum. Tribal lands, located far from urban areas, may not have the middle-mile infrastructure necessary for providers to deploy high-speed Internet. Tribal officials and providers we interviewed also cited limited financial resources as a barrier to high-speed Internet access. Of the 21 tribes we interviewed, many reported poverty and affordability as drivers of low subscribership to existing Internet services or as a barrier to broadening the availability of services. Poverty rates among the tribes we interviewed varied, but many were well above the 2014 national average of 15.5 percent. Two of the providers we interviewed discussed non-payment among tribal households as a disincentive to Internet service provision. One provider said that the customers it serves on tribal lands had non-payment rates double that of other customer groups, and that these rates often follow seasonal employment patterns. About half of the tribes we interviewed told us that a lack of tribal members with sufficient bureaucratic and technical expertise was a common barrier to increasing high-speed Internet access on tribal lands. Tribal officials said that tribal members do not always have the bureaucratic expertise required to apply for federal funds, which can lead to mistakes or the need to hire consultants.
A lack of technical expertise also affects tribes’ ability to interact with private-sector Internet providers. Of the seven tribes we interviewed that either had a tribally-owned provider or were in the process of establishing one, three said that the lack of expertise in the tribe was a challenge to establishing a tribally-owned telecommunications provider for high-speed Internet deployment. To address this, in the early 2000s, FCC held a number of Indian telecommunications initiatives, regional workshops, and roundtables. In fiscal year 2012, FCC’s Office of Native Affairs and Policy consulted with about 200 tribal nations, many during six separate one- to three-day telecommunications training and consultation sessions on tribal lands. These included the Native Learning Labs, where attendees could, for example, learn about data FCC has available on spectrum licensing and USF programs, among other things. The Office held seven training workshops in fiscal years 2014 and 2015, and plans to offer more in fiscal year 2016. The goal of this new series of sessions is to provide tribal officials with information about funding opportunities and policy changes with respect to high-speed Internet, USF programs, and spectrum issues. In January 2016, we found that FCC and USDA implement mutually supportive, interrelated high-speed Internet access programs that offer funding to either tribal entities or service providers to achieve the goal of increased access. Tribal officials we interviewed said that both FCC’s and USDA’s programs were important for the expansion of high-speed Internet service on their lands. Tribes sometimes qualify for benefits from more than one of these programs, either directly or through private-sector Internet providers. Eligibility requirements are based on the need of an area as well as deployment requirements.
Table 1 identifies three universal service programs that subsidize telecommunications carriers and services to areas that include tribal lands and two RUS grant programs. While FCC and USDA programs that promote high-speed Internet access on tribal lands are interrelated, we found that they are not always well coordinated. Our body of work has shown that interagency coordination can help agencies with interrelated programs ensure efficient use of resources and effective programs. Agencies can enhance and sustain their coordinated efforts by engaging in key practices, such as establishing compatible policies and procedures through official agreements. Agencies can also develop means to operate across agency boundaries, including leveraging resources across agencies for joint activities such as training and outreach. One area lacking coordination between FCC and USDA is their outreach and technical assistance efforts. FCC and USDA independently conduct outreach and training efforts for related programs promoting Internet access. For example, while FCC officials said they invite USDA officials to FCC training workshops and are sometimes invited to USDA training workshops, they said that they do not coordinate to develop joint outreach or training events. Synchronizing these activities could be a resource-saving mechanism, resulting in a more efficient use of limited federal resources, an opportunity to leverage resources between the two agencies, and cost savings for the tribes attending training events. For example, USDA held a training event in Washington State in fiscal year 2015 and FCC hosted a training event in Oregon the same year. The two agencies could have planned a joint training event in the Pacific Northwest Region, each contributing to the cost of the event, that would have reduced the cost burdens for tribes. Tribal members with limited budgets would not have had to travel twice or choose between the two training events.
Better coordination on conferences, as feasible, could help FCC and USDA reach a broader audience and increase the value of their outreach to tribes. To this end, we recommended in January 2016 that FCC develop joint outreach and training efforts with USDA whenever feasible to help improve Internet availability and adoption on tribal lands. FCC concurred with our recommendation, summarized the areas in which it coordinates with USDA, and said that it will continue to work with USDA to ensure more strategic and routine coordination. For example, FCC invited USDA officials to participate in all tribal consultation and training events planned for 2016. FCC defines Internet availability as the presence of Internet service in an area, and Internet adoption as the number of people in the area subscribing to Internet service. In 2006, we found that data on the rate of availability and adoption of Internet on tribal lands was unknown because no federal survey had been designed to capture this information. We recommended that additional data be identified to help assess progress towards providing access to telecommunications, including high-speed Internet, for Native Americans living on tribal lands. Since then, as discussed in our January 2016 report, the federal government has started collecting data on Internet availability and adoption. However, as of December 2015, FCC had not identified the performance goals and measures it intends to use for broadband availability or adoption on tribal lands. In 2011, the National Telecommunications and Information Administration (NTIA), in cooperation with FCC and the states, began publishing the National Broadband Map, an interactive website that allows users to view information on high-speed Internet availability across the United States, including on tribal lands.
The data to support the National Broadband Map are collected from service providers, including those offering service to federally recognized Indian tribes and Alaska Native villages. The National Broadband Map website provides data on Internet availability on approximately 318 federal Indian reservations and associated trust lands, including upload and download speeds for both wireline and wireless service, technology for Internet delivery, and the number of Internet service providers. While the National Broadband Map provides information about high-speed Internet availability, according to NTIA officials, the map is based on Census blocks. If a service provider reported any availability of high-speed Internet in a Census block, the entire block was counted as served. This could create misrepresentations of service in rural areas, which generally constitute large Census blocks. Because much of tribal land is rural, the reported broadband service is shown to be greater than the actual service available on tribal lands, according to NTIA officials. Some tribal officials agreed that certain areas on the Broadband Map were inaccurate. For example, the map showed the Lac du Flambeau reservation in Wisconsin as covered because two providers reported that they provide Internet service on the reservation. However, according to tribal officials, the National Broadband Map exaggerated the level of service on their reservation, making them unable to compete for some USF and RUS programs despite their efforts to document coverage problems to correct the map. One provider indicated that in rural areas, it is more difficult to get accurate data because in some cases addresses are not used, making it difficult to link service to a census block. However, this provider indicated that in the future it planned to utilize GPS information to provide more accurate data.
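The block-level counting that NTIA officials describe can be illustrated with a toy calculation; the block names and household counts below are made up purely for illustration:

```python
# Hypothetical blocks: (block_id, total_households, households_actually_servable).
blocks = [
    ("urban-1", 100, 95),   # small urban block, nearly fully served
    ("rural-1", 500, 50),   # large rural block; one provider serves one corner
    ("rural-2", 400, 0),    # large rural block with no service at all
]

# Map-style measure: a whole block counts as served if any provider
# reports any availability anywhere in it.
map_served = sum(total for _, total, servable in blocks if servable > 0)

# Household-level measure: only households that could actually subscribe.
actual_served = sum(servable for _, _, servable in blocks)

total_households = sum(total for _, total, _ in blocks)
print(f"map-reported households served: {map_served} of {total_households}")
print(f"households actually servable:   {actual_served} of {total_households}")
```

Because the large rural blocks dominate the totals, the block-level measure (600 of 1,000 households in this toy example) overstates actual availability (145 of 1,000), mirroring the kind of overstatement tribal officials reported.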
Five of the six providers we interviewed said that the reliability of the National Broadband Map has improved over time. In 2008, Congress passed the Broadband Data Improvement Act, which required the Bureau of the Census to collect information from residential households, including those on tribal lands. Census captured three aspects of Internet adoption: (1) whether a computer is owned or used at the residence, (2) whether the household subscribes to Internet service, and if so, (3) whether that service is dial-up or a high-speed connection. Census began collecting the required data on Internet adoption beginning with the 2013 American Community Survey (ACS). According to Census officials, five years of ACS data must be collected to provide data for areas with smaller populations. Census officials said that these data will be available in late 2018 and will provide an estimate for Internet adoption nationwide, including the first estimates for hard-to-reach populations such as Native Americans. Agency performance measurement is the ongoing monitoring and reporting of program accomplishments, particularly towards pre-established goals. Performance measurement allows organizations to track progress in achieving their goals and provides information to identify gaps in program performance and plan any needed improvements. The GPRA Modernization Act of 2010 requires annual performance plans to include performance measures to show the progress the agency is making in achieving its goals. Further, we have identified best practices in articulating goals, including showing baseline and trend data for past performance and identifying projected target levels of performance for multi-year goals. Making high-speed Internet, including broadband Internet, available to all Americans is FCC’s stated long-term objective, but we found in January 2016 that FCC has not set goals to demonstrate or measure progress toward achieving it. 
The National Broadband Map is currently the best tool for setting goals and measuring progress toward increasing the availability of high-speed Internet on tribal lands. Map data are widely used by FCC to describe the availability of broadband nationwide. For example, FCC uses data gathered for the National Broadband Map in its annual Broadband Progress report provided to Congress as required by the Telecommunications Act of 1996. To improve performance management, we recommended in our January 2016 report that FCC develop performance goals and measures using, for example, data from the National Broadband Map, to track progress on achieving its strategic goal of making broadband Internet available to households on tribal lands, and FCC agreed with our recommendation. Although Census is gathering baseline information on household Internet adoption, and the National Broadband Map provides data on high-speed Internet availability across the country, we found that FCC lacks the specific information it needs to measure the outcomes of its E-rate program at tribal schools and libraries. The E-rate program provides assistance to schools, school districts, and libraries to obtain telecommunications technology, including high-speed Internet. E-rate does not specifically target tribal schools and libraries, although some are eligible and receive benefits. Since 2010, E-rate has committed more than $13 billion in funds collected through fees on service providers’ customers to schools and libraries, and according to data provided by FCC, at least $1 billion of that amount supports tribal institutions. FCC’s E-rate program has a stated goal of ensuring that all schools and libraries have affordable access to modern broadband technologies. Communicating what an agency intends to achieve and its programs for doing so are fundamental aims of performance management and are required under the GPRA Modernization Act of 2010. 
Specifically, the act requires an agency to have measurable, quantifiable, outcome-oriented goals for major functions and operations; an annual performance plan consistent with the agency’s strategic plan; and a means to communicate the outcomes of its efforts. However, FCC has not set any quantifiable goals and performance measures for its E-rate efforts to extend high-speed Internet in schools and libraries nationwide or on tribal lands. According to federal internal control standards, government managers should ensure there are adequate means of obtaining information from external stakeholders that may have a significant impact on the agency meeting its goals. To that end, FCC collects information on E-rate recipients nationwide through questions on its application for E-rate assistance. Several different types of institutions on tribal lands can qualify for E-rate funding, including schools operated by the tribe or Bureau of Indian Education, private schools operating on a reservation, and public school districts that serve the reservation. On FCC’s E-rate application, applicants receiving service may self-identify as tribal, but the application provides no definition of “tribal.” We found that not all schools and libraries on tribal lands identify themselves as such during the application process. FCC provided us with information on E-rate recipients between 2010 and 2014 that self-identified as tribal, and the amounts committed to those recipients. These data may understate the amount of funds supporting schools on tribal lands. Specifically, we identified more than 60 additional school districts, private schools, and public libraries on the lands of the 21 tribes we studied that received E-rate assistance but were not included in FCC’s information on tribal recipients. 
Consequently, FCC does not have accurate information on the number of federally recognized tribes, including Alaska Native villages, receiving E-rate support, or the amount being provided to them. Without more precise information and direction from FCC, the extent to which E-rate assistance is provided to tribal institutions cannot be reliably determined, nor can FCC rely on the information to develop quantifiable goals and performance measures for improving high-speed Internet access in tribal schools or libraries. It is important to understand how these programs affect tribal institutions because FCC has made improving high-speed Internet access in tribal institutions a priority following the National Broadband Plan, with the establishment of the Office of Native Affairs and Policy in 2010, and its current Strategic Plan. To address these concerns, in January 2016, we recommended that FCC:

- improve the reliability of data related to institutions receiving E-rate funding by defining “tribal” on the program application. FCC agreed with our recommendation and intends to provide guidance to applicants in fiscal year 2017.

- develop performance goals and measures to track progress on achieving its strategic objective of ensuring that all tribal schools and libraries have affordable access to modern broadband technologies. FCC also agreed with this recommendation, indicating that goals and performance measures, among other things, will help substantially improve the accessibility of modern broadband technologies for tribal schools and libraries.

Chairman Barrasso, Ranking Member Tester, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony or the related report, please contact Mark Goldstein, at (202) 512-6670 or GoldsteinM@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Keith Cunningham, Assistant Director; Christopher Jones; Sarah Jones; Cheryl Peterson; Carl Ramirez; Cynthia Saunders; and Michelle Weathers. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

High-speed Internet service is viewed as a critical component of the nation's infrastructure and an economic driver, particularly to remote tribal communities. This testimony examines: (1) perspectives of tribes and providers on high-speed Internet access and barriers to increasing this access; (2) the level of interrelation and coordination between federal programs that promote high-speed Internet access on tribal lands; and (3) existing data and performance measures related to high-speed Internet on tribal lands. This statement is based on GAO's January 2016 report (GAO-16-222). For this report, GAO visited or interviewed officials from a non-generalizable sample of 21 tribal entities and 6 service providers. GAO also reviewed FCC and USDA fiscal year 2010 through 2014 program data, funding, and materials and interviewed federal officials. In January 2016, GAO found that, although all 21 tribes GAO interviewed have some access to high-speed Internet, barriers to increasing access remain. Tribal officials and Internet providers said that high poverty rates among tribes and the high costs of connecting remote tribal villages to core Internet networks limit high-speed Internet availability and access. 
About half of the tribes GAO interviewed also said that the lack of sufficient administrative and technical expertise among tribal members limits their efforts to increase high-speed Internet access. The Federal Communications Commission's (FCC) Universal Service Fund subsidy programs and the U.S. Department of Agriculture's (USDA) Rural Utilities Service grant programs are interrelated. The programs seek to increase high-speed Internet access in underserved areas, including tribal lands. GAO's previous work on overlap, duplication, and fragmentation has shown that interagency coordination on interrelated programs can help ensure efficient use of resources and effective programs. However, FCC and USDA do not coordinate to develop joint outreach and training, which could result in inefficient use of federal resources and missed opportunities for resource leveraging. For example, USDA and FCC held separate training events in the Pacific Northwest Region in 2015 when a joint event could have saved limited training funds and reduced costs. FCC has placed special emphasis on improving Internet access on tribal lands following the issuance of the National Broadband Plan in 2010, which called for greater efforts to make broadband available on tribal lands. However, FCC has not developed performance goals and measures for improving high-speed Internet availability to households on tribal lands. FCC could establish baseline measures to track its progress by using, for example, the National Broadband Map which includes data on Internet availability on tribal lands. FCC also lacks both reliable data on high-speed Internet access and performance goals and measures for high-speed Internet access by tribal institutions—such as schools and libraries. 
Specifically, FCC's E-rate program provides funds to ensure that schools and libraries have affordable access to modern broadband technologies, but FCC has neither defined “tribal” on its E-rate application nor set any performance goals for the program's impact on tribal institutions. Without these goals and measures, FCC cannot assess the impact of its efforts. In January 2016, GAO recommended that FCC take the following actions in tribal areas: (1) develop joint training and outreach with USDA; (2) develop performance goals and measures for improving broadband availability to households; (3) develop performance goals and measures for improving broadband availability to schools and libraries; and (4) improve the reliability of FCC data related to institutions that receive E-rate funding by defining “tribal” on the program application. FCC agreed with the recommendations.
Intellectual property has become a critical component of our country’s economy, accounting for an average of 18 percent of the U.S. gross domestic product from 1998 through 2003. Industries that rely on IP protection—including the aerospace, automotive, computer, pharmaceutical, semiconductor, motion picture, and recording industries—are estimated to have accounted for 26 percent of the annual real gross domestic product growth rate during this period and about 40 percent of U.S. exports of goods and services in 2003 through 2004. Further, they are among the highest-paying employers in the country, representing an estimated 18 million workers or 13 percent of the labor force. The economic value of IP-protected goods makes them attractive targets for criminal networks. Criminal activities have negative effects for U.S. innovation and investment, the value and reputation of individual companies, and consumers who are put at risk by substandard or dangerous products. Such activity is inherently difficult to measure, but the Organization for Economic Cooperation and Development recently estimated that international trade in counterfeit and pirated products in 2005 could have been up to $200 billion. According to industry groups, a broad range of IP-protected products are subject to being counterfeited or pirated, from luxury goods and brand name apparel to computer software and digital media to food and medicine. Evidence of counterfeiting in industries whose products have a public health or safety component, such as auto and airline parts; electrical, health, and beauty products; batteries; pharmaceuticals; and infant formula, presents a significant concern. The World Health Organization estimates that as much as 10 percent of medicines sold worldwide are believed to be counterfeit, including essential medicines such as vaccines, antimalarials, and human immunodeficiency virus therapies. 
The federal government plays a key role in granting protection for and enforcing IP rights. It grants protection by approving patents or registering copyrights and trademarks. It enforces IP rights by taking actions against those accused of theft or misuse. Enforcement actions include both civil and criminal penalties. U.S. laws criminalize certain types of IP violations, primarily copyright and trademark violations, and authorize incarceration or fines. These laws are directed primarily toward those who knowingly produce and distribute IP-infringing goods, rather than those who consume such goods. Although U.S. laws do not treat patent violations as a crime, the federal government does take actions to protect patents and authorizes civil enforcement actions against infringers. See appendix II for a detailed list of the U.S. laws that grant IP protection and the criminal and civil penalties that federal law enforcement agencies are authorized to impose. Protection is also provided by the U.S. International Trade Commission, which investigates allegations of unfair import practices that commonly involve claims of patent or trademark infringement. For example, in January 2007, the commission issued an “exclusion order” to cease importation of certain types of laminated floor panels that it found infringed on three U.S. patents. Exclusion orders direct CBP to stop certain goods from entering the United States while the order is in effect. The commission is also authorized to take other actions, such as issuing “cease and desist” orders to those engaging in unfair import practices or assessing civil penalties. Congress has supported several interagency mechanisms to coordinate federal IP law enforcement efforts. In 1999, Congress created the interagency National Intellectual Property Law Enforcement Coordination Council (NIPLECC) as a mechanism to coordinate U.S. law enforcement efforts to protect and enforce IP rights in the United States and abroad. Officials from seven federal entities are members of NIPLECC. 
The council’s strategy is a presidential initiative called the Strategy Targeting Organized Piracy (STOP), which articulates five broad goals. Beginning in 2001, Congress supported the creation of the National Intellectual Property Rights Coordination Center, another interagency mechanism that aims to improve federal IP enforcement and coordinate investigative efforts between ICE and FBI (discussed in detail later in this report). IP enforcement is not a top priority for most of the five key federal agencies with IP enforcement roles, and determining their resource allocations to IP enforcement is challenging. These agencies’ IP enforcement functions include: (1) seizing IP-infringing goods; (2) conducting investigations; and (3) prosecuting alleged violations. The overall aim of U.S. government efforts is to stop trade in counterfeit and pirated goods, and each of the three functions provides some degree of deterrence. The key law enforcement agencies—CBP, ICE, FBI, and DOJ—have broad missions with many competing responsibilities, and their IP enforcement role is not generally their highest priority, while FDA’s primary mission is to protect public health. We were not able to identify the total resources allocated to IP enforcement across the agencies because few staff are dedicated solely to IP enforcement, and only certain agencies track the time spent on IP criminal investigations by non-dedicated staff who carry out this function. The information we were able to compile shows declines in IP enforcement resources in several agencies, and fluctuating or growing resource allocations to IP enforcement in others. Because federal IP enforcement roles are interdependent—seizures may launch or contribute to investigations, and investigations may lead to prosecutions—the emphasis placed on enforcement of IP at one agency or field office can impact the IP enforcement efforts of others. Key federal agencies carry out three IP enforcement functions. 
Seizing IP infringing goods is primarily performed by CBP. IP-related investigations are performed by agencies located in three different departments. Prosecuting IP crimes is carried out by two different entities within DOJ. Figure 1 identifies the IP enforcement functions and the structure, including the departments and agencies, in which they are performed. The four key federal law enforcement agencies and FDA have broad missions and many responsibilities, and IP enforcement is not a top priority at most agencies. CBP and ICE address IP enforcement as part of their legacy efforts to combat commercial fraud, but their top mission is securing the homeland. DOJ identifies IP enforcement as one of its top priorities, but FBI does not. FDA’s role is driven by its public health and safety mission, not IP enforcement per se. Regardless of the priority ranking agencies assign to IP enforcement, within their IP enforcement efforts, they have all given priority to IP-related crimes that pose risks to public health and safety. Staff in agency headquarters play a role in setting IP enforcement policies and, at some agencies, carry out certain IP enforcement actions, but most enforcement activity takes place at the field office level. Each field office faces a unique set of challenges in its local environment, balancing IP enforcement efforts with other agency priorities. Several companies and associations we interviewed remarked that the federal IP enforcement structure is not clear. For example, one association remarked that agency responsibilities are unclear and may overlap, while another said that there is no formal process for referring cases for federal action. This structure was seen as especially challenging for small companies who need federal assistance but lack the resources or expertise to navigate the federal system. Additional information on private sector views about federal IP enforcement is contained in appendix III. 
Information is presented below on each agency’s IP enforcement function, the priority assigned to IP enforcement, and the structure within which such enforcement is carried out. Function: CBP is the primary federal agency authorized to seize goods, including IP-infringing goods, upon their arrival in the United States. CBP is also responsible for preventing the entry of goods into the United States that are subject to exclusion orders and assesses penalties against IP infringers when warranted. Priorities: CBP’s primary mission is to protect the homeland. CBP is also responsible for carrying out its legacy Customs functions, including trade enforcement. CBP has identified six Priority Trade Issues, one of which is IP enforcement. Within its IP enforcement efforts, CBP gives priority to large value seizures and violations that affect public health and safety or economic security or that have ties to terrorist activity. Structure: CBP’s Office of International Trade develops IP enforcement policies and plans, develops national instructions for targeting shipments suspected of carrying IP-infringing goods, writes guidance for assessing penalties and enforcing exclusion orders, and maintains data on IP-related seizures. The Office of Field Operations oversees implementation of these policies and procedures at 325 U.S. ports of entry. While much of CBP’s IP enforcement activity is carried out by the ports, headquarters staff play an integral role in supporting those efforts, including providing policy and guidance on enforcement priorities and developing systems and technologies to enhance enforcement. Function: ICE conducts investigations of IP-related criminal activity, including infringement of trademark and copyright law. Priorities: ICE’s primary mission is to protect the homeland. It is also responsible for combating commercial fraud, which includes IP enforcement. 
ICE’s interim agency-wide strategic plan and its plan for commercial fraud are law enforcement sensitive and not available to the public. However, according to ICE officials, the top priorities within commercial fraud enforcement are public health and safety violations and IP infringement. Structure: Within ICE’s Office of Investigations, the Critical Infrastructure and Fraud Division develops the agency’s IP policies and oversees its IP enforcement efforts. The division’s IP responsibilities are handled by the Branch for Commercial Fraud and Intellectual Property Rights, which also houses the National Intellectual Property Rights Coordination Center. Although the center is officially an interagency coordination body, it plays a lead role in developing and carrying out ICE’s IP enforcement policies. In addition, ICE has a Cyber Crimes Center that focuses on Internet-based crimes, including IP piracy, and provides referrals and investigative assistance to ICE’s field offices. IP investigations are carried out by agents located in about 100 U.S. cities, organized under ICE’s 26 field offices. Function: FBI conducts investigations of IP-related criminal activity, including infringement of trademark and copyright law, as well as theft of trade secrets. Priorities: The FBI’s principal mission is to investigate criminal activity and defend the security of the United States. It has identified 10 priority enforcement areas, including cyber crime. IP enforcement is included in the cyber crime area, but it is ranked 5th out of FBI’s 6 cyber crime priorities. Within its IP enforcement efforts, FBI’s priorities are, in order, trade secret theft, copyright infringement, trademark infringement, and signal theft, and one of FBI’s IP enforcement goals is for its field offices to initiate IP investigations that affect public health and safety. Structure: FBI’s Cyber Division oversees the agency’s IP enforcement efforts even though not all of its IP investigations are cyber-related. 
A single unit within the Cyber Division, called the Cyber Crime Fraud Unit, has operational and management oversight for all of FBI’s cyber crime activities. IP-related investigations are primarily carried out in FBI’s 56 field offices. Function: FDA investigates illegal activity pertaining to food, drugs, medical devices, and other products because of the impact on public health. Priorities: FDA’s primary mission is to protect public health by assuring the safety, efficacy, and security of human and veterinary drugs, the food supply, medical devices, and other products. IP enforcement is not part of FDA’s mission or its enforcement priorities; however, FDA carries out IP-related enforcement actions in fulfilling its mission to protect public health and safety, such as investigating criminals that traffic in counterfeit pharmaceuticals. Structure: FDA’s Office of Regulatory Affairs, in collaboration with other agency components, carries out the agency’s enforcement activities. This office houses, among other entities, FDA’s Office of Criminal Investigations and the Division of Import Operations. The Office of Criminal Investigations, with six field offices and a presence in 25 U.S. cities, has the primary responsibility for all criminal investigations conducted by the FDA. The Division of Import Operations provides guidance on the agency’s import policy to FDA field staff, including at numerous ports around the country. FDA field staff who discover suspected counterfeit imports of products that are regulated by FDA would refer these to the Office of Criminal Investigations for further action. In addition, Office of Regulatory Affairs laboratories play a role by analyzing samples of suspected counterfeit products. Function: DOJ prosecutes IP cases referred from ICE, FBI, and FDA, as well as from private sector representatives and other sources. Priorities: According to DOJ officials and documents, IP enforcement is one of the department’s highest priorities. 
In March 2004, the Attorney General announced the creation of a DOJ Task Force on Intellectual Property, with a mission of identifying ways to strengthen the department’s IP enforcement efforts. The Task Force produced 31 recommendations for improving IP enforcement and provided a progress report on those recommendations in its 2006 report. The Task Force made numerous short- and long-term recommendations, including increasing the number of DOJ prosecutors and FBI agents that focus on computer crime and IP cases and prosecuting IP cases involving a threat to public health and safety. In addition, DOJ developed an internal IP enforcement strategy for 2007 with six strategic objectives designed to help it meet its larger goal of reducing IP theft. DOJ shared this document with us, but its contents are for official government use only. Structure: DOJ’s IP enforcement is carried out primarily by the 94 U.S. Attorney’s Offices located throughout the country as well as its Criminal Division’s Computer Crime and Intellectual Property Section (CCIPS). Under DOJ’s Computer Hacking and Intellectual Property (CHIP) program, each U.S. Attorney’s Office has one CHIP coordinator who is trained in prosecuting IP enforcement cases. In addition, 25 U.S. Attorney’s Offices have CHIP units, usually composed of 2 or more attorneys (a few units have as many as 8 attorneys), who focus solely on prosecuting computer hacking or IP crimes. IP crimes prosecuted by the U.S. Attorney’s Offices are not limited to CHIP units, but may be prosecuted as part of a larger case, such as one involving organized crime. CCIPS, located in DOJ headquarters, is responsible for supporting IP prosecutions by U.S. Attorney’s Offices, as well as prosecuting its own cases. CCIPS is also responsible for developing DOJ’s overall IP enforcement strategy and coordinating among U.S. and foreign law enforcement officials on domestic and international cases of IP theft. 
Determining the total resources that agencies have allocated to IP enforcement is challenging because agencies have few staff exclusively dedicated to IP enforcement, and only the agencies that conduct criminal investigations estimated time spent on this activity. Most agencies have some headquarters staff exclusively dedicated to IP enforcement. However, staff in the field, where most IP enforcement activity occurs, are generally not dedicated exclusively to IP enforcement. The information we were able to compile shows declines in IP enforcement resources in some agencies and fluctuating or growing resource allocations to IP enforcement in others. Agencies’ ability to allocate staff to IP enforcement is affected by not only the priority they assign to this function but also their overall resource situation. Some agencies have faced resource challenges in recent years. Private sector representatives we interviewed across various sectors expressed concern about the federal government’s ability to carry out IP enforcement due, in part, to a lack of resources. While several companies said that federal IP enforcement efforts have increased, 14, or nearly half, of the representatives we contacted said there is a shortage of resources to carry out IP enforcement. Appendix III provides further detail on private sector views. Information on each agency’s resources for IP-related enforcement is detailed below. Various types of CBP staff play a role in IP enforcement. The only staff that are dedicated exclusively to IP enforcement are international trade specialists, attorneys, and paralegals assigned to the Office of International Trade, and their numbers have fluctuated over time. International trade specialists are responsible for performing nationwide targeting for all CBP ports of incoming shipments suspected of carrying IP-infringing goods and for analyzing IP seizure data. 
The number of international trade specialists remained relatively flat from fiscal year 2003 through 2006, at about 11, before increasing to 17 in 2007. However, the number of these specialists that were performing targeting in 2003 through 2006 actually declined. Attorneys are responsible for advising ports on how to carry out CBP’s IP enforcement authorities and have sole responsibility for developing exclusion order enforcement guidance, a highly complex and labor-intensive task. The number of attorneys devoted to IP enforcement declined from 11 in 2003 to 9 in 2006 and remains at that level. Other CBP staff perform IP enforcement activities but are not exclusively dedicated to it; CBP does not track the amount of time these staff spend on IP enforcement. In addition, within the Office of International Trade, CBP auditors perform targeted audits on selected companies to assess their internal controls for preventing the importation of IP-infringing goods. CBP does track hours spent on IP audits. As of December 2007, CBP reported that slightly over 14 “man years” had been charged to IP audits since fiscal year 2005, when such audits were initiated. CBP staff that carry out the agency’s IP enforcement activities operate in an environment that is plagued by staffing challenges, including staffing shortages, difficulty hiring and retaining staff, and fatigue among its workforce. For example, in November 2007, we reported that CBP estimates it may need several thousand more CBP officers to operate its ports of entry. In April 2007, we also reported that staff resources at CBP for customs revenue functions have declined since the formation of DHS. Among the agencies that conduct criminal investigations, only ICE has staff dedicated exclusively to IP enforcement. These include ICE staff assigned to the National Intellectual Property Rights Coordination Center and a commercial fraud team in one of its field offices that focuses solely on IP enforcement. 
As discussed later in this report, the number of ICE staff assigned to the center declined from 15 in 2004 to 8 in 2007. Neither FBI nor FDA has any staff dedicated exclusively to IP enforcement. A senior FBI Cyber Division official said the size of FBI’s IP enforcement effort is small relative to other FBI efforts and has limited resources. However, ICE, FBI, and FDA all track the amount of time that their investigators spend on IP-related investigations (see fig. 2). By converting ICE and FDA investigative hours to full-time-equivalent (FTE) positions, and using a similar measure (average on board) for FBI, we determined that ICE spent an average of 154 FTEs on IP enforcement during 2001 through 2006, while FBI averaged 53 agents on board for IP enforcement, and FDA spent an average of 16 FTEs. ICE investigative resources spent on IP enforcement increased from 2001 to 2003 before falling off, while the estimated number of investigator FTEs spent on IP cases at FBI and FDA experienced little change over the 6-year period. DOJ dedicates staff to IP enforcement in headquarters and within its U.S. Attorney’s Offices. The number of staff dedicated to IP enforcement has grown in recent years. For example, DOJ’s CHIP units, first created in February 2000, grew from 13 units as of 2002 to 25 units as of 2007. Most of the CHIP units have approximately two or more attorneys who focus on prosecuting IP and high-technology crimes, with as many as eight in at least one of the units. As the number of units has grown, so has the number of attorneys assigned to working IP cases. As of July 2007, DOJ had 101 Assistant U.S. Attorneys assigned to CHIP units. Another 122 Assistant U.S. Attorneys have been specially trained to prosecute computer crime and IP offenses, with at least one such CHIP prosecutor located in every U.S. Attorney’s Office. DOJ began tracking the time attorneys spend on IP enforcement in May 2006, but we did not collect these data. 
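The hours-to-FTE conversion described above is a simple calculation. The sketch below illustrates it, assuming the standard 2,080-hour federal work year (40 hours for 52 weeks); the agency names and hour totals are hypothetical, not the actual figures from the agencies' data.

```python
# Minimal sketch of converting annual investigative hours to
# full-time-equivalent (FTE) positions. Assumes the standard
# 2,080-hour federal work year; figures below are hypothetical.
HOURS_PER_FTE = 2080

def hours_to_fte(investigative_hours: float) -> float:
    """Convert annual investigative hours to FTE positions."""
    return investigative_hours / HOURS_PER_FTE

# Hypothetical annual IP-investigation hours for two agencies
example_hours = {"Agency A": 320_320, "Agency B": 33_280}
for agency, hours in example_hours.items():
    print(f"{agency}: {hours_to_fte(hours):.1f} FTEs")
```

An agency reporting 320,320 investigative hours in a year would thus be credited with 154 FTEs under this assumption.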
In addition, according to DOJ, it had 14 attorneys working on IP enforcement in its CCIPS. Despite having these dedicated and trained staff, officials from the U.S. Attorney’s Offices we visited noted that, over the past few years, their offices have experienced high turnover and have been generally understaffed, with vacant positions left unfilled. Given the interdependent nature of federal IP enforcement and the central role played by the field offices, the emphasis placed on IP enforcement at one location can affect the IP enforcement efforts of others. For example, investigative agency officials at some locations we visited said that their decisions about beginning or continuing an IP-related investigation were influenced by the willingness of the local U.S. Attorney’s Office to prosecute the case. Some field office officials we interviewed stated that local U.S. Attorney’s Offices set minimum value thresholds for taking IP cases, in part because the U.S. Attorney’s Offices also have limited resources. However, officials at the U.S. Attorney’s Offices we visited said that they did not have specific thresholds for IP prosecutions, particularly when it comes to public health and safety, and that they evaluate cases on their individual merits. Similarly, the degree to which an ICE field office can accept and work on IP enforcement referrals from CBP may depend on the field office’s other priorities, such as money laundering or smuggling enforcement. Officials at most of the agencies noted other factors that influence their IP-related enforcement decisions, including the number or value of items seized, the health or safety impacts of the crime, and the organizational structure of the entities involved. Federal IP enforcement activity generally increased from fiscal year 2001 through 2006; however, most agencies have not taken key steps to assess their achievements. 
Specifically, most agencies have not (1) conducted systematic analyses of their IP enforcement data to inform management and resource allocation decisions, (2) clearly identified which of their efforts relate to a key IP enforcement area—IP crimes that affect public health and safety—or collected data to track these efforts, or (3) established performance measures or targets to assess their achievements and report to Congress and others. Our review of agency statistics for fiscal years 2001 through 2006 indicated that IP enforcement actions generally increased over the period, with some fluctuations in activity. The number of CBP seizure actions and the value of such seizures have increased significantly. Investigative agencies’ enforcement outcomes—arrests, indictments, and convictions—also increased during the time period. The number of DOJ prosecutions hovered around 150 cases per year during fiscal years 2001 to 2005 before increasing to about 200 cases in fiscal year 2006, with the number of defendants charged with IP crimes fluctuating. CBP’s primary IP enforcement efforts involve seizing IP-infringing goods that individuals attempt to import through U.S. ports of entry. In April 2007, we reported that the total number of CBP’s seizure actions has grown since fiscal year 2001, nearly doubling from fiscal years 2005 to 2006; however, most of these actions involved numerous small-value seizures made from air-based modes of transport, while significantly fewer seizure actions have been made from sea- or land-based modes of transport. We reported in 2007 that CBP officials said they believed the trend reflects growing Internet sales and the ability of manufacturers to ship their merchandise directly to consumers through mail and express consignment. 
At that time, some CBP officials stated that this trend may reflect a shift in smuggling techniques toward the use of multiple small packages rather than large shipments in cargo containers, possibly to reduce the chance of detection. See figure 3 for trends in the number of CBP seizure actions and estimated domestic values. After CBP seizes the counterfeit goods, it may also assess penalties that result in monetary fines imposed against the violator. CBP officials reported that processing penalty cases is resource-intensive, but noted that few penalties are collected and such enforcement has little deterrent effect. We found that less than 1 percent of the penalty amounts assessed for IP violations in each fiscal year were collected. See table 1 for IP-related penalties assessed and collected in each fiscal year from 2001 through 2006. Various factors contribute to CBP’s limited collection rates on IP penalties, including petitions for mitigation or dismissal by the violator, dismissal due to criminal prosecutions, and the nature of counterfeit importation. CBP does not maintain statistics on all of its exclusion order activities, but available information indicates that its exclusion activities have declined, in part due to procedural weaknesses. While the U.S. International Trade Commission issues relatively few exclusion orders each year, these orders can affect large volumes of trade, according to CBP officials. As of July 2007, 66 exclusion orders were in effect, according to CBP. CBP takes two basic steps to enforce these orders: (1) CBP posts written guidance, called Trade Alerts, to its intranet to inform ports about new orders, and (2) it creates electronic targeting instructions that alert ports about incoming shipments that need to be examined for potential infringing goods related to the order. When its exams identify goods that should be excluded, CBP does not allow the goods to enter the country and issues a notice of exclusion to the importer. 
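The collection-rate finding above is a simple ratio of amounts collected to amounts assessed. The following is a minimal sketch assuming hypothetical dollar figures (the actual amounts appear in table 1 of the report):

```python
# Sketch of the penalty collection-rate calculation discussed above.
# The assessed/collected amounts are hypothetical stand-ins, not the
# figures reported in table 1.

def collection_rate(assessed: float, collected: float) -> float:
    """Penalties collected as a percentage of penalties assessed."""
    if assessed == 0:
        return 0.0
    return 100.0 * collected / assessed

# Hypothetical fiscal-year figures: (assessed, collected) in dollars.
penalties = {
    2005: (50_000_000, 400_000),
    2006: (80_000_000, 500_000),
}
for year, (assessed, collected) in sorted(penalties.items()):
    # Each hypothetical rate comes out under 1 percent, as in the finding.
    print(year, f"{collection_rate(assessed, collected):.2f}%")
```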
According to CBP officials, CBP does not maintain data on the number of exclusion notices, either in total or by order, nor does it alert the rights holder of the exclusion. However, CBP does maintain data on the total number of exclusion order exams it conducts and the number of times these exams reveal any IP discrepancies. As shown in figure 4, the number of exclusion order exams has declined since fiscal year 2002, and very few discrepancies have been found. CBP explained that the decrease in exams from fiscal years 2002 to 2004 was due to the termination of targeting for one exclusion order that had been generating most of the exams. CBP’s limited and declining enforcement of exclusion orders has been of concern to certain private sector representatives, notably the companies that have sought such orders or the attorneys that represent them. Representatives said companies spend millions of dollars in legal fees to win a U.S. International Trade Commission ruling for their products, but that the effectiveness of the ruling is weakened by poor enforcement at CBP. Private sector representatives also stated that CBP’s enforcement of the orders is not transparent because CBP does not notify companies of any exclusions that have occurred, impeding their ability to follow through on the matter. This differs from CBP’s practices when it detains or seizes IP-infringing goods: CBP notifies both the importer and the IP rights owner of such detentions or seizures. CBP officials said the agency does not have a regulation permitting the notification of exclusions to affected rights owners, and they did not know whether CBP had legal authority under the relevant statute to make such notifications. 
We found several procedural weaknesses in CBP’s exclusion order enforcement, including a lack of intranet Trade Alerts for about half of the orders currently in force, delays in posting Trade Alerts to its intranet, minimal use of electronic targeting, and no procedures for updating Trade Alerts when exclusion orders change status or expire. The effect of these weaknesses has been to limit or delay the degree to which exclusion orders are enforced; details are provided below. CBP does not have Trade Alerts on its intranet for all orders currently in effect and lacks information to develop Trade Alerts for some orders. Of the 66 orders in effect as of July 2007, CBP had posted Trade Alerts to its internal website for 24 of them and was developing such guidance for 5 others. CBP said it had paper records for 15 older orders that it had not yet converted to Trade Alerts due to limited resources, but lacked records for enforcing most of the remaining orders. Although CBP officials said the agency is required to enforce the orders from the date they are issued, we found that CBP’s enforcement may be considerably delayed. According to CBP officials, this is because CBP must review and interpret large amounts of complex information generated by the administrative process, but only two attorneys at CBP are presently qualified to carry out this review. We determined that it took CBP more than 60 days to post Trade Alerts for 14 of the 18 orders for which it could provide such data. According to CBP officials, work to establish the intranet platform for IP issues began in 2003, but CBP did not have the capability to begin posting Trade Alerts to its website until April 2004. Before that date, text-only Alerts were published to an internal electronic bulletin board that housed them for 90-day renewable periods. CBP develops targeting instructions for most, but not all, of the exclusion orders it receives. 
Of 10 randomly selected orders for which CBP had posted Trade Alerts as of July 2007, we found that it had developed targeting instructions for only 4. Also, although CBP officials said that the agency is to enforce exclusion orders until they expire, we found that its actual targeting instructions for an order may expire far sooner. CBP officials said that targeting instructions that have not generated any exams or found any IP violations after 90 days are removed from CBP’s targeting system. CBP provided data on the number of exclusion orders for which it had targeting instructions in place in each of fiscal years 2003 through 2006. The number of orders with targeting instructions dropped from 25 in fiscal year 2003 to 10 in fiscal year 2006—far fewer than the number of orders in force at that time. CBP has no process for ensuring that its Trade Alerts are adjusted to reflect changes in the status of exclusion orders. For example, CBP initially provided data indicating that it had issued Trade Alerts for 29 orders, but we determined that 5 of the Trade Alerts were for orders that had expired or been rescinded. CBP concurred with our findings and said it would adjust its Trade Alerts accordingly. The number of criminal IP enforcement cases opened annually by ICE, FBI, and FDA during fiscal years 2001 through 2006 fluctuated, but the enforcement outcomes—arrests, indictments, and convictions—from those cases grew during the same period. As shown in figure 5, ICE opened the most IP cases each year, averaging 445 cases per fiscal year, compared to FBI’s and FDA’s averages of 306 and 39 cases per fiscal year, respectively. The number of IP cases that ICE and FBI opened during the period fluctuated, with the number of ICE cases lower in 2006 than in 2001 and the number of FBI cases in 2006 about the same as its 2001 level. In general, the number of FDA cases grew during this time period. 
Despite the fluctuations in numbers of IP cases by the two major investigative agencies, the number of arrests, indictments, and convictions stemming from ICE and FBI investigations of IP-related crimes generally increased for fiscal years 2001 through 2006 (see fig. 6), as they did for FDA. For some enforcement actions, the agencies’ investigative activity showed fairly steady growth. For other actions, investigative activity peaked in fiscal year 2004, but had levels in 2006 that were still well above their 2001 levels. As figure 6 illustrates, each agency’s enforcement activity generally increased from fiscal year 2001 to 2006; however, activity levels within and across agencies varied over the 6-year period. DOJ tracks its IP enforcement activity in terms of the number of cases filed, the number of defendants in cases filed, and the number of defendants convicted. While the number of IP cases filed by DOJ fluctuated around 150 from fiscal years 2001 through 2005, the number of cases grew to 204 in fiscal year 2006 (see fig. 7). The results of IP-related cases that DOJ filed during fiscal years 2001 through 2006 varied. Table 2 shows that for fiscal years 2001 through 2006, DOJ received referrals for 3,548 defendants in IP matters from the investigative agencies and filed charges against a total of 1,523 defendants. During this period, a total of 891 defendants were convicted and 373 received prison sentences. According to DOJ officials, the data for the number of IP-related defendants referred to federal prosecutors from investigative agencies should be considered independent of the data for defendants charged with IP violations. 
Additionally, the difference between the number of referred IP defendants and the number of defendants charged with IP offenses in a given year, or period of years, may be explained in part by the fact that IP suspects may never be charged with IP offenses because they are instead charged with crimes carrying higher statutory maximum sentences, or because the IP charges are dismissed pursuant to plea agreements to more serious charges. We found that over the 6-year period of our review, about 17 percent of the total number of defendants received prison sentences of more than 3 years, while about 45 percent were sentenced to imprisonment of 12 months or less. Agencies have not taken key steps to assess IP enforcement achievements. Specifically, most agencies have not (1) conducted systematic analyses of their enforcement activity, (2) clearly identified which of their efforts relate to a key IP enforcement area—IP crimes that affect public health and safety—or collected data to track these efforts, or (3) set performance measures or targets for carrying out IP enforcement. These steps are an important part of agencies’ ability to effectively plan and assess their performance and report to Congress and others. Although agencies’ statistics show general increases in the level of seizures, investigations, and prosecutions, they have not taken steps to understand the drivers behind these increases in ways that could better inform management and resource allocation decisions. For example, while all the agencies reported using IP enforcement statistics to compare outputs from one year to the next, our discussions with agency officials revealed that little has been done to systematically examine enforcement statistics. Such analysis might include looking at field offices or regions with higher or lower levels of activity to identify effective enforcement practices and inform resource allocation decisions. 
It might also include identifying the types of IP crimes that agency staff are enforcing to understand criminal activity and help focus enforcement efforts. Agencies are already collecting some data that could be used to examine enforcement efforts more systematically. In April 2007, we reported that CBP has not analyzed variations in its IP enforcement activity by port or conducted analysis of ports’ relative enforcement outcomes. By analyzing available CBP data, we found pockets of enforcement activity in some areas. For example, a majority of CBP’s seizure actions took place in a limited number of locations, with nearly three-fourths of aggregate seizure value accounted for by only 10 of more than 300 ports. These are a mix of ports, including a few of the nation’s largest and some that are smaller. In that report, we made recommendations to CBP to better analyze, and thereby better understand and improve, its IP enforcement activity. We performed a similar analysis for DOJ using data on the number of defendants charged and number of cases filed by U.S. Attorney’s Offices and also found concentrations of prosecution activity. For example, about 50 percent of IP-related cases were filed by around 10 percent of U.S. Attorney’s Offices during fiscal years 2001 through 2006. The same was true for the number of defendants charged with IP crimes. We also compared the U.S. Attorney’s Offices with the highest IP enforcement activity with the locations where CHIP units were created as of fiscal year 2006. Of the top 10 offices, ranked by number of IP cases filed in 2006, 7 had CHIP units, and the 2 most active offices had the largest CHIP units, measured by the number of attorneys working in the unit. This analysis suggests that the level of resources in a particular field office contributes to higher levels of activity; however, according to DOJ, other factors, such as crime level, can also affect activity levels. 
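The concentration analysis described above—determining what share of offices accounts for half of all cases filed—can be sketched as follows; the per-office case counts below are hypothetical, not actual U.S. Attorney’s Office data:

```python
# Sketch of a concentration analysis like the one described above.
# Case counts are hypothetical illustrations, not DOJ data.

def share_of_offices_for_half(cases_by_office: list) -> float:
    """Fraction of offices (busiest first) that accounts for half of all cases."""
    total = sum(cases_by_office)
    running = 0
    for offices_needed, count in enumerate(sorted(cases_by_office, reverse=True), 1):
        running += count
        if running >= total / 2:
            return offices_needed / len(cases_by_office)
    return 1.0

# 10 hypothetical offices: two very active, the rest far less so.
cases = [40, 35, 5, 4, 4, 3, 3, 2, 2, 2]
print(share_of_offices_for_half(cases))  # prints 0.2: 2 of 10 offices file half
```

With these assumed counts, 20 percent of offices account for half of all cases, an even sharper concentration than the roughly 10-percent figure reported above.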
Our analyses illustrate the types of analysis that agencies can perform using their data, and the insights they can obtain, to better inform management and resource allocation decisions. DOJ said that it performed similar analysis before deciding where to place CHIP units, but did not provide evidence that it conducts such analysis on a routine basis. While all the agencies collect statistics to report broadly on their IP-related enforcement activities, most of the agencies have not clearly identified which IP enforcement actions relate to public health and safety and do not have data to track their efforts in this area, despite making this a priority enforcement area. By virtue of its mission, FDA’s data on IP-related enforcement specifically reflect its efforts to address IP violations that affect public health and safety. CBP has recently begun to monitor IP seizures related to public health and safety. In January 2008, it released seizure data for fiscal year 2007 that for the first time identified seizures in product categories that may involve public health and safety, e.g., pharmaceuticals, electrical articles, and sunglasses. CBP officials told us that defining public health and safety seizures is difficult because not all seizures in a given category pose public health and safety risks, and such risks can be found across a broad range of products. The other agencies lack data for identifying IP enforcement actions related to public health and safety. For example, ICE records IP enforcement under a general data field that applies to all types of IP cases. FBI and DOJ have some subcategories for the types of IP investigations and prosecutions they pursue, but none is specific to public health and safety. 
Without specific data and definitions for IP-related enforcement efforts that affect public health and safety, agencies are unable to effectively track outcomes, inform management and resource allocation decisions, and report to Congress on an area of significant public importance. Agencies have also taken few steps to establish performance measures specifically for their IP-related enforcement activities or to set performance targets for tracking their progress. We reviewed agencies’ strategic plans and, while none had specific goals on IP enforcement, the CBP and DOJ plans listed IP enforcement as one issue to be addressed as part of working toward broader enforcement goals. We also examined agencies’ public and internal planning documents or memos for IP enforcement and found that some had goals and objectives but contained few performance measures or targets. Moreover, most of these are internal agency documents that are not available to the public. Neither ICE nor FDA has any additional planning documents for IP enforcement. We asked agencies how they monitor their performance of IP enforcement activities. Most said they regarded their increasing trends in aggregate IP statistics (or outputs) as indicative of their progress. However, without performance measures related to these statistics, it is not clear how these statistics should be assessed because it is not clear what the agencies sought to achieve. We recognize that establishing measures and setting specific targets in the law enforcement area can be challenging. It is important that agencies carry out law enforcement actions that are based on merit and avoid the appearance that they strive to achieve certain numerical quotas, regardless of case quality. 
By definition, performance measures are a particular value or characteristic used to quantify a program’s outputs – which describe the products and services delivered over a period of time – or outcomes – which describe the intended result of carrying out the program. A performance target is a quantifiable characteristic that establishes a goal for each measure; agencies can determine the program’s progress, in part, by comparing the program’s measures against targets. The Government Performance and Results Act of 1993 incorporated performance measures as one of its most important features, and the establishment and review of performance measures are a key element of the standards for internal control within the federal government. We believe that measures and targets remain important components of measuring agency performance and enhancing accountability, particularly setting outcome-based measures that provide insight into the effectiveness of agencies’ efforts, not just levels of activity. More refined performance measurements that include outcome measures would allow agencies to better track their IP enforcement performance against their goals and give managers crucial information on which to base their organizational and management decisions. Performance assessment is also important in reporting progress to others, such as the IP Coordinator and NIPLECC. Doing so could help NIPLECC address its strategic planning weaknesses that we previously identified in our November 2006 report. The National Intellectual Property Rights Coordination Center, an interagency mechanism created by the executive branch to improve federal IP enforcement and coordinate investigative efforts between ICE and FBI, has not achieved its mission or maintained the staffing levels set for it upon its creation. 
The center—intended to collect, analyze, and disseminate IP-related complaints from the private sector to ICE and FBI field offices for investigation—has suffered from a slow start, a lack of common understanding about its purpose and agencies’ roles, and limited private sector complaint information. As a result, the center has gradually shifted its focus toward educating the private sector about federal IP enforcement efforts. Congressional appropriators expressed support for the center’s original concept through various conference reports, which, over time, directed participating agencies to allocate appropriated funds to staff and operate the center. However, staffing levels have declined and the FBI no longer participates in the center. Plans are underway to move the center to a new location in early 2008, and according to officials from the other four key agencies, they have met with ICE to discuss what role their agencies might play in the center in the future. The National Intellectual Property Rights Coordination Center is one of several interagency mechanisms for coordinating federal IP enforcement efforts. Unlike NIPLECC, which was established in law by Congress in 1999, the idea for creating the center arose from the work of the National Security Council’s Special Coordination Group on Intellectual Property Rights and Trade Related Crime, co-chaired by the FBI and legacy Customs. This group was formed in order to implement Presidential Decision Directive 42, issued in 1995, concerning international crime. In 1999, a consensus of the group members resulted in a multi-agency plan to improve the U.S. government’s efforts in IP enforcement, and the center was created. According to ICE officials at the center, the center was directed by legacy Customs and included staff from Customs and FBI. After the formation of DHS, ICE took over legacy Customs’ role in directing the center and providing most of the DHS staff that were assigned to the center. 
While the center and NIPLECC were both created to improve coordination among law enforcement agencies, the concept for the center gave it a greater operational focus than NIPLECC. The executive branch intended that the center would act as a hub for the collection, analytical support, and dissemination to investigative agencies of IP-related complaints from the private sector, including copyright infringement, trademark infringement, and theft of trade secrets. It envisioned that the center would coordinate and direct the flow of criminal referral reports on IP violations to the participating agencies’ investigative resources in headquarters and the field. In carrying out these roles, the center was expected to help integrate domestic and international law enforcement intelligence, consult regularly with the private sector, and generally act as a resource for IP complaints. Congressional support for the center’s creation and role was noted through directives in various conference reports related to appropriations laws in fiscal years 2001 through 2004. These reports indicate that Congress also expected the center to be a dedicated effort to improve intelligence and analysis related to IP rights violations and gather IP enforcement information from other federal and state law enforcement agencies to augment investigations. Like NIPLECC, the center has had difficulty defining its purpose and carrying out its law enforcement coordination mission. According to ICE, FBI, and DOJ officials and our analysis, the center has not achieved its original mission for several reasons: The center got off to a slow start with limited operations in fiscal year 2000, and it took several years for it to become fully operational. For example, in 2004, we reported that many center staff were reassigned after the events of September 11, 2001, according to an FBI official. 
In addition, a change in leadership after the formation of DHS and the relocation of the center to new physical space in 2006 further impacted the continuity of the center’s operations. The flow of complaint information from the private sector to the center never materialized sufficiently to make the concept work, according to ICE and FBI officials. We reported in 2004 that the center was not widely used by industry, and this situation has persisted. For example, few of the private sector representatives that we contacted described working through the center to address their IP complaints. Participating agencies never reached agreement on how the center would operate and what their respective roles would be. FBI provided us a copy of a draft memorandum of understanding that it said it presented to ICE in fiscal years 2003, 2004, and 2005, to clarify operating procedures and agency roles. FBI also provided a copy of a 2004 letter from ICE acknowledging receipt of the draft memorandum and associated documents and indicating its intent to meet with FBI to discuss the matter. However, FBI officials said that neither ICE nor DHS followed up with FBI on this issue. ICE officials acknowledged having seen the memorandum of understanding in draft form but had no record or recollection of any discussions being held with FBI to discuss the memorandum. Over time and in the absence of complaint information, the center began focusing on educating the private sector about federal IP enforcement agencies, approaches, and contacts, according to ICE officials at the center. Center staff participate in conferences, training programs, and trade shows around the country in which they disseminate information about federal IP enforcement to the private sector. For example, center staff participated in 60 outreach and training events in fiscal year 2006 and 95 in fiscal year 2007. 
In addition, in 2007, ICE officials said the center began scheduling training sessions in selected cities around the country in which they bring together appropriate federal, state, and local law enforcement agencies and private sector representatives. The purpose of the training is to explain the region’s IP enforcement structure and strengthen involvement of the participants. Through various conference reports, congressional appropriators supported the creation and staffing of the center by FBI, legacy Customs, and ICE, but agencies’ staffing levels at the center have declined. According to ICE officials, the center’s original concept envisioned 24 staff—16 from Customs and 8 from FBI. They said staff were to include a Director, investigative agents, intelligence analysts, and administrative support. The types of staff envisioned for the center further distinguish it as an operational entity compared to NIPLECC, which is not designed to carry out law enforcement. After the formation of DHS, the 16 Customs positions were transferred to DHS and taken over by as many as 16 ICE staff and 2 CBP staff. However, according to ICE and FBI officials, each agency’s staffing allotment has only periodically met the envisioned levels, and total staff currently at the center are about one-third of the level originally envisioned. Conference reports for the fiscal year 2001 through 2004 appropriations bills, at various times, indicated a desire for FBI, legacy Customs, and ICE to allocate funding for staffing and/or operations of the center. For example, in fiscal year 2001, the conference report directed FBI to allocate $612,000 to provide eight positions to the center. In fiscal year 2002, the conference report directed legacy Customs to allocate $5 million to support the hiring of agents dedicated to IP enforcement and to support and enhance the operation of the center. 
In fiscal year 2003, the conference report directed legacy Customs to allocate $5 million to continue center operations and $1.4 million to expand the center and its staffing. Congressional conferees encouraged Customs to use a portion of the funds to establish the clearinghouse for referrals. In fiscal year 2004, the conference report directed ICE to allocate $6.4 million to the center. We asked agencies how they responded to the conference report directives, with agencies responding as follows: FBI officials told us that the funding enabled them to authorize and begin filling positions noted in the congressional conference reports. FBI filled or nearly filled all eight positions during fiscal years 2001 through 2005. In fiscal year 2006, FBI continued to fill six of the positions, but removed its computers from the center due to security concerns and gradually had its staff spend less time working out of the center. Since fiscal year 2007, due to resource constraints, none of the FBI positions has been filled, and the FBI no longer participates in the center. CBP officials said that their records showed that in fiscal year 2002 legacy Customs placed seven staff (including two agents and four intelligence research specialists) in the center and assigned additional agents and intelligence research specialists to certain field offices and overseas locations to carry out IP enforcement. In fiscal year 2003, Customs officials told us they placed more agents and intelligence analysts in certain field locations and headquarters, but could not provide us with specific numbers. According to the Director of the center, following the formation of DHS, the two CBP positions were filled in 2004 but have been vacant for several years. 
ICE provided data indicating that, since fiscal year 2004, it spent about $3 million on investigative activities, set aside about $1.9 million for future construction costs for the center, spent about $1.2 million on direct salary costs, and spent the remainder on operating costs for the center. ICE staffing levels at the center have declined from 15 in 2004 to 8 in 2007. In early 2008, ICE plans to move the center to a new location that is being configured specifically for the center and some additional functions. According to ICE officials, the new center will continue to focus on private sector outreach. The role that the center will play in coordinating referrals and investigations among the IP enforcement agencies, however, remains unclear. ICE officials said they view the relocation as an opportunity to return the center to its original concept and purpose. NIPLECC’s IP Coordinator said that as an entity staffed by, and located in, a law enforcement agency, the center can play a role in facilitating law enforcement coordination at an operational level that NIPLECC cannot. However, the IP Coordinator agreed that there are mixed views among IP enforcement agencies about the usefulness of the center. In preparation for the move, ICE officials said they had met with FBI, DOJ, CBP, and FDA to offer them space in the center and ask them to permanently assign staff there; however, agencies’ reactions are mixed. FDA plans to staff one special agent at the center initially and will send additional agents later if its workload at the center justifies additional staff. FDA officials said that the agency decided to staff an agent at the center despite its limited resources because counterfeit drugs pose a significant threat to the public health and are a high priority to FDA. 
According to an official in FDA’s Office of Criminal Investigations, a significant portion of FDA’s counterfeit drug investigations are conducted jointly with ICE, and the center may facilitate a coordinated law enforcement approach. According to DOJ and FBI officials, staff will not be placed at the center unless there is a more operational focus in addition to the training and outreach currently provided. More specifically, DOJ and FBI would like there to be some initial analysis and investigation after an industry referral is received at the center before information is passed on to field investigative agents. Further, even if the center takes on a more operational focus, FBI would have to request additional staff resources to be able to assign personnel, since none is currently available. CBP officials said they do not plan to allocate any staff to the center. According to ICE and FDA officials, no discussions have taken place to outline the purpose of the new center or define how agencies would coordinate their enforcement activities at the center. Federal IP enforcement agencies confront growing challenges in protecting the United States against counterfeit and pirated goods. IP crimes appear to be on the rise, and the key law enforcement agencies and FDA need to work efficiently and effectively to contend with this trend. Most federal IP enforcement activity has increased in recent years. However, because IP enforcement is generally not a top agency priority, few resources are dedicated solely to this task, and agencies may devote fewer resources to IP enforcement than to higher priority issues. Despite the general increases in IP enforcement activity, agencies have taken little initiative to improve their data or evaluate their enforcement activity in ways that would enable them to identify and track certain trends or enforcement outcomes, such as regional variations in enforcement activity and the types of IP-infringing goods most often involved in enforcement actions.
Performing this type of analysis could help the agencies make further improvements in their IP enforcement activity by making more effective management decisions and resource allocations. At the same time, setting performance measures and targets for IP enforcement activities could help the agencies better assess their progress toward their goals. Finally, collecting better data, analyzing them, and reporting on progress toward goals could help make the key IP enforcement agencies more accountable to the public and Congress, particularly regarding their efforts to address IP infringement that affects public health and safety. The need for such improvements among IP enforcement agencies mirrors weaknesses we found previously with NIPLECC, in which the lack of clarity over performance measures, resource requirements, and oversight responsibilities limited NIPLECC’s ability to prioritize, guide, implement, and monitor the combined efforts of multiple agencies to protect and enforce IP rights. One area where IP enforcement has not increased is CBP’s enforcement of exclusion orders. U.S. companies spend millions of dollars to argue their allegations of IP infringement before the U.S. International Trade Commission, but the Commission relies on CBP to enforce its decisions. CBP has allocated few resources to carry out its role in this complex area, lacks data to track its enforcement of exclusion orders, and has not given sufficient attention to addressing the procedural weaknesses that we identify. Given the potential for these orders to affect large volumes of trade, CBP has a responsibility to improve its enforcement of exclusion orders. As agencies consider ways to further improve federal IP enforcement, the relocation of the National Intellectual Property Rights Coordination Center presents an opportunity for NIPLECC and the key IP enforcement agencies to reassess the need for law enforcement coordination in this area and the best way to achieve it.
As part of this discussion, NIPLECC and the agencies need to examine the center’s mission, what outcomes they expect from the center, and what role key agencies should play, if any, in the center’s future. Given Congress’ sustained interest in improving federal IP enforcement and its past support for the center, providing this information could help better inform Congress about what contributions to IP enforcement it should expect from the center. To better inform management and resource allocation decisions and report on agency achievements, we recommend that the Attorney General and the Secretaries of Homeland Security and Health and Human Services direct their agencies to take the following four actions:

For ICE, FBI, FDA, and DOJ: systematically analyze enforcement statistics to better understand variations in IP-related enforcement activity.

For CBP: continue to take steps to better identify IP seizures that pose a risk to the public health and safety of the American people, and collect and report this data throughout the agency and to Congress.

For ICE, FBI, and DOJ: take steps to better identify enforcement actions against IP-infringing goods that pose a risk to the public health and safety of the American people, and collect and report this data throughout each agency and to Congress.

For CBP, ICE, FBI, and DOJ: establish performance measures and targets for IP-related enforcement activity and report such measures, targets, and actual performance to NIPLECC and Congress.
To better inform Congress and affected rights holders regarding its enforcement of exclusion orders and address certain procedural weaknesses, we recommend that the Secretary of Homeland Security direct the Commissioner of CBP to take the following three actions:

identify factors currently limiting its enforcement capabilities and develop a strategy for addressing those limitations along with a timeline for implementing the strategy;

begin collecting data on the number of exclusions, in total and per exclusion order; and

examine CBP’s ability to develop regulations to allow notification of exclusions to affected rights holders, and if authorized, develop such regulations.

To clarify the mission and structure of the National Intellectual Property Rights Coordination Center, we recommend that the Attorney General and the Secretary of Homeland Security, in consultation with NIPLECC, direct their IP enforcement agencies to take the following three actions:

reassess the National Intellectual Property Rights Coordination Center’s mission and how its future performance will be assessed;

define agencies’ role in the center and the number and types of resources needed to operate the center; and

report to Congress on the center’s redefined purpose, operations, required resources, and progress within 1 year of the center’s relocation.

We provided a draft of this report to DHS, DOJ, and HHS for their review and comment. CBP and ICE provided comments through DHS. DHS, CBP, and ICE concurred with our recommendations. DOJ did not indicate whether it agreed or disagreed with our recommendations. HHS commented that it disagreed with our recommendation that FDA develop performance measures and targets for IP enforcement. In light of the agency’s public health and safety mission, we determined that it was inappropriate to require FDA to develop law enforcement-related measures and targets, and no longer recommend this.
However, given the importance of understanding the nature of IP violations that affect public health and safety, we now recommend instead that FDA more systematically analyze its IP enforcement statistics (see p. 43). We believe this is a more appropriate recommendation because FDA said that it already monitors its IP enforcement criminal investigations to discern trends. In response to other comments the agencies made, we also modified two recommendations to give the agencies more flexibility in identifying which of their IP enforcement actions relate to public health and safety. Instead of recommending that the agencies create categories and definitions of such actions, as we did in the draft report, we recommend that they take steps to better identify these actions (see p. 43). A summary of each agency’s comments and our evaluation follows. CBP commented that the report inaccurately states that it lacks data and definitions for IP-related enforcement efforts that impact public health and safety, saying it reported this data in its fiscal year 2007 seizure statistics. In response, we modified the final report to note that CBP began reporting on IP seizures related to public health and safety for the first time in January 2008 (see p. 33). CBP also commented that the report’s finding that it lacks performance measures for IP enforcement is not completely accurate and cited its “National IPR Trade Strategy.” We added information to the final report about this document (see p. 34), but continue to believe that CBP needs to incorporate IP enforcement measures and targets into its agency-wide strategic plan, which it has said it intends to do. Finally, CBP repeated comments made about our April 2007 report regarding an analysis that we proposed it could undertake to better understand its enforcement outcomes. 
We disagreed with CBP’s comments at that time and continue to believe that CBP, and the other agencies, can make better use of existing data to understand their IP enforcement efforts and outcomes. DHS’s written comments and our detailed response appear in appendix IV. DOJ made several comments about ways in which it believes the report understates its IP enforcement achievements. For example, DOJ cited percent increases between select years for certain indicators to demonstrate its increased enforcement results. However, the report takes a more systematic approach to evaluating overall federal IP enforcement efforts by examining multiple indicators at multiple agencies over a 6-year period. We believe that the report’s approach and assessment are fair and valid. DOJ also commented that we did not sufficiently acknowledge increases in training and resource allocations for IP enforcement, particularly as it relates to its CHIP units. In fact, as was true for the draft report, the final report discusses growth in CHIP units and numbers of IP-trained attorneys (see p. 20). Finally, DOJ commented that the report inaccurately characterizes its efforts to analyze IP enforcement statistics by district. We modified the report to add information that DOJ analyzed IP enforcement statistics when deciding where to place CHIP units; however, DOJ never provided evidence that it conducts such analysis on a routine basis (see p. 33). We continue to believe that systematically conducting such analysis can help DOJ determine whether its allocation of resources is producing the kind of increases in IP enforcement outcomes that it desired. DOJ commented that the report inaccurately describes its efforts to establish performance measures or goals to assess its IP enforcement achievements.
In response, we added information to the discussion of performance measurement about certain DOJ documents that contain such goals and measures, and cited again the DOJ task force reports on IP enforcement, which had been mentioned earlier in the report (see p. 34). However, the task force reports contain only recommendations for DOJ action, not goals with associated performance measures. A few of these recommendations are structured like performance goals, such as “target large, complex organizations that commit IP crime” or “prosecute IP offenses that endanger the public’s health or safety,” but the task force report provides no indication of how DOJ will measure progress toward these recommendations. DOJ commented that developing numeric or percentage targets linked to its performance measures could create the potential for case quotas or thresholds. We agree that setting performance measures and targets in the law enforcement arena is difficult, and we added information to the report to further clarify the sensitivities associated with doing this (see p. 35). However, we continue to believe that it is important, and possible, for DOJ to develop performance measures and targets to help it, and others, determine whether its overall IP enforcement efforts are achieving performance goals and focused on the right issues, and whether the resources devoted to this area are contributing to the desired results. DOJ’s written comments and our detailed response appear in appendix V. HHS expressed concerns, similar to those raised by DOJ, about setting performance measures and targets. While we no longer direct this recommendation to FDA, we continue to believe that it is important and possible for law enforcement agencies to set useful performance measures and targets to guide and assess their efforts. FDA’s written comments and our detailed response appear in appendix VI.
DHS, DOJ, and HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees and the Secretaries of the Departments of Homeland Security and Health and Human Services; the Attorney General; the Chairman of the U.S. International Trade Commission; and NIPLECC’s IP Coordinator. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or yagerl@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. The Ranking Minority Member of the Senate Subcommittee on Oversight of Government Management, the Federal Workforce and the District of Columbia, Committee on Homeland Security and Governmental Affairs, asked us to (1) examine federal agencies’ roles, priorities, and resources devoted to intellectual property (IP) enforcement, (2) evaluate agencies’ IP-related enforcement statistics and achievements, and (3) examine the status of the National Intellectual Property Rights Coordination Center. Based on our previous work and background research, we determined that the key federal law enforcement agencies carrying out IP enforcement are Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), the Federal Bureau of Investigation (FBI), and the Department of Justice (DOJ). In addition, we included the Food and Drug Administration (FDA) due to its role in investigating counterfeit versions of products it regulates. 
To describe the federal structure that carries out IP enforcement, we met with CBP, ICE, DOJ, FBI, and FDA officials at the agencies’ headquarters, and, for all agencies except FDA, met with officials in multiple field locations. The locations we visited are not disclosed in this report for law enforcement reasons. We also met with the International IP Enforcement Coordinator (IP Coordinator). We reviewed agency documents to understand policies and practices related to IP enforcement and discussed the processes by which these agencies interact with each other in conducting IP enforcement. We also reviewed prior GAO reports that examined the federal IP enforcement structure, agencies’ roles, and key coordinating mechanisms. To determine agencies’ IP enforcement priorities, we examined strategic and other planning documents, including agency memos detailing goals and objectives related to IP enforcement. In some instances, agency documents were law enforcement sensitive; therefore, the details have not been included in the report and only information that was discussed openly in interviews or in public documents and forums has been used. To determine resources dedicated to IP enforcement, we spoke with agency officials, obtained data on the number of staff dedicated to IP enforcement, and analyzed data, where available, on staff time spent on IP enforcement. In particular, we obtained data on (1) the number of criminal investigative case hours that ICE and FDA field offices recorded under codes used to track IP enforcement; and (2) the average number of agents on board that were working IP criminal cases, as reported by FBI field offices. We obtained data covering fiscal years 2001 through 2006, except for FDA investigative case hours for counterfeit products, which the agency has only been tracking since fiscal year 2003. We reviewed these data for obvious errors and consistency with publicly reported data, where possible.
When we found discrepancies, we brought them to the attention of relevant agency officials and worked with them to correct the discrepancies before conducting our analyses. On the basis of these efforts, we determined that these data were sufficiently reliable for our purposes. To make similar comparisons across the agencies, we converted ICE and FDA data on criminal case hours into full-time equivalents (FTE) using information that the agencies provided and confirmed with FBI officials that we could use FBI’s measurement as equivalent to the FTE measurement for time spent on ICE and FDA IP investigations. To examine agencies’ IP enforcement activity, we analyzed data from fiscal year 2001 to fiscal year 2006 on CBP IP seizures, penalties, and exclusion activities; the number of criminal cases opened in ICE, FBI, and FDA’s Office of Criminal Investigations field offices that were recorded as IP enforcement cases; ICE, FBI, and FDA arrests, indictments, and convictions stemming from their IP investigations; and the numbers of referrals of IP cases to DOJ from the investigative agencies, IP cases that DOJ filed, defendants charged in those cases, defendants convicted of IP crimes, defendants imprisoned, and sentences awarded. Information on CBP seizures and penalties is drawn from our April 2007 report. In addition, we obtained data from CBP on its Trade Alerts as of July 2007, as well as the number of targeting instructions it had in place for each Trade Alert in each of fiscal years 2003 through 2006 and the number of exams, IP violations, and seizures it has recorded as a result of those instructions. We discussed key law enforcement activities with ICE, FBI, FDA, and DOJ and determined what data the agencies record and what activities they report on internally. We then asked them to extract data from their systems on these key activities when they were performed for IP enforcement.
For the most part, investigative agency data reflect activities that are coded as IP enforcement, while DOJ data reflect activities in which key IP enforcement statutes are cited. In general, the agencies said that the data they provided reflected most, but perhaps not all, of their activity related to IP enforcement. In order to collect uniform data on IP enforcement activities, we worked with each agency to develop the parameters by which we would request data from their systems. In addition, we worked with officials at each agency to develop a thorough understanding of the data that we received. We reviewed the data we obtained for obvious errors and consistency with publicly reported data, where possible. When we found discrepancies, we brought them to the attention of relevant agency officials and worked with them to correct the discrepancies before conducting our analyses. For example, we determined that CBP provided information on Trade Alerts that related to Exclusion Orders that were no longer in effect. CBP agreed and revised the number of Trade Alerts on its Web site. Also, the data we report on ICE’s arrests, indictments, and convictions are different from data it has reported publicly in the IP Coordinator’s quarterly IP enforcement updates. ICE officials said that the system from which it obtains this data is a “live system,” meaning that data pulled from the system on different dates may not be the same. ICE officials cited updates to case information as one reason that data might differ over time. In addition, the parameters that ICE advised us to use when requesting ICE’s data on IP enforcement cases differed somewhat from the parameters that ICE used. Finally, we found some inconsistencies with FBI’s IP enforcement data. We discussed these discrepancies with FBI and made changes to the data accordingly. We asked FBI officials familiar with the agency’s IP enforcement efforts to review the final data set for accuracy. 
We did not find discrepancies with FDA or DOJ data and used the most current data sets they provided for the 6 fiscal years we requested. Based on our discussions of internal controls and ability to address data discrepancies with the agencies, we determined that the data are sufficiently reliable to report IP enforcement activity. To assess federal agencies’ achievements in IP-related enforcement activity, we reviewed agency priorities, goals, and objectives and compared them to the types of data agencies collected. We also asked program officials how they used their IP enforcement data to assess performance and inform management and resource allocation decisions. We also talked to private sector representatives to better understand how counterfeiting and piracy affect their businesses and to obtain their views on federal IP enforcement. We obtained company contacts from conferences, federal agencies working with the private sector, and our own research. We developed structured interview questions to understand industry views regarding federal IP enforcement efforts and private companies’ own efforts to protect their IP. We selected eight sectors based on our participation in trade conferences and discussions and information from organizations such as the U.S. Chamber of Commerce that have anti-counterfeiting campaigns and are affected by counterfeiting and piracy. We interviewed 22 companies and 8 industry associations across those sectors. The sectors we selected were: consumer electronics, entertainment and media, luxury goods and apparel, health and food, Internet, pharmaceutical, software, and manufacturing. For the most part, we interviewed at least one industry association and two companies in each sector. Most of the companies we spoke with were large companies because the prevalence of their brands in the market has made them targets for counterfeiting and piracy.
We analyzed industry interviews using a systematic coding scheme to identify common themes and responses to our questions. To examine the intended purpose and funding of the National Intellectual Property Rights Coordination Center, we met with ICE and FBI officials associated with the center to discuss its evolution, role, and staffing levels; reviewed agency documents that articulated the center’s purpose; and analyzed Congressional budget documents that reflected funding related to the center. Specifically, we reviewed appropriation legislation and related reports of the House and Senate Committees on Appropriations and relevant subcommittees for fiscal years 2001 through 2006 to determine what funds and additional instructions were provided to ICE, FBI, and legacy Customs related to staffing and operating the center. We then requested information from ICE, FBI, and CBP about what funds were received and how the funds were used. We also discussed the center’s future role with ICE, FBI, FDA, and DOJ officials, and the NIPLECC IP Coordinator. We conducted this performance audit from December 2006 through March 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The federal government plays a role in granting protection for and enforcing IP rights. It grants protection by approving patents or registering copyrights and trademarks. 
These IP rights grant registrants limited exclusive ownership over the reproduction or distribution of protected works (copyright), the economic rewards the market may provide for their creations and products (trademark), or the right to exclude others from using, making, and selling devices that embody a claimed invention (patent). The federal government enforces IP rights by taking actions against those accused of their theft or misuse. Enforcement actions include both civil and criminal penalties. U.S. laws criminalize certain types of IP violations, primarily copyright and trademark violations, and authorize incarceration or fines. These laws are directed primarily toward those who knowingly produce and distribute IP-infringing goods, rather than those who consume such goods. Although U.S. laws do not treat patent violations as a crime, the federal government does take actions to protect patents and authorizes civil enforcement actions against infringers. Table 3 summarizes federal protection and enforcement of IP rights under U.S. law. A number of companies have been affected by counterfeiting and piracy, particularly as criminal activity has increased in recent years. As part of our review of federal IP enforcement efforts, we identified companies and industry associations that are actively involved in anti-counterfeiting and piracy activities. We interviewed 8 industry associations and 22 companies across 8 sectors, including consumer electronics, luxury goods and apparel, pharmaceuticals, and software. The views obtained through these interviews cannot be generalized across sectors or the industry overall, given our small sample size. Industry responses produced a mix of views on federal efforts to enforce intellectual property rights, with some companies reporting positively about specific agency actions and others more critical of federal actions.
A selection of industry views by sector is presented below based on analysis and synthesis of interview responses around common themes. For the most part, each bullet represents a different company or association representative. These views are not direct quotes and have been edited as needed for clarity and readability. Table 4 highlights industry views on the impact of counterfeiting and piracy. Some industry representatives expressed concern about the federal government’s ability to carry out IP enforcement due, in part, to a lack of dedicated resources. While several companies said that federal IP enforcement efforts have increased, 14, or nearly half, of the representatives we contacted said there is a shortage of resources to carry out IP enforcement. For example, one company we interviewed said that CBP has made improvements over the last couple of years, but the scope of its efforts is still not up to the problem, and that more resources are needed to perform risk analysis and modeling to determine the origin of counterfeit goods. Another company representative said that the task is large compared to the federal resources applied, especially because the number of counterfeiters is increasing but federal resources have remained constant. Companies reported increasing their own resources to focus on IP enforcement, with 15 stating that they employ or contract private investigators and/or have in-house resources dedicated to IP investigations and anti-counterfeiting activity. Table 5 highlights specific representatives’ statements about the level of federal resources dedicated to IP enforcement. Representatives from 12 out of 30 companies and associations we interviewed told us that better information sharing is needed between the public and private sector; for example, one company representative said that agencies should let companies know whether the information they pass on to law enforcement is useful.
In the case of CBP seizures, some representatives remarked on the need to obtain more detailed information about imports suspected of infringing on their products, such as the origin of the shipments. One company representative commented that it used to get information on suspect products from CBP officers, but it has not received this type of information from CBP recently. One company representative said that the company has referred information to the National Intellectual Property Rights Coordination Center, but has rarely received feedback on whether the information it provided was useful. Another company said that it has to continuously follow up to get updates. Several companies and associations we interviewed remarked that the federal IP enforcement structure is not very clear, and companies, particularly smaller ones, have a hard time knowing whom to contact for IP issues. For example, one association said that there is no formal process for referring cases to law enforcement and that information on the structure needs to be clearer and more efficient. While larger companies may be more familiar with agencies’ procedures and contacts, smaller companies do not know where to begin. Another association said that agency responsibilities are unclear and may overlap. Table 6 highlights industry representatives’ general comments on their coordination with federal IP enforcement agencies. Several of the company representatives commented that increased training efforts for federal officials who carry out IP enforcement have strengthened IP enforcement efforts. Table 7 highlights private sector comments on this issue. Industry representatives cited various areas that could be improved upon to increase overall IP enforcement, including a need to better train federal prosecutors and better inform consumers about the risks posed by counterfeit and pirated goods. Table 8 highlights areas private sector representatives identified for improved IP enforcement.
The following are GAO’s comments on the Department of Homeland Security’s letter dated February 26, 2008. 1. We discussed health and safety issues with CBP during our review. In January 2008, CBP released seizure data for fiscal year 2007 that for the first time identified seizures in product categories that may involve public health and safety, e.g., pharmaceuticals, electrical articles, and sunglasses. We commend CBP for taking this step and modified our report to reflect this new data (see p. 33). These data are publicly available; therefore, GAO did not have to request them from CBP. We added information to the final report to state that CBP officials also told us that creating a definition of IP seizures that affect public health and safety is difficult because not all products within a given category necessarily pose such risks and the potential for such risks cuts across a broad range of products (see pp. 33-34). We modified our recommendation to state that CBP should continue to take steps toward better identifying IP seizures that pose a risk to public health and safety of the American people, and collect and report this data throughout the agency and to Congress (see p. 43). 2. We reported on CBP’s IP Rights Trade Strategy (a document that CBP refers to in its letter as the National IPR Trade Strategy) in our April 2007 report. We added information to this report to describe this trade strategy and note that it contains certain measures and indicators related to IP enforcement (see p. 34). However, we also noted, as we did in our April 2007 report, that CBP officials told us this trade strategy was an internal planning document, and we determined it had limited distribution across CBP. For example, we found that revisions to the document had not been distributed to CBP ports since 2003 and given the document’s status as “For Official Use Only,” it is not distributed to Congress or the public.
Therefore, we concluded in April 2007 that this document, while containing certain measures and indicators, has limited usefulness for holding CBP accountable for its performance on IP enforcement. At that time, we recommended that CBP work with OMB to include IP enforcement-related measures in its strategic plan and are pleased that CBP states in its current comment letter that it is in the process of taking such action. 3. We do not understand why CBP is making comments in this report about analysis that appeared in our April 2007 report but is not reproduced in this report. That analysis showed that among the top 25 IP-importing ports in fiscal year 2005, many ports’ IP seizure rates (measured by value of IP seizures over value of IP imports) were lower than the group average. We did this analysis because CBP had not attempted to analyze its IP enforcement outcomes in this way. CBP made these same comments in April 2007; we disagreed with how CBP characterized our work at that time and continue to stand by our analysis. In that report, we said that this and other types of analysis contained in our April 2007 report represented approaches that CBP could take to better understand variations in IP enforcement outcomes across ports, inform resource allocations and management decisions, and further improve its IP enforcement. We continue to believe that CBP, and the other agencies, can better use existing data to understand their IP enforcement outcomes across field locations or product types as a way of further improving overall IP enforcement. 4. See comment 1. The following are GAO’s comments on the Department of Justice letter dated February 21, 2008. 1. We disagree that our report severely understates DOJ’s enforcement activities. Our analysis of federal IP enforcement efforts is a systematic evaluation of trends in key agencies’ enforcement indicators over a 6-year period.
Although there were fluctuations (i.e., increases and decreases) in individual indicators from year to year, we concluded that all the indicators, when taken as a whole, showed a general increase in federal IP enforcement efforts from the beginning to the end of the period examined. Moreover, we considered all indicators together because no single indicator from any one agency sufficiently reflects overall trends. The DOJ statistics we examined also showed increases and decreases during the time period. However, in its letter, DOJ selected statistics that only reflect increases, and it did so in one instance by comparing the lowest and highest levels for a given indicator, regardless of the year in which they occurred, which generated the highest possible percent increase for that indicator. We do not believe DOJ’s analysis is useful for discerning overall long-term trends. 2. Our report is based on agency data covering fiscal years 2001 through 2006. In several cases, fiscal year 2007 data became available as we were finalizing our report. However, given the challenges we faced in obtaining sufficiently reliable data from all agencies for the period we studied (see appendix I, pp. 48-51), we were unable to systematically update our data in a timely fashion to include fiscal year 2007 statistics. 3. We disagree that our report gives insufficient attention to resource increases at DOJ for IP enforcement. Our report discusses the creation and growing number of CHIP units and also notes that the number of Assistant U.S. Attorneys trained to prosecute IP cases has grown in recent years (see p. 20). 4. We agree with DOJ that statistics alone are not sufficient to accurately show the quality of improvements in IP enforcement activity. This is why we recommend that DOJ and the other agencies develop IP enforcement performance measures and targets to more systematically measure and report on their efforts. 
While the examples that DOJ provides of recent enforcement cases are useful illustrations of the types of enforcement activity that DOJ has undertaken, they do not provide a complete picture of DOJ’s overall efforts over time. For example, as we state in our report, agencies including DOJ could analyze the types of IP cases they most commonly prosecute or could report on the number of cases they have prosecuted involving IP crimes that posed a health and safety risk (see p. 32). 5. In section B of its letter (pp. 4-5), DOJ addresses the issue of whether additional IP enforcement resources necessarily result in more IP prosecutions. On page 5 of its letter, DOJ provides information that demonstrates a correlation between increased IP resources in two U.S. Attorney’s Offices and the number of IP cases prosecuted by those units, but then goes on to state that it would not necessarily conclude, as it said GAO did, that more prosecutors in a district result in more prosecutions. We agree that existing data across all U.S. Attorney’s Offices with CHIP units do not necessarily show a high correlation between increased CHIP unit resources and increased IP prosecutions, and removed this language. We modified our report to note that various factors, including crime levels, can affect the level of IP enforcement activity (see p. 33). 6. We modified our report to state that DOJ reviewed its data on U.S. Attorney’s Office prosecutions when deciding where to place additional CHIP units. However, at no time during this audit did DOJ indicate, or provide documentation reflecting, that it routinely analyzed IP prosecution data by district. We commend DOJ for examining the fiscal year 2006 IP enforcement activity of two of the CHIP units. Our analysis of DOJ’s data on IP enforcement activity by all 94 U.S. Attorney’s Offices showed a mix of activity among field offices across the 6-year period, including those with CHIP units.
We believe that conducting such analysis on a more systematic basis can better inform DOJ about whether its allocation of resources is appropriate and not just inform placement of CHIP units. 7. See comment 5. 8. We mentioned in the draft report, under our discussion of agency priorities, that DOJ has established some goals related to its IP enforcement efforts that are contained in an internal agency document not available to the public (see p. 15). We added information to refer again to this in our discussion of performance measures (see p. 35). However, we disagree that DOJ’s Task Force report is replete with goals and measures. The Task Force report makes multiple recommendations for improving IP enforcement efforts, but recommendations are not the same as performance goals, and the report does not contain performance measures. Moreover, many of these recommendations are task-oriented actions rather than outcome-oriented. For example, one of DOJ’s Task Force recommendations is to prosecute aggressively intellectual property offenses that endanger the public’s health or safety; yet, DOJ does not provide any details on how it plans to achieve this recommendation and how it will measure its progress. Further, as we report, DOJ has not taken steps to capture enforcement statistics to assess its progress in this area. As we discuss in our report, strategic planning and assessment require agencies to articulate outcome-oriented goals and objectives and to develop performance measures and targets that will enable them and others to determine whether they are making progress toward these goals. We added language to our report to better define outcome-oriented performance measures. 9. We added information to the report to further explain the challenges associated with setting performance measures and targets in the law enforcement area (see p. 35).
Our example of a performance measure and target was not intended to suggest that the agencies should adopt numerical case quotas or take any steps that would otherwise negatively affect the quality of their investigations. However, because it was interpreted in this way, and distracted attention from the more important discussion of adopting appropriate performance measures and targets, we removed the example. We continue to believe that DOJ can develop reasonable and acceptable measures and targets for IP enforcement. 10. See comments 1 and 4. The following are GAO’s comments on the Department of Health and Human Services letter dated February 25, 2008. 1. While we acknowledged in our draft report that setting performance measures and targets in a law enforcement area is difficult, we added information to further explain why setting such measures and targets is a sensitive issue (see p. 35). We continue to believe that setting performance measures and targets is important, even in the law enforcement environment. However, because FDA’s primary mission is to protect public health and safety, we reconsidered our recommendation that FDA set law enforcement-related measures and targets, and no longer direct this particular recommendation to FDA. 2. We modified our report to provide additional information on the definition of output and outcome performance measures and targets (see p. 35). Our example of a performance measure and target was not intended to suggest that the agencies should adopt numerical case quotas or take any steps that would otherwise negatively affect the quality of their investigations. However, because it was interpreted in this way, and distracted attention from the more important discussion of adopting appropriate performance measures and targets, we removed the example. 3. 
We commend FDA for monitoring the number of criminal investigations to identify any trends in product areas and to develop an understanding of the scope of counterfeiting in those areas. FDA mentioned for the first time in December 2007 that it conducts such analysis, but given the challenges we faced in obtaining sufficiently reliable data from all agencies for the period we studied (see appendix I, pp. 48-51), we did not ask FDA to provide this analysis to us. Given the increasing threat posed by IP-infringing products that affect public health and safety, we believe it is important that the government improve its understanding of this threat. Therefore, we modified our recommendation to ICE, FBI, and DOJ that agencies conduct analysis of their IP enforcement outcomes to also address this recommendation to FDA and to clarify that such analysis should be done systematically (see p. 43). Christine Broderick, Assistant Director; Shirley Brothwell; and Adrienne Spahr made significant contributions to this report. Virginia Chanley, Karen Deans, Ernie Jackson, Mark Molino, Jackie Nowicki, Dimple Pajwani, Suneeti Shah, Jena Sinkfield, Stephen Caldwell, Tony DeFrank, Rebecca Gambler, Michael Simon, Tom Costa, and Jennifer Young also provided assistance.

Federal law enforcement actions against criminals who manufacture and distribute counterfeit and pirated goods are important to enforcing intellectual property (IP) rights and protecting Americans from unsafe or substandard products. GAO was asked to: (1) examine key federal agencies' roles, priorities, and resources devoted to IP-related enforcement; (2) evaluate agencies' IP-related enforcement statistics and achievements; and (3) examine the status of the National Intellectual Property Rights Coordination Center. GAO reviewed relevant documents, interviewed officials in five key agencies, and analyzed agency IP enforcement data from fiscal years 2001 through 2006.
Five key agencies play a role in IP enforcement, and their enforcement functions include seizures, investigations, and prosecutions. While IP enforcement is generally not their highest priority, IP crimes with a public health and safety risk, such as production of counterfeit pharmaceuticals, are an IP enforcement priority at each agency. Determining agencies' IP enforcement resources is challenging because few staff are dedicated to this area, and not all agencies track staff time spent on IP enforcement. Agencies carry out some enforcement actions through their headquarters, but significant enforcement takes place in the field. Federal enforcement actions generally increased during fiscal years 2001-2006, but the key agencies have not taken key steps to assess their achievements. For example, most have not systematically analyzed their IP enforcement statistics to inform management and resource allocation decisions, collected data on their efforts to address IP crimes that affect public health and safety, or established IP-related performance measures or targets to assess their achievements. Also, Customs and Border Protection's enforcement of exclusion orders, which stop certain IP-infringing goods from entering the country, has been limited due to certain procedural weaknesses. The National Intellectual Property Rights Coordination Center, an interagency mechanism created to coordinate federal investigative efforts, has not achieved its mission and staff levels have decreased. Currently, only one agency participates in the center's activities, which focus on private sector outreach. Agencies have lacked a common understanding of the center's purpose and of agencies' roles. The center's upcoming move to a new location presents an opportunity to reconsider its mission.
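GAO's comment 3 on CBP's letter describes a simple port-level metric: each port's IP seizure value divided by its IP import value, compared against the group average. The following is a minimal sketch of that comparison; the port names and dollar figures are invented for illustration and are not CBP data.

```python
# Hypothetical sketch of the port-level seizure-rate comparison described
# in GAO's comment on CBP. All names and figures below are invented.

ports = {
    # port: (value of IP seizures, value of IP imports), in dollars
    "Port A": (2_000_000, 400_000_000),
    "Port B": (500_000, 250_000_000),
    "Port C": (9_000_000, 600_000_000),
}

# Seizure rate for each port: value of IP seizures over value of IP imports.
rates = {p: seized / imported for p, (seized, imported) in ports.items()}

# Group average rate, taken here as the mean of the per-port rates.
group_avg = sum(rates.values()) / len(rates)

# Ports whose rate falls below the group average -- the pattern GAO's
# fiscal year 2005 analysis found for many of the top 25 IP-importing ports.
below_avg = sorted(p for p, r in rates.items() if r < group_avg)
print(below_avg)  # → ['Port A', 'Port B']
```

This kind of comparison is one way an agency could use existing data to understand variations in enforcement outcomes across field locations and inform resource allocation, as GAO suggests.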
Since the end of the Cold War, there has been a shift in the way reserve forces have been used. Previously, reservists were viewed primarily as an expansion force that would supplement active forces during a major war. Today, reservists not only supplement but also replace active forces in military operations worldwide. In fact, DOD has stated that no significant operation can be conducted without reserve involvement. As shown in figure 1, reserve participation in military operations spiked in fiscal years 1991 (Desert Shield and Desert Storm) and 2002 (Noble Eagle and Enduring Freedom). There have been wide differences in the operational tempos of individual reservists in certain units and occupations. Prior to the current mobilization, personnel in the fields of aviation, special forces, security, intelligence, psychological operations, and civil affairs were in high demand, experiencing operational tempos that were two to seven times higher than those of the average reservist. Since September 2001, operational tempos have increased significantly for reservists in all of DOD’s reserve components due to the partial mobilization in effect to support operations Noble Eagle and Enduring Freedom. For each year between fiscal years 1997 and 2002, the reserves on the whole achieved at least 99 percent of their authorized end strength. In 4 of these 6 years, they met at least 100 percent of their enlistment goals. During this time period, enlistment rates fluctuated from component to component. Overall attrition rates have decreased for five of DOD’s six reserve components. Between fiscal years 1997 and 2002, only the Army National Guard experienced a slight overall increase in attrition. The attrition data suggest there has not been a consistent relationship between a component’s average attrition rate for a given year and the attrition rate for that component’s high demand capabilities (which include units and occupations). 
Attrition rates for high demand capabilities were higher than average in some cases but lower in others. Aviation in the Army National Guard, for instance, has had higher than average attrition for 4 of the 5 years it was categorized as a high demand capability. Preliminary analysis of income changes reported by reservists who mobilized or deployed for past military operations indicates that they experienced widely varying degrees of income loss or gain. The source for this analysis is DOD’s 2000 Survey of Reserve Component Personnel, which predates the mobilization that began in September 2001. The data show that 41 percent of drilling unit members reported income loss during their most recent mobilization or deployment, while 30 percent reported no change and 29 percent reported an increase in income (see table 1). Based on the survey data, DOD estimated that the average total income change for all members (including losses and gains) was a net loss of almost $1,700. This figure should be considered with caution because of the estimating methodology that was used and because it is unclear what survey respondents considered as income loss or gain in answering this question. Further, reservists are mobilized or deployed for varying lengths of time, which can affect their overall income loss or gain. About 31 percent of all reservists who had at least one mobilization or deployment had been mobilized or deployed for less than 1 month. For the entire population, members spent an estimated 3.6 months mobilized or deployed for their most recent mobilization. DOD’s preliminary analysis of the survey data shows that certain groups reported greater losses of income on average. Self-employed reservists reported an average income loss of $6,500. Physicians/registered nurses, on the whole, reported an average income loss of $9,000. Physicians/registered nurses in private practice reported an average income loss of $25,600.
Income loss also varied by reserve component and pay grade group. Average self-reported income loss ranged from $600 for members of the Air National Guard up to $3,800 for Marine Corps Reservists. Senior officers reported an average income loss of $5,000 compared with $700 for junior enlisted members. When asked to rank income loss among other problems they have experienced during mobilization or deployment, about half of drilling unit members ranked it as one of their most serious problems. DOD’s preliminary analysis presents little data on those groups who reported overall income gain. Two groups who were identified as reporting a gain were clergy and those who worked for a family business without pay. Concerns were raised following the 1991 Gulf War that income loss would adversely affect retention of reservists. According to a 1991 DOD survey of reservists activated during the Gulf War, economic loss was widespread across all pay grades and military occupations. In response to congressional direction, DOD in 1996 established the Ready Reserve Mobilization Income Insurance Program, an optional, self-funded income insurance program for members of the Ready Reserve ordered involuntarily to active duty for more than 30 days. Reservists who elected to enroll could obtain monthly coverage ranging from $500 to $5,000 for up to 12 months within an 18-month period. Far fewer reservists than DOD expected enrolled in the program. Many of those who enrolled were activated for duty in Bosnia and, thus, entitled to almost immediate benefits from the program. The program was terminated in 1997 after going bankrupt. We reported in 1997 that private sector insurers were not interested in underwriting a reserve income mobilization insurance program due to concerns about actuarial soundness and unpredictability of the frequency, duration, and size of future call-ups. 
Certain coverage features would violate many of the principles that private sector insurers usually require to protect themselves from adverse selection. These include voluntary coverage and full self-funding by those insured, the absence of rates that differentiated between participants based on their likelihood of mobilization, the ability to choose coverage that could result in full replacement of their lost income rather than those insured bearing some loss, and the ability to obtain immediate coverage shortly before an insured event occurred. According to DOD officials, private sector insurers remain unsupportive of a new reserve income insurance mobilization program and the amount of federal underwriting required for the program is prohibitive. The Department has no plans to implement a new mobilization insurance program. A 1998 study by RAND found that income loss, while widespread during the Gulf War, did not have a measurable effect on enlisted retention. The study was cautiously optimistic that mobilizing the reserves under similar circumstances in the future would not have adverse effects on recruiting and retention. However, the effects of future mobilizations can depend on the mission, the length of time reservists are deployed, the degree of support from employers and family members, and other factors. Certain federal protections, pay policies, and employer practices can help to alleviate financial hardship during deployment. For example, the Soldiers’ and Sailors’ Civil Relief Act caps debt interest rates at 6 percent annually. Income that servicemembers earn while mobilized in certain combat zones is tax-free. For certain operations, DOD also authorized reservists to receive both full housing allowances and per diem for their entire period of activation. In addition, some employers make up the difference between civilian and military pay for their mobilized employees. This practice varies considerably among employers. 
Servicemembers can also obtain emergency assistance in the form of interest-free loans or grants from service aid societies to pay for basic living expenses such as food or rent during activation. DOD is exploring debt management alternatives, such as debt restructuring and deferment of principal and interest payments, as ways to address income loss. The Army has proposed a new special pay targeting critical health care professionals in the reserves who are in private practice and are deployed involuntarily beyond the established rotational schedule. Reservists who have been activated for previous contingency operations have expressed concerns about the additional burdens placed on their families while they are gone. More than half of all reservists are married and about half have children or other legal dependents. According to the 2000 survey, among the most serious problems reservists said they experienced when mobilized or deployed are the burden placed on their spouse and problems created for their children. The 1991 Gulf War was a milestone event that highlighted the importance of reserve family readiness. Lessons learned showed that families of activated reservists, like their active duty counterparts, may need assistance preparing wills, obtaining power of attorney, establishing emergency funds, and making child care arrangements. They may also need information on benefits and entitlements, military support services, and their reemployment rights. DOD has recognized that family attitudes affect reserve member readiness, satisfaction with reserve participation, and retention. Military members who are preoccupied with family issues during deployments may not perform well on the job, which, in turn, negatively affects the mission. Research has shown that families of reservists who use family support services and who are provided information from the military cope better during activations.
Under a 1994 DOD policy, the military services must “ensure National Guard and Reserve members and their families are prepared and adequately served by their services’ family care systems and organizations for the contingencies and stresses incident to military service.” Although activated reservists and their family members are eligible for the same family support services as their active duty counterparts, they may lack knowledge about or access to certain services. The 2000 DOD survey suggests that more than half of all reservists either believe that family support services are not available to them or do not know whether such services are available. Table 2 shows drilling unit members’ responses on the availability of selected programs and services. According to DOD officials, operations Noble Eagle and Enduring Freedom have highlighted the fact that not all reserve families are prepared for potential mobilization and deployment. They told us that since many families never thought their military members would be mobilized, families had not become involved in their family readiness networks. DOD has found that the degree to which reservists are aware of family support programs and benefits varies according to component, unit programs, command emphasis, reserve status, and the willingness of the individual member to receive or seek out information. Results from the 2000 DOD survey show that about one-fourth of drilling unit members said their arrangements for their dependents were not realistically workable for deployments lasting longer than 30 days. Furthermore, about 4 of every 10 drilling unit members thought it was unlikely or very unlikely that they would be mobilized or deployed in the next 5 years. Again, this survey predates the events of September 11, 2001, and the ensuing mobilization. 
Among the key challenges in providing family support are the long distances that many reservists live from installations that offer family support services, the difficulty in persuading reservists to share information with their families, the unwillingness of some reservists and their families to take the responsibility to access available information, conflicting priorities during drill weekends that limit the time spent on family support, and a heavy reliance on volunteers to act as liaisons between families and units. In 2000, about 40 percent of drilling unit members lived 50 miles or farther from their home units. DOD has recognized the need for improved outreach and awareness. For example, the Department has published benefit guides for reservists and family members and has enhanced information posted on its Web sites. DOD published a “Guide to Reserve Family Member Benefits” that informs family members about military benefits and entitlements and a family readiness “tool kit” to enhance communication about pre-deployment and mobilization information among commanders, servicemembers, family members, and family program managers. Each reserve component also established family program representatives to provide information and referral services, with volunteers at the unit level providing additional assistance. The U.S. Marine Corps began offering an employee assistance program in December 2002 to improve access to family support services for Marine Corps servicemembers and their families who reside far from installations. Through this program, servicemembers and their families can obtain information and referrals on a number of family issues, including parenting; preparing for and returning from deployment; basic tax planning; legal issues; and stress. Notwithstanding these efforts, we believe, based on our review to date, that outreach to reservists and their families will likely remain a continuing challenge for DOD. 
Reservists who are mobilized for a contingency operation are confronted with health care choices and circumstances that are more complex than those faced by active component personnel. Reservists’ decisions are affected by a variety of factors—whether they or their spouses have civilian health coverage, the amount of support civilian employers would be willing to provide with health care premiums, and where they and their dependents live. If dependents of reservists encounter increased future difficulties in maintaining their civilian health insurance due to problems associated with longer mobilizations and absence from civilian employment, they may rely on DOD for their health care benefits to a greater degree than they do today. When activated for a contingency operation, reservists and their dependents are eligible for health care benefits under TRICARE, DOD’s managed health care program. TRICARE offers beneficiaries three health care options: Prime, Standard, and Extra. TRICARE Prime is similar to a private HMO plan and does not require enrollment fees or co-payments. TRICARE Standard, a fee-for-service program, and TRICARE Extra, a preferred provider option, require co-payments and annual deductibles. None of these three options require reservists to pay a premium. Benefits under TRICARE are provided at more than 500 military treatment facilities worldwide, through a network of TRICARE-authorized civilian providers, or through non-network physicians who will accept TRICARE reimbursement rates. Reservists who are activated for 30 days or less are entitled to receive medical care for injuries and illnesses incurred while on duty. Reservists who are placed on active duty orders for 31 days or more are automatically enrolled in TRICARE Prime and receive most care at a military treatment facility. Family members of reservists who are activated for 31 days or more may obtain coverage under TRICARE Prime, Standard, or Extra. 
Family members who participate in Prime obtain care at either a military treatment facility or through a network provider. Under Standard or Extra, beneficiaries must use either a network provider or a non-network physician who will accept TRICARE rates. Upon release from active duty that extended for at least 30 days, reservists and their dependents are entitled to continue their TRICARE benefits for 60 days or 120 days, depending on the members’ cumulative active duty service time. Reservists and their dependents may also elect to purchase extended health care coverage for a period of at least 18, but no more than 36, months under the Continued Health Care Benefit Program. Despite the availability of DOD health care benefits with no associated premium, many reserve family members elect to maintain their civilian health care insurance during mobilizations. In September 2002, we reported that, according to DOD’s 2000 survey, nearly 80 percent of reservists reported having health care coverage when they were not on active duty. Of reservists with civilian coverage, about 90 percent maintained it during their mobilization. Reservists we interviewed often told us that they maintained this coverage to better ensure continuity of health benefits and care for their dependents. Many reservists who did drop their civilian insurance and whose dependents did use TRICARE reported difficulties moving into and out of the system, finding a TRICARE provider, establishing eligibility, understanding TRICARE benefits, and knowing where to go for assistance when questions and problems arose. While reserve and active component beneficiaries report similar difficulties using the TRICARE system, these difficulties are magnified for reservists and their dependents. For example, 75 percent of reservists live more than 50 miles from military treatment facilities, compared with 5 percent of active component families. 
As a result, access to care at military treatment facilities becomes more challenging for dependents of reservists than for their active component counterparts. Unlike active component members, reservists may also transition into and out of TRICARE several times throughout a career. These transitions create additional challenges in ensuring continuity of care, reestablishing eligibility in TRICARE, and familiarizing or re-familiarizing themselves with the TRICARE system. Reservists are also not part of the day-to-day military culture and, according to DOD officials, generally have less incentive to become familiar with TRICARE because it becomes important to them and their families only if they are mobilized. Furthermore, when reservists are first mobilized, they must accomplish many tasks in a compressed period. For example, they must prepare for an extended absence from home, make arrangements to be away from their civilian employment, obtain military examinations, and ensure their families are properly registered in the Defense Enrollment Eligibility Reporting System (DOD’s database system maintaining benefit eligibility status). It is not surprising that many reservists, when placed under compressed time frames and high-stress conditions, experience difficulties when transitioning to TRICARE. We recommended in September 2002 that DOD (1) ensure that reservists, as part of their ongoing readiness training, receive information and training on health care coverage available to them and their dependents when mobilized and (2) provide TRICARE assistance during mobilizations targeted to the needs of reservists and their dependents. DOD has added information targeted at reservists to its TRICARE Web site and last month, in response to our recommendation, developed a TRICARE reserve communications plan aimed at outreach and education of reservists and their families. The TRICARE Web site is a robust source of information on DOD’s health care benefits.
The Web site contains information on all TRICARE programs, TRICARE eligibility requirements, briefing and brochure information, location of military treatment facilities, toll-free assistance numbers, network provider locations and other general network information, beneficiary assistance counselor information, and enrollment information. There is also a section of the Web site devoted specifically to reservists, with information and answers to questions that reservists are likely to have. Results from DOD’s 2000 survey show that about 9 of every 10 reservists have access to the Internet. The TRICARE reserve communications plan’s main goals are to educate reservists and their family members on health care and dental benefits available to them and to engage key communicators in the active and reserve components. The plan identifies a number of tactics for improving how health care information is delivered to reservists and their families. Materials are delivered through direct mailing campaigns, fact sheets, brochures, working groups, and briefings to leadership officials who will brief reservists and to reservists themselves. The plan identifies target audiences and key personnel for information delivery and receipt, as well as methods of measurement that will help gauge the degree to which information is being requested and received. We plan to look at the TRICARE reserve communications plan in more detail as we continue our study. Under DOD authorities in the National Defense Authorization Acts for 2000 and 2001, DOD instituted several demonstration programs to provide financial assistance to reservists and family members. For example, DOD instituted the TRICARE Reserve Component Family Member Demonstration Project to reduce TRICARE costs and assist dependents of reservists in maintaining relationships with their current health care providers. 
Participants are limited to family members of reservists mobilized for operations Noble Eagle and Enduring Freedom. The demonstration project eliminates the TRICARE deductible and the requirement that dependents obtain statements saying that inpatient care is not available at a military treatment facility before they can obtain nonemergency treatment from a civilian hospital. In addition, DOD may pay a non-network physician up to 15 percent more than the current TRICARE rate. As we continue our study, we plan to review the results of the demonstration project and its impact on improving health care for reservists’ family members. Most reservists have civilian jobs. The 2000 survey shows that 75 percent of drilling unit members worked full-time in a civilian job. Of those with civilian jobs, 30 percent of reservists worked for government at the federal, state, or local level; 63 percent worked for a private sector firm; and 7 percent were self-employed or worked without pay in their family business or farm. The 2000 survey shows that one of the most serious problems reported by reservists in previous mobilizations and deployments was hostility from their supervisor. It should be noted, however, that many employers changed company policies or added benefits for deployed reservists after September 11, 2001. In a small nonprojectable sample of employers, we found that more than half provided health care benefits and over 40 percent provided pay benefits that are not required by the Uniformed Services Employment and Reemployment Rights Act of 1994. Maintaining employers’ continued support for their reservist employees will be critical if DOD is to retain experienced reservists in these times of longer and more frequent deployments. DOD has activities aimed at maintaining and enhancing employers’ support for reservists. 
The National Committee for Employer Support of the Guard and Reserve serves as DOD’s focal point in managing the department’s relations with reservists and their civilian employers. Two specific functions of this organization are to (1) educate reservists and employers concerning their rights and responsibilities and (2) mediate disputes that may arise between reservists and their employers. Although DOD has numerous outreach efforts, we have found that a sizeable number of reservists and employers were unsure about their rights and responsibilities. For example, a 1999 DOD survey found that 31 percent of employers were not aware of laws protecting reservists. In a recent report, we listed several factors that have hampered DOD’s outreach efforts to both employers and reservists. DOD has lacked complete information on who reservists’ employers are; it does not know the full extent of problems that arise between employers and reservists; and it has no assurance that its outreach activities are being implemented consistently. We recommended that DOD take a number of actions to improve the effectiveness of outreach programs and other aspects of reservist-employer relations. DOD concurred with most of these recommendations and has taken some actions. Most notably, DOD is moving ahead with plans to collect employer data from all of its reserve personnel. The data, if collected as planned, should help DOD inform all employers of their rights and obligations, identify employers for recognition, and implement proactive public affairs campaigns. However, DOD has not been as responsive to our recommendation that the services improve their compliance with DOD’s goal of issuing orders 30 days in advance of deployments so that reservists can notify their employers promptly. 
While our recommendation acknowledged that it will not be possible to achieve the 30-day goal in all cases, our recommendation was directed at mature, ongoing contingency mobilization requirements, such as the requirements that have existed in Bosnia since 1995. We believe that DOD needs to return to its 30-day goal following the current crisis or it will risk losing employer support for its reserve forces. I would like to take a moment, Mr. Chairman, to address the issue of reservists who are students. Almost one-fourth of drilling unit members responding to DOD’s 2000 survey said they were currently in school. While DOD has an active program to address problems that arise between reservists and their civilian employers, there is no federal statute to protect students. Student members of the reserves are not guaranteed refunds of tuition and fees paid for the term they cannot complete, and there is no federal statute for partial course credit or the right to return to the college or university upon completion of active service. Based on our recent work, we recommended that DOD add students as a target population to the mission and responsibilities of the National Committee for Employer Support of the Guard and Reserve, study in depth the problems related to deployments that student reservists have experienced, and determine what actions the National Committee for Employer Support of the Guard and Reserve might take to help students and their educational institutions. We feel DOD is giving this issue an appropriate amount of attention given its resources. Employer Support of the Guard and Reserve volunteers are directing students to available resources and the Office of the Assistant Secretary of Defense for Reserve Affairs has added student information and hyperlinks to its official Web site. 
One available resource, for example, is the Servicemembers Opportunity Colleges, which has volunteered to mediate any disputes that arise between reservists and their schools. In addition, 12 states have enacted laws or policies to protect student reservists since our report was issued last June, making a current total of 15 states with such laws or policies. The current reserve retirement system dates back to 1948 with the enactment of the Army and Air Force Vitalization and Retirement Equalization Act. The act established age 60 as the age at which reserve retirees could start drawing their retirement pay. At the time the act was passed, age 60 was the minimum age at which federal civil service employees could voluntarily retire. Active component retirees start drawing their retirement pay immediately upon retirement. Several proposals have been made to change the reserve retirement eligibility age. In 1988, the 6th Quadrennial Review of Military Compensation concluded that the retirement system should be changed to improve retention of mid-career personnel and encourage reservists who lack promotion potential or critical skills to voluntarily leave after 20 years of service. The study recommended a two-tier system that gives reserve retirees the option of electing to receive a reduced annuity immediately upon retirement or waiting until age 62 to begin receiving retirement pay. Recent legislative proposals have called for lowering the retirement pay eligibility age from 60 to 55, establishing a graduated annuity, or establishing an immediate annuity similar to that in the active duty military retirement system. Mr. Chairman, I would like to make two observations about reforming the reserve retirement system. First, equity between reservists and active duty personnel is one consideration in assessing competing retirement systems, but it is not the only one. 
Other important considerations are the impact of the retirement system on the age and experience distribution of the force, its ability to promote flexibility in personnel management decisions and to facilitate integration between the active and reserve components, and the cost. Changes to the retirement system could prove to be costly. Last year, the Congressional Budget Office estimated that lowering the retirement pay eligibility age from age 60 to 55 would cost $26.6 billion over 10 years. Second, DOD currently lacks critical data needed to assess alternatives to the existing retirement system. According to a 2001 study conducted for the 9th Quadrennial Review of Military Compensation, DOD should (1) assess whether the current skill, experience, and age composition of the reserves is desirable and, if not, what it should look like now and in the future and (2) develop an accession and retention model to evaluate how successful varying combinations of compensation and personnel management reforms would be in moving the reserves toward that preferred composition. DOD has contracted with RAND and the Logistics Management Institute to study military retirement. RAND will review alternative military retirement systems recommended by past studies, develop a model of active and reserve retirement and retention, analyze their likely effects on the retirement benefits that individuals can expect to receive, and identify and analyze the obstacles and issues pertaining to the successful implementation and therefore the viability of these alternatives. The Logistics Management Institute will assess alternative retirement systems with a focus on portability, vesting, and equity. These studies are looking at seven alternatives to the reserve retirement system. Preliminary results from these studies are expected later this year. As discussed with your offices, we plan to review the reserve retirement system in the future. Mr. Chairman, this completes our prepared statement. 
We would be happy to respond to any questions you or other members of the Subcommittee may have at this time. | Since the end of the Cold War, there has been a shift in the way reserve forces have been used. Previously, reservists were viewed primarily as an expansion force that would supplement active forces during a major war. Today, reservists not only supplement but also replace active forces in military operations worldwide. Citing the increased use of the reserves to support military operations, House Report 107-436 accompanying the Fiscal Year 2003 National Defense Authorization Act directed GAO to review compensation and benefits for reservists. In response, GAO is reviewing (1) income protection for reservists called to active duty, (2) family support programs, and (3) health care access. For this testimony, GAO was asked to discuss its preliminary observations. GAO also was asked to discuss the results of its recently completed review concerning employer support for reservists. The preliminary results of our review indicate that reservists experience widely varying degrees of income loss or gain when they are called up for a contingency operation. While income loss data for current operations Noble Eagle and Enduring Freedom were not available, data for past military operations show that 41 percent of drilling unit members reported income loss, while 30 percent reported no change and 29 percent reported an increase in income. This information is based on self-reported survey data for mobilizations or deployments of varying lengths of time. As would be expected, the data indicate that certain groups, such as medical professionals in private practice, tend to report much greater income loss than the average estimated for all reservists. 
Although reservists called up to support a contingency operation are generally eligible for the same family support and health care benefits as active component personnel, reservists and their families face challenges in understanding and accessing their benefits. Among the challenges, reservists typically live farther from military installations than their active duty counterparts, are not part of the day-to-day military culture, and may change benefit eligibility status many times throughout their career. Some of these challenges are unique to reservists; others are also experienced by active component members but may be magnified for reservists. Outreach to reservists and their families is likely to remain a continuing challenge for DOD in the areas of family support and health care, and we expect to look at DOD's outreach efforts in more detail as we continue our study. Outreach is also a critical component of maintaining and enhancing employers' support for reservists. Although DOD has numerous outreach efforts, we found that a sizeable number of reservists and employers were unsure about their rights and responsibilities. For example, a 1999 DOD survey found that 31 percent of employers were not aware of laws protecting reservists. Several factors have hampered DOD's outreach efforts to both employers and reservists. However, DOD is taking positive actions in this area, such as moving ahead with plans to collect employer data from all reserve personnel. |
The Administrative Office of the U.S. Courts is an organization within the judicial branch which serves as the central support entity for federal courts, and is supervised by the Judicial Conference of the United States. The Judicial Conference serves as the judiciary’s principal policy-making body and recommends national policies and legislation, including recommending additional judgeships to Congress. The U.S. Courts Design Guide (Design Guide) specifies the judiciary’s criteria for designing new court facilities and sets the space and design standards for court-related elements of courthouse construction. In 1993, the judiciary also developed a space planning program called AnyCourt to determine the amount of court-related space the judiciary will request for a new courthouse based on Design Guide standards and estimated staffing levels. GSA and the judiciary plan new federal courthouses based on the judiciary’s estimated 10-year judge and space requirements. For courthouses that are selected for construction, GSA typically submits two detailed project descriptions, or prospectuses, for congressional authorization: one for site and design and the other for construction. Prospectuses are submitted to the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure for authorization and Congress appropriates funds for courthouse projects, often at both the design and construction phases. GSA manages the construction contract and oversees the work of the construction contractor. After courthouses are occupied, GSA charges the judiciary and any other tenants rent for the occupied space and for their respective share of common areas. Thirty-two of the 33 federal courthouses completed since 2000 include extra square feet of space, totaling 3.56 million square feet—overall, this space represents about 9 average-sized courthouses. 
The estimated cost to construct this extra space, when adjusted to 2010 dollars, is $835 million, and the annual cost to rent, operate, and maintain it is $51 million. The extra space and its causes are as follows: 1.7 million square feet caused by construction in excess of the congressionally authorized size; 887,000 extra square feet caused by the judiciary overestimating the number of judges the courthouses would have in 10 years; and 946,000 extra square feet caused by district and magistrate judges not sharing courtrooms. In addition to higher construction costs, the extra square footage in these 32 courthouses results in higher annual operations and maintenance costs, which are largely passed on to the judiciary and other tenants as rent. Based on our analysis of the judiciary’s rent payments to GSA for these courthouses at fiscal year 2009 rental rates, the extra courtrooms and other judiciary space increase the judiciary’s annual rent payments by $40 million. In addition, our analysis estimates that the extra space cost $11 million in fiscal year 2009 to operate and maintain. Typically, operations and maintenance costs represent from 60 to 85 percent of the costs of a facility over its lifetime, while design and construction costs represent about 5 to 10 percent of these costs. Therefore, the ongoing operations and maintenance costs for the extra square footage are likely to total considerably more in the long run than the construction costs for this extra square footage. GSA cited concerns with our methodology. Our methodology applied GSA’s policies and data directly from original documents and sources, and our cost estimation methodology balanced higher and lower cost construction spaces to create a conservative estimate of the costs associated with the extra space in courthouses. We believe that our findings are presented in a fair and accurate way and illustrate how past problems with the courthouse program could affect future courthouse projects. 
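The component and cost figures above can be cross-checked with simple arithmetic. The sketch below uses only the totals reported in this testimony; the per-square-foot rates it derives are illustrative calculations, not figures reported by GAO, and the component figures are rounded, so their sum falls slightly below the 3.56 million square foot total.

```python
# Rough check of the extra-space arithmetic reported in the testimony.
# Component figures are rounded, so the sum is approximate.

excess_of_authorized = 1_700_000   # sq ft built above congressional authorization
overestimated_judges = 887_000     # sq ft from overestimating 10-year judge counts
no_courtroom_sharing = 946_000     # sq ft from not planning courtroom sharing

total_extra = excess_of_authorized + overestimated_judges + no_courtroom_sharing
print(f"Components sum to {total_extra:,} sq ft (reported total: 3,560,000)")

construction_cost = 835_000_000    # constant 2010 dollars
annual_cost = 51_000_000           # annual rent plus operations and maintenance
print(f"Implied construction cost: ${construction_cost / 3_560_000:,.0f} per sq ft")
print(f"Implied annual cost: ${annual_cost / 3_560_000:,.2f} per sq ft")
```

The roughly $235 per square foot construction figure this implies is a blended average across all 32 courthouses; actual per-building costs vary with location and design.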
Twenty-seven of the 33 federal courthouses constructed since 2000 exceed their congressionally authorized size, resulting in about 1.7 million more square feet than authorized. Fifteen of the 33 courthouses exceed their congressionally authorized size by 10 percent or more. In all 7 of the case study courthouses, the increases in building common and other space were proportionally larger than the increases in tenant space, leading to a lower building efficiency than GSA’s target of 67 percent. Efficiency is important because, for a given amount of tenant space, meeting the efficiency target helps control a courthouse’s gross square footage and therefore its costs. According to GSA officials, controlling the gross square footage of a courthouse is the best way to control construction costs. Twelve of the 15 courthouses that exceeded the congressionally authorized gross square footage by 10 percent or more also had total project costs that exceeded the total project cost estimate provided to congressional authorizing committees. Four of the 15 courthouses had total project costs that exceeded the estimate provided to the congressional authorizing committees, at the construction phase, by about 10 percent or more. GSA’s annual appropriations acts include a provision stating that GSA may increase spending for a project in an approved prospectus by more than 10 percent if GSA obtains advance approval from the Committee on Appropriations. While GSA sought approval from the appropriations committees for the cost increases incurred for these 4 courthouses, GSA did not explain to these committees that the courthouses were larger than authorized and therefore did not attribute any of the cost increase to this difference. However, there is no statutory requirement for GSA to notify congressional authorizing or appropriations committees if the size exceeds the congressionally authorized square footage. 
GSA lacked sufficient controls to ensure that the 33 courthouses were constructed within the congressionally authorized gross square footage. Initially, GSA had not established a consistent policy for how to measure gross square footage. GSA established a policy for measuring gross square footage by 2000, but has not ensured that this space measurement policy was understood and followed. Moreover, GSA has not demonstrated it is enforcing this policy because all 6 courthouses completed since 2007 exceed their congressionally authorized size. According to GSA officials, the agency did not focus on ensuring that the authorized gross square footage was met in the design and construction of courthouses until 2007. According to a GSA official, at times, courthouses were designed to meet various design goals without an attempt to limit the size of the building common or other space to the square footage allotted in the plans provided to congressional authorizing committees – and these spaces may have become larger to serve a design goal as a result. Another element of GSA’s lack of oversight in this area was that GSA relied on the architect to validate that the courthouse’s design was within the authorized gross square footage without ensuring that the architect followed GSA’s policies for how to measure certain commonly included spaces, such as atriums. Although GSA officials emphasized that open space for atriums would not cost as much as space completely built out with floors, these officials also agreed that there are costs associated with constructing and operating atrium space. Though not a result of a lack of oversight, one additional contributor to the construction of more tenant space than planned is that the judiciary’s automated space planning tool, AnyCourt, incorporates a standard square footage requirement for each district courtroom. 
However, according to GSA’s space measurement policy, the amount of a courtroom’s square footage doubles if the courtroom spans two floors. Without a mechanism to adjust AnyCourt’s calculation of a planned courthouse’s square footage to reflect GSA’s space measurement policy when the design includes two- story courtrooms, GSA may not request sufficient gross square footage for courthouses with two-story courtrooms. Recently, GSA has taken some steps to improve its oversight of the courthouse construction process by clarifying its space measurement policies and increasing efforts to monitor the size of courthouse projects during the planning stages. In May 2009, GSA published a revised space assignment policy to clarify and emphasize its policies on counting square footage. In addition, according to GSA officials, GSA established a collaborative effort in 2008 between its Office of Design and Construction and its Real Estate Portfolio Management to establish policy and practices for avoiding inconsistencies. It is not yet clear whether these steps will establish sufficient oversight to ensure that courthouses are planned and constructed within the congressionally authorized square footage. Of the 33 courthouses built since 2000, 28 have reached or passed their 10- year planning period and 23 of those 28 courthouses have fewer judges than estimated. For these 28 courthouses, the judiciary has 119, or approximately 26 percent, fewer judges than the 461 it estimated it would have, resulting in approximately 887,000 extra square feet. The extra space includes courtroom and chamber suites as well as the proportional allocation of additional public, mechanical spaces, and sometimes secure, inside parking space in new courthouses. 
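A mechanism of the kind described above could be as simple as doubling the standard courtroom allotment whenever a design calls for a two-story courtroom. The following is a minimal sketch of that idea; the 2,400 square foot per-courtroom allotment is an assumed placeholder for illustration, not the actual Design Guide or AnyCourt value.

```python
# Illustrative sketch of adjusting planned courtroom square footage for
# two-story courtrooms, reflecting GSA's space measurement policy that a
# courtroom spanning two floors counts double. The standard allotment
# below is an assumed placeholder, not the actual Design Guide figure.

STANDARD_COURTROOM_SQFT = 2_400  # assumed per-courtroom allotment

def courtroom_sqft(num_courtrooms: int, num_two_story: int) -> int:
    """Total measured courtroom square footage, doubling two-story rooms."""
    if not 0 <= num_two_story <= num_courtrooms:
        raise ValueError("two-story count must be between 0 and total courtrooms")
    single = (num_courtrooms - num_two_story) * STANDARD_COURTROOM_SQFT
    double = num_two_story * 2 * STANDARD_COURTROOM_SQFT
    return single + double

# A 10-courtroom plan in which 4 courtrooms span two floors:
print(courtroom_sqft(10, 4))  # 6*2,400 + 4*4,800 = 33,600 sq ft
```

In this example, ignoring the two-story rooms would understate the measured courtroom space by 9,600 square feet, which is the kind of shortfall the recommendation is meant to prevent.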
We identified a variety of factors that led the judiciary to overestimate the number of judges it would have after 10 years, which include: Inaccurate caseload growth projections: In a 1993 report, we questioned the reliability of the caseload projection process the judiciary used. For this report, we were not able to determine the degree to which inaccurate caseload projections contributed to inaccurate judge estimates because the judiciary did not retain the historic caseload projections used in planning the courthouses. Judiciary officials at three of the courthouses we visited indicated that the estimates used in planning for these courthouses inadvertently overstated the growth in district case filings and, hence, the need for additional judges. Challenges predicting how many judges will be located in a courthouse in 10 years: It is difficult to predict, for example, when a judge will take a reduced caseload through senior status or leave the bench entirely. It is also challenging to project how many requested judgeships will be authorized, how many vacancies will be filled, and where new judges will be seated. The judiciary raised concerns that some extra space in courthouses exists because the judiciary did not receive all the new judge authorizations it requested. We recognize that some of the extra courtrooms reflect the historic trend that the judiciary has not received all the additional authorized judges it has requested. Our analysis indicates that courtroom sharing could have reduced the number of courtrooms needed in 27 of the 33 district courthouses built since 2000 by a total of 126 courtrooms—about 40 percent of the total number of district and magistrate courtrooms constructed since 2000. In total, not building these courtrooms, as well as their associated support, building common, and other spaces, would have reduced construction by approximately 946,000 square feet. 
Most courthouses constructed since 2000 have enough courtrooms for all of the district and magistrate judges to have their own courtrooms. According to the judiciary’s data, courtrooms are used for case-related proceedings only a quarter of the available time or less, on average. Using the judiciary’s data, we applied generally accepted modeling techniques to develop a computer model for sharing courtrooms. The model ensures sufficient courtroom time for all case-related activities; all time allotted to noncase-related activities, such as preparation time, ceremonies, and educational purposes; and all events cancelled or postponed within a week of the event. The model shows the following courtroom sharing possibilities: 3 district judges could share 2 courtrooms, 3 senior judges could share 1 courtroom, and 2 magistrate judges could share 1 courtroom with time to spare. During our interviews and convening of an expert panel on courtroom sharing, some judges remained skeptical of sharing and raised potential challenges to courtroom sharing, but other judges with sharing experience said they have overcome those challenges when necessary without postponing trials. The primary concern judges cited was the possibility that all courtrooms could be in use by other judges and a courtroom might not be available. To address this concern, we programmed our model to provide more courtroom time than necessary to conduct court business. Additionally, most judges with experience in sharing courtrooms agreed that court staff must work harder to coordinate with judges and all involved parties to ensure everyone is in the correct courtroom at the correct time. Judges who share courtrooms in one district also said that courtroom sharing coordination is easier when there is a great deal of collegiality among judges. Another concern about sharing courtrooms was how the court would manage when judges have long trials. 
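The sharing ratios the model identified (3 district judges per 2 courtrooms, 3 senior judges per courtroom, and 2 magistrate judges per courtroom) translate directly into a courtroom count for a given bench. The sketch below applies simple ceiling arithmetic to those ratios as an assumption for partial groups; the actual GAO model is a discrete event simulation of courtroom scheduling, not a fixed formula.

```python
import math

# Courtrooms needed under the sharing ratios from GAO's model:
# 3 district judges share 2 courtrooms, 3 senior judges share 1, and
# 2 magistrate judges share 1. Ceiling arithmetic for partial groups is
# an assumption; the actual model is a discrete event simulation.

def courtrooms_needed(district: int, senior: int, magistrate: int) -> int:
    district_rooms = math.ceil(district * 2 / 3)
    senior_rooms = math.ceil(senior / 3)
    magistrate_rooms = math.ceil(magistrate / 2)
    return district_rooms + senior_rooms + magistrate_rooms

# A hypothetical courthouse with 9 district, 3 senior, and 4 magistrate
# judges: one courtroom per judge would require 16 courtrooms, while
# sharing at the model's ratios requires
print(courtrooms_needed(9, 3, 4))  # 6 + 1 + 2 = 9 courtrooms
```

A reduction of this magnitude, applied across many courthouses, is consistent with the roughly 40 percent fewer district and magistrate courtrooms the model identified.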
However, when the number of total trials is averaged across the total number of judges, each judge has approximately 15 trials per year, with the median trial lasting 1 or 2 days. Therefore, it is highly unlikely that all judges in a courthouse will simultaneously have long trials. Another concern stated was that sharing courtrooms between district and magistrate judges was difficult due to differences in responsibilities and courtroom size. To address this concern, our model separated district and magistrate judges for sharing purposes. In 2008 and 2009, the Judicial Conference adopted sharing policies for future courthouses under which senior district and magistrate judges will share courtrooms at a rate of two judges per courtroom plus one additional duty courtroom for courthouses with more than two magistrate judges. Additionally, the conference recognized the greater efficiencies available in courthouses with many courtrooms and recommended that in courthouses with more than 10 district judges, district judges also share. Our model’s application of the judiciary’s data shows that more sharing opportunities are available. The judiciary stated that at the time the 33 courthouses we reviewed were planned, the judiciary’s policy was for judges not to share courtrooms and that it would be more appropriate for us to apply that policy. Our congressional requesters specifically asked that we consider how a courtroom sharing policy could have changed the amount of space needed in these courthouses. The judiciary also raised concerns with the assumptions and methodology used in developing the courtroom sharing model. We carefully documented the data and parameters throughout our report so that our model could be replicated by anyone with access to the judiciary’s data and familiarity with discrete event simulation. Our model provides one option for developing a sharing policy based on actual time during which courtrooms are scheduled and used. 
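The claim that all judges are unlikely to have long trials simultaneously can be explored with a small Monte Carlo sketch. All parameters below are illustrative assumptions drawn loosely from the figures in the testimony (about 15 trials per judge per year, most lasting 1 or 2 days, and an assumed 250 court days per year); the hypothetical 8-judge courthouse and trial-length distribution are not GAO's data.

```python
import random

# Monte Carlo sketch of how rarely every judge in a courthouse is in
# trial on the same day. All parameters are illustrative assumptions:
# ~15 trials per judge per year, mostly 1-2 days long, 250 court days.

random.seed(0)
COURT_DAYS, TRIALS_PER_YEAR, NUM_JUDGES, RUNS = 250, 15, 8, 2_000

def days_all_in_trial() -> int:
    """Simulate one year; count days on which every judge is in trial."""
    in_trial = [[False] * COURT_DAYS for _ in range(NUM_JUDGES)]
    for judge in range(NUM_JUDGES):
        for _ in range(TRIALS_PER_YEAR):
            start = random.randrange(COURT_DAYS)
            length = random.choice([1, 1, 2, 2, 3, 5])  # mostly short trials
            for day in range(start, min(start + length, COURT_DAYS)):
                in_trial[judge][day] = True
    return sum(all(day_column) for day_column in zip(*in_trial))

overlap_days = sum(days_all_in_trial() for _ in range(RUNS)) / RUNS
print(f"Average days/year with all {NUM_JUDGES} judges in trial: {overlap_days:.2f}")
```

Under these assumptions each judge is in trial on roughly 14 percent of court days, so the chance of all eight overlapping on any given day is vanishingly small, which is the intuition behind the model's sharing ratios.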
It is important for the federal judiciary to have adequate, appropriate, modern facilities to carry out judicial functions. However, the current process for planning and constructing new courthouses has resulted in the 33 federal courthouses built since 2000 being overbuilt by more than 3.5 million square feet. This extra space not only cost about $835 million in constant 2010 dollars to construct, but has additional annual costs of about $51 million in operations and maintenance and rent that will continue to strain GSA’s and the judiciary’s resources for years to come. This extra space exists because the courthouses, as built, are larger than those congressionally authorized; contain space for more judges than are in the courthouses at least 10 years after the space was planned, and, for the most part, were not planned with a view toward judges sharing courtrooms. Thus, in our report we recommended that the Administrator of GSA take the following three actions: Establish sufficient internal control activities to ensure that regional GSA officials understand and follow GSA’s space measurement policies throughout the planning and construction of courthouses. These control activities should allow for accurate comparisons of the size of a planned courthouse with the congressionally authorized gross square footage throughout the design and construction process. To avoid requesting insufficient space for courtrooms based on the AnyCourt model’s identification of courtroom space needs, establish a process, in cooperation with the Director of the Administrative Office of the U.S. Courts, by which the planning for the space needed per courtroom takes into account GSA’s space measurement policy related to two-story courtrooms when relevant. Report to congressional authorizing committees when the design of a courthouse exceeds the authorized size by more than 10 percent, including the reasons for the increase in size. 
We also recommended that the Director of the Administrative Office of the U.S. Courts, on behalf of the Judicial Conference of the United States, take the following three actions: Retain caseload projections for at least 10 years for use in analyzing their accuracy and incorporate additional factors into the judiciary’s 10-year judge estimates, such as past trends in obtaining judgeships. Expand nationwide courtroom sharing policies to more fully reflect the actual scheduling and use of district courtrooms. Distribute information to judges on positive practices judges have used to overcome challenges to courtroom sharing. GSA and the judiciary agreed with most of the recommendations, but expressed concerns with our methodology and key findings. GSA concurred with our recommendation to notify the appropriate Congressional committees when the square footage increase exceeds the maximum identified in the prospectus by 10 percent or more. GSA did not concur with our recommendation to establish internal controls to ensure that regional GSA officials understand and follow GSA’s space measurement policies throughout the planning and construction of courthouses, stating that its current controls and oversight are sufficient. The judiciary concurred with our recommendation to expand sharing policies based on a thorough and considered analysis of the data but raised concerns related to the applicability of our model as guidance for its system. The judiciary did not comment directly on its plans to retain caseload projections but stated that it will continue to look for ways to improve its planning methodologies. Finally, the judiciary did not provide comment on its intent to distribute information on the positive practices judges have used to overcome challenges to courtroom sharing. Mr. Chairman, this concludes our testimony. We are pleased to answer any questions you might have. For further information on this testimony, please contact Mark L. 
Goldstein, (202) 512-2834 or by e-mail at goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Keith Cunningham, Assistant Director; Susan Michal-Smith; and Jade Winfree. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The federal judiciary (judiciary) and the General Services Administration (GSA) are in the midst of a multi-billion dollar courthouse construction initiative, which has faced rising construction costs. For 33 federal courthouses completed since 2000, GAO examined (1) whether they contained extra space and any costs related to it; (2) how their actual size compares with the congressionally authorized size; (3) how their space based on the judiciary's 10-year estimates of judges compares with the actual number of judges; and (4) whether the level of courtroom sharing supported by the judiciary's data could have changed the amount of space needed in these courthouses. This testimony is based on GAO's June 2010 report; for that report, GAO analyzed courthouse planning and use data, visited courthouses, modeled courtroom sharing scenarios, and interviewed judges, GSA officials, and others. The 33 federal courthouses completed since 2000 include 3.56 million square feet of extra space consisting of space that was constructed (1) above the congressionally authorized size, (2) due to overestimating the number of judges the courthouses would have, and (3) without planning for courtroom sharing among judges. 
Overall, this space represents about 9 average-sized courthouses. The estimated cost to construct this extra space, when adjusted to 2010 dollars, is $835 million, and the annual cost to rent, operate, and maintain it is $51 million. Twenty-seven of the 33 courthouses completed since 2000 exceed their congressionally authorized size by a total of 1.7 million square feet. Fifteen exceed their congressionally authorized size by more than 10 percent, and 12 of these 15 also had total project costs that exceeded the estimates provided to congressional committees. However, there is no requirement to notify congressional committees about size overages. A lack of oversight by GSA, including not ensuring its space measurement policies were followed and a lack of focus on building courthouses within the congressionally authorized size, contributed to these size overages. For 23 of 28 courthouses whose space planning occurred at least 10 years ago, the judiciary overestimated the number of judges that would be located in them, causing them to be larger and costlier than necessary. Overall, the judiciary has 119, or approximately 26 percent, fewer judges than the 461 it estimated it would have. This leaves the 23 courthouses with extra courtrooms and chamber suites that, together, total approximately 887,000 square feet of extra space. A variety of factors contributed to the judiciary's overestimates, including inaccurate caseload projections, difficulties in projecting when judges would take senior status, and long-standing difficulties in obtaining new authorizations. However, the degree to which inaccurate caseload projections contributed to inaccurate judge estimates cannot be measured because the judiciary did not retain the historic caseload projections used in planning the courthouses. Using the judiciary's data, GAO designed a model for courtroom sharing, which shows that there is enough unscheduled courtroom time for substantial courtroom sharing. 
Sharing could have reduced the number of courtrooms needed in courthouses built since 2000 by 126 courtrooms--about 40 percent of the total number--covering about 946,000 square feet of extra space. Judges raised potential challenges to courtroom sharing, such as uncertainty about courtroom availability, but those with courtroom sharing experience overcame those challenges when necessary, and no trials were postponed. The judiciary has adopted policies for future sharing for senior and magistrate judges, but GAO's analysis shows that additional sharing opportunities are available. For example, GAO's courtroom sharing model shows that there is sufficient unscheduled time for 3 district judges to share 2 courtrooms and 3 senior judges to share 1 courtroom. The recommendations in GAO's related report include that GSA ensure courthouses are within their authorized size or provide notification when designed space exceeds authorized space, and that the judiciary (1) retain caseload projections to improve the accuracy of 10-year judge planning and (2) establish and use courtroom sharing policies based on scheduling and use data. GSA and the judiciary agreed with most recommendations but expressed concerns with GAO's methodology and key findings, which GAO believes to be sound, as explained in the report. |
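The sharing ratios quoted above lend themselves to simple capacity arithmetic. The sketch below is a hypothetical illustration of that arithmetic, not GAO's actual courtroom sharing model; the judge counts passed in are invented for illustration, and only the ratios (3 district judges per 2 courtrooms, 3 senior judges per 1 courtroom) come from the testimony.

```python
import math

def courtrooms_needed(district_judges: int, senior_judges: int) -> tuple[int, int]:
    """Return (courtrooms under sharing, courtrooms at one per judge).

    Sharing ratios from the testimony: 3 district judges can share
    2 courtrooms, and 3 senior judges can share 1 courtroom.
    """
    shared = math.ceil(district_judges * 2 / 3) + math.ceil(senior_judges / 3)
    one_per_judge = district_judges + senior_judges
    return shared, one_per_judge

# Invented example counts, for illustration only.
shared, one_per_judge = courtrooms_needed(district_judges=180, senior_judges=120)
saved = one_per_judge - shared
print(f"one courtroom per judge: {one_per_judge}")
print(f"under sharing: {shared} (saves {saved} courtrooms, {saved / one_per_judge:.0%})")
```

Under these assumed ratios the savings fall roughly in the 35 to 50 percent range depending on the mix of district and senior judges, which is broadly consistent with the roughly 40 percent reduction GAO's model found for the courthouses built since 2000.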
The federal judiciary consists of the Supreme Court, 12 geographic circuit courts of appeals, 94 district courts, 91 bankruptcy courts, the Court of International Trade, the Court of Appeals for the Federal Circuit, and the Court of Federal Claims. The federal judiciary’s fiscal year 1996 budget is about $3.3 billion, and on September 30, 1995, it employed about 28,000 persons. For fiscal year 1997, the federal judiciary has requested congressional approval of a budget of about $3.6 billion, including a staff of about 31,000. Governance of the federal judiciary is substantially decentralized. The Judicial Conference of the United States, a body of 27 judges over which the Chief Justice of the United States presides, is the federal judiciary’s principal policymaking body. The Conference’s statutory responsibilities include considering administrative problems of the courts and making recommendations to the various courts to promote uniformity of management procedures and the expeditious conduct of court business. The Conference conducts its work principally through about 25 committees. In September 1993, the Judicial Conference established within its Budget Committee an Economy Subcommittee and charged it with reviewing judiciary operations to achieve greater fiscal responsibility, accountability, and efficiency. Created by Congress in 1939, AOUSC provides a wide range of administrative, legal, and program support services to the federal courts, including budgeting, space and facilities, automation, statistical analysis and reports, financial audit, and program and management evaluation. The AOUSC Director serves as the administrative officer for the courts under the supervision and direction of the Judicial Conference. AOUSC’s staff supports the work of the Conference and its committees, including the Economy Subcommittee. 
AOUSC provides analyses and recommendations on resource allocations to the Executive Committee of the Judicial Conference, which has final authority for resource allocations. Authorized by the same statute that created AOUSC, each of the 12 geographic judicial circuits has a judicial council with the authority to issue all necessary and appropriate orders for the effective and expeditious administration of justice within its circuit. Within each circuit, the Circuit Executive, whose duties vary by circuit, may have responsibility for conducting studies of the business and administration of the courts within the circuit. Neither Congress nor the Judicial Conference has formally charged chief judges with overall responsibility for the administration of their courts. Nevertheless, according to AOUSC, the chief judges of the appellate, district, and bankruptcy courts are generally expected to exercise whatever administrative authority is necessary for the effective and efficient operation of their individual courts. AOUSC program reviews, including on-site reviews of local court operations, are only one means by which the federal judiciary may assess its highly decentralized operations. The Judicial Conference of the United States, the circuit judicial councils, the chief judge of each court, and court unit executives, such as Chief Probation Officers or clerks of court, may all request AOUSC studies and support, or initiate their own reviews and assessments. Such studies may be undertaken by AOUSC staff alone, in conjunction with staff from local courts, or by outside experts and consultants, such as the National Academy of Public Administration. Our work focused on AOUSC’s reviews and assessments, principally on-site reviews of local court operations. The organization of this oversight function within AOUSC has varied since responsibility for court audits and disbursement of judicial funds was transferred from the Department of Justice to AOUSC in 1975. 
After this transfer, AOUSC created an audit unit to perform routine, cyclical financial audits and management reviews of the courts. Before 1985, this unit reviewed the finances and programs of the courts. Several AOUSC auditors, management analysts, and attorneys conducted the reviews, visiting a district for about 2 to 3 weeks, interviewing court personnel and reviewing records. Following an exit conference, the review team prepared a report that usually included recommendations. In 1985, AOUSC placed the financial and program review functions on different review cycles. Soon after the current AOUSC Director’s appointment in July 1985, Chief Justice Warren E. Burger established a committee of four judges to study AOUSC and provide advice on improving the agency. According to the AOUSC Director, judges and court officials whom the committee surveyed said AOUSC was too bureaucratic and controlling in its relationship with the courts. At about the same time, judges were telling the AOUSC Director that AOUSC’s management review process should be more sensitive to matters that are “exclusively the concern of the courts.” In 1988, the AOUSC Director discontinued the Office of Audit and Review; created an Office of Audit to conduct court financial reviews; delegated program review responsibilities to the program units; and established an evaluation unit, now known as the Office of Program Assessment (OPA), to oversee and coordinate program review efforts and to carry out special reviews and investigations. AOUSC’s Office of Audit is to conduct routine, cyclical financial reviews and oversee the work of contract financial auditors. Virtually all AOUSC offices have conducted reviews, special studies, evaluations, and surveys, which varied considerably by functional area. These reviews, often undertaken at the request of individual courts, have covered such issues as costs, budgets, spending, workload, outputs, and program implementation results. 
Some reviews have resulted in recommendations for improvements in such areas as processes and practices and the use of resources or technology. In responding to your request, our objective was to review AOUSC’s program assessment and efficiency promotion efforts regarding federal court operations. These operations include such court functions as the clerks of court offices, probation and pretrial services, judicial chambers management support, and statistical reporting. Generally, our review focused on AOUSC oversight activities conducted during fiscal years 1992 through 1994. Our approach was threefold. First, we met with top-level AOUSC officials and were briefed on AOUSC’s oversight and management assistance activities. We also met with managers from AOUSC program divisions that conducted program reviews and provided assistance to local courts to discuss oversight and management assistance functions and activities. Senior AOUSC management officials attended most of our meetings with managers from AOUSC’s program divisions. We reviewed manuals and other documentation on the operations and responsibilities of AOUSC’s management and program offices and divisions and AOUSC’s assessment and efficiency promotion activities. Second, using data maintained by the OPA, we identified 376 on-site reviews (excluding financial audits, judgeship surveys, and other nonprogrammatic reviews) conducted by selected program divisions during fiscal years 1992 through 1994 and requested reports on all of them. However, 244 of the reviews listed either did not result in written reports, were erroneous entries, had reports that AOUSC could not locate, were duplicate entries in OPA’s data, were never completed, or had reports that were still in draft form when we requested them. In the end, we reviewed 93 of the 132 written reports available to determine how the process followed in these reviews compared to generally accepted government auditing standards. 
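The winnowing of that review population can be expressed as a short reconciliation. The sketch below simply restates the counts given above; it adds no data beyond the text.

```python
# Reconciliation of the program-review population described above.
reviews_listed = 376      # on-site reviews in OPA's data, FY 1992 through 1994
excluded = 244            # no written report, erroneous or duplicate entries,
                          # unlocatable reports, never completed, or still in draft
written_reports = reviews_listed - excluded
reports_examined = 93     # reports compared to government auditing standards

print(f"written reports available: {written_reports}")
print(f"examined: {reports_examined} ({reports_examined / written_reports:.0%})")
```

The arithmetic confirms the figures in the text: 132 written reports were available, of which the 93 examined represent about 70 percent.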
We used a data collection instrument to systematically collect information from the program review reports, including the office that conducted the review, the person or persons who initiated the review and why, issues addressed, problems noted, efficient and effective practices identified, recommendations made, information available on the implementation of any recommendations, and standards and guidelines used for identifying problems and/or efficient and effective practices. Finally, we interviewed judges and court officials from a selection of appellate, district, and bankruptcy courts plus probation offices in a judgmentally selected cross-section of courts of different sizes in three regions of the country—the Northeast, Midwest, and South. We determined court unit size on the basis of fiscal year 1993 workload. For appellate, district, and bankruptcy courts, we used cases filed (rounded to the nearest hundred) as our measure of workload. For probation offices we used the total number of persons under supervision in each probation office (rounded to the nearest hundred). We also interviewed the chairs of four Judicial Conference committees: the Committee on the AOUSC (current and former chairs), the Committee on Court Administration and Case Management, the Committee on Judicial Resources, and the Budget Committee’s Economy Subcommittee (one of the co-chairs). We did our work primarily in Washington, D.C., between October 1994 and March 1996 in accordance with generally accepted government auditing standards. One or more senior AOUSC officials monitored our discussions with Judicial Conference Committee chairs and most of our meetings with the assistant directors of AOUSC’s program offices; however, we believe we were able to independently obtain needed information from the chairs and assistant directors. We obtained written comments from AOUSC on a draft of this report. Its comments are evaluated in this letter and are reprinted in full in appendix II. 
The need to keep costs down in an era of budgetary constraints has focused attention on the judiciary’s processes to ensure that it is operating as efficiently and effectively as possible. Program and financial reviews are one means of providing judges and managers information on the efficiency and effectiveness of court operations. To ensure that the information from these reviews is reliable, it is important that those conducting the reviews follow generally accepted government auditing standards. From 1988 until 1995, AOUSC’s program review process was decentralized and unstructured and did not always follow these standards. In November 1995, OPA issued written standards for conducting program assessments that, with two exceptions relating to reviewer independence and preparation of reports, track the generally accepted government auditing standards. Certain laws, regulations, and contracts require auditors who audit federal organizations, programs, activities, and functions to follow the generally accepted government auditing standards promulgated by the Comptroller General. The federal judiciary is not specifically required by statute to follow these auditing standards. AOUSC’s Office of Audit, which conducts financial audits, has chosen to follow the standards and also requires its contract auditors to do so. However, prior to November 1995, AOUSC had not prescribed uniform standards for its nonfinancial review activities. The generally accepted government auditing standards are broad statements of auditors’ responsibilities. 
They relate to both financial and performance audits and include general standards, which relate to the qualifications of the staff, the audit organization’s and the individual auditor’s independence, the exercise of due professional care in conducting the audit and in preparing related reports, and the presence of quality controls; fieldwork standards, which relate to the planning and supervision of the actual work, examination of compliance with laws and regulations, an understanding of management controls in place, and the quality of the evidence gathered during the audit; and reporting standards, which relate to the requirement for written reports and recommendations, the timeliness and contents of reports, the way in which reports are presented, and the distribution of reports. Although the judiciary is not specifically required by statute to follow the generally accepted government auditing standards, AOUSC has followed these standards for its financial audits. AOUSC officials told us they believed the standards were generally appropriate for nonfinancial reviews as well. AOUSC’s financial audit responsibilities are assigned by statute and follow the generally accepted government auditing standards. Until November 1995, however, AOUSC did not require its program offices and divisions to follow standards similar to the generally accepted government auditing standards in conducting program reviews. Each office and division had wide latitude to determine how it would review local court operations, the standards it would use during a review, and whether it would produce a written report. Program units adopted a variety of review approaches, ranging from conducting regular program reviews to having no identifiable review functions. The basic approach for most divisions and offices was one of consultation, with most reviews done at the request of the local court. 
The generally accepted government auditing standards require that a written report be prepared to communicate the results of the review to all who could act on the report’s recommendations. According to OPA data, 376 program reviews were conducted during fiscal years 1992 through 1994. However, according to OPA officials, only 132 of these resulted in written reports. Of these 132 written reports, 90 (68 percent) were prepared by the Federal Corrections and Supervision Division. One unit, the Contract and Services Division, did not produce reports as a matter of policy. According to OPA data, this division undertook 104 reviews during the 3-year period we reviewed, and it neither required nor produced written reports on the results of its reviews. A 1993 OPA assessment of the Division’s review process noted that its lack of written review results deprived current and successor division management and staff of valuable information about court practices that could help them identify trends, evaluate the success of program changes, and propose new initiatives. In contrast, the Federal Corrections and Supervision Division scheduled routine reviews of probation and pretrial offices, compared their performance to written policies and standards, and produced written reports of the reviews about 90 percent of the time. During the period of our review, AOUSC had no requirement that program units follow up on the implementation of any recommendations made to local courts. We found that follow-up on recommendations was inconsistent. Although the Federal Corrections and Supervision Division generally tracked the implementation of recommendations, most other divisions did not. AOUSC cannot compel a local court to comply with its recommendations. One court resisted upgrading its telephone system for 10 years because it preferred to acquire its system from a specific vendor in a sole-source procurement. 
AOUSC would not approve a noncompetitive procurement, but neither did it require the court to upgrade its costly system; the court therefore remained on an expensive lease. The court has only recently replaced its telephone system through a competitive procurement. AOUSC estimated that the new system would save about $133,000 per year. If the new system’s future annual savings had been achieved during the 10 years of the disagreement, the local court and, thus, the judiciary, could have avoided about $1 million in costs. AOUSC officials said such an impasse is unlikely to recur in today’s budget environment. Recognizing that its decentralized review process had resulted in reviews of uneven coverage and quality, AOUSC, through OPA, issued standards in November 1995 for conducting program assessments that, with two exceptions, track the generally accepted government auditing standards. The new standards require some type of written record of the results of any review and require follow-up of any reported significant findings and recommendations. OPA also issued a study guide to help AOUSC program unit staff select court units for review and conduct the reviews, and OPA plans to provide training on the new standards to AOUSC personnel in 1996. Finally, AOUSC has directed each program division to develop and share with OPA an internal assessment plan and to provide OPA with a copy of all assessment reports. However, the new OPA standards do not appear to adequately cover two issues in the generally accepted government auditing standards issued by the Comptroller General: independence of the reviewer and preparation of formal reports. Concerning organizational independence, the generally accepted standards state that program reviewers should be organizationally located outside the staff or line management function of the unit being reviewed. 
OPA’s standards call for review team members to be organizationally independent only “to the extent feasible.” In commenting on the standard, OPA adds that in cases where a review team is not organizationally independent, consideration should be given by management to having a peer review team evaluate the report prior to its issuance. Concerning preparation of formal reports, the generally accepted standard is that reviewers are to prepare written reports communicating the results of each review. The standard points out that written reports (1) communicate the results of reviews to officials at all levels of government, (2) make the results less susceptible to misunderstanding, (3) make the results available for public inspection, and (4) facilitate follow-up to determine whether appropriate corrective actions have been taken. OPA’s standards, however, allow for formal and informal reports, such as trip reports or memoranda to the files. The OPA standards state that management should require formal reports only when the reviewed organization requests one, significant findings are discovered during the review, or follow-up is required on any of the significant findings and recommendations. Copies of each completed report are to be sent to OPA, which is to summarize them for the Administrative Office Committee of the Judicial Conference and as appropriate for senior management of AOUSC. AOUSC has begun to coordinate its review activities. OPA has established a network of 35 program review officers within AOUSC, which is to meet every 1 to 3 months to discuss assessment issues. These officers are also to serve as focal points for reviews within their respective AOUSC program areas. During fiscal year 1996, AOUSC’s automated travel software is to be modified to permit OPA to monitor on-site visits by AOUSC program units, including information on the purpose of the trips and the locations to be visited. 
OPA has also initiated a series of “triage” visits to local courts in which a team visits for 2 to 3 days to discuss local operations and AOUSC’s relationship to the local court. The goal is to provide broad coverage of court operations and to ensure that program and administrative division reviews are complemented by broader based surveys and reviews. Selection criteria include (1) locations having comparatively low recent review activity by the key program areas; (2) statistical indicators, such as the presence of unusually high- or low-cost operations, unusual workload patterns or case mix, or case dispositions substantially different from national averages; (3) change of chief judge; (4) change of court clerk; (5) geographical factors; and (6) special requests and other factors. Prior to these visits, OPA is to develop a profile of the court by collecting a variety of workload and budgetary data on the local court, plus copies of prior reviews by AOUSC units. From these triage reports, OPA plans to develop a catalogue of common issues. As of May 1, 1996, OPA had completed triage reviews of six district courts and had two reviews under way. These actions, if consistently implemented, should help address many of the weaknesses in the previous program review process. However, AOUSC standards fall short of the generally accepted auditing standards in that they do not require (1) that program assessors be independent of the unit they are assessing or (2) that assessment reports be distributed to all officials who can act on the findings and recommendations. In an effort to reduce spending, in fiscal year 1996 the federal judiciary requested funding for only 86 percent of the staff it estimated would be needed to handle the expected workload. 
AOUSC officials estimated that the judiciary’s salaries and expenses appropriation request of about $2.6 billion would have been $139 million higher if the judiciary had requested funds to staff expected workload at 100 percent of staff needed, as determined by staffing formulas. To assist the various courts and administrative units in operating within this constrained budget, the judiciary has established two complementary focal points for identifying, disseminating, and incorporating more efficient ways of doing business. First, in 1993 the Judicial Conference established an Economy Subcommittee within its Budget Committee to (1) review the judiciary’s budget submission, (2) initiate and pursue studies about ways to economize while continuing to provide a consistently high quality of justice, and (3) be an “honest broker” of ideas relative to economy and efficiency. Second, at about the same time, the Conference’s Judicial Resources Committee and Economy Subcommittee directed AOUSC to undertake a comprehensive review of its work measurement methodology for court staffing to determine how greater efficiencies might be incorporated into the methodology. The Economy Subcommittee is a successor to the District Court Efficiencies Task Force. In 1992, at the request of the Judicial Resources Committee, AOUSC established the task force when it identified wide variations in the work processes and use of staff in district court clerks’ offices. In April 1993, the task force, composed of judges and court unit executives, developed a list of potentially efficient practices that was circulated to all district court chief judges for consideration and possible adoption. This list addressed such areas as jury and personnel management, and space and facilities. In coordination with the Judicial Conference’s various program committees, the Economy Subcommittee has sponsored studies to identify better practices. 
AOUSC created a support office for the Subcommittee, which has compiled a database of better practices, such as cost containment ideas, including those identified by the District Court Efficiencies Task Force. However, the database can be accessed only by the support office staff. Thus, to use the database to identify ideas about how to operate more efficiently, local courts must make a specific request. The Economy Subcommittee is also to serve as a critical reviewer of the budget requests of the program areas represented by each Judicial Conference committee, such as Defender Services or Automation and Technology. Although the Subcommittee can use these reviews as an opportunity to encourage the adoption of more efficient practices, it cannot require their adoption. The Judicial Resources Committee, in conjunction with the Economy Subcommittee, directed AOUSC to undertake a comprehensive program to ensure that greater efficiencies are incorporated in the staffing formulas, which are based on a work measurement methodology. In response, in 1994, a group of court unit executives, working with AOUSC, undertook a study that resulted in the creation of the methods analysis program (MAP). Managed by the Analytical Services Office (ASO), the program analyzes workload flows, processes, and methods in order to identify better, more efficient practices. For each organization reviewed (such as the clerk of court or probation office), ASO is to develop an overall analysis of the functions performed (such as case intake in the clerk of court office) and a detailed documentation and analysis of the work processes used to accomplish that function. The goal is to identify tasks that can be eliminated, transferred, or done more efficiently. The program includes incentives for local court units to adopt the better practices identified by the analysis. After a better practice has been identified and approved, courts will be encouraged, but not required, to adopt it. 
After several years, the staffing formula used to allocate staff to local courts is to be revised to reflect the effect of the better practices. To encourage immediate adoption of any practices that reduce costs, local courts may keep a portion of any savings they achieve through adoption of the practices. ASO began applying the program with probation offices and plans to review all the major functions of appellate, district, and bankruptcy courts. It recently completed a study of the case opening function in district and bankruptcy clerks’ offices. According to AOUSC officials, there are a variety of other ways in which information on better practices may be shared within the judiciary. For example, judges and other court personnel may share information on better practices in national and regional meetings, such as the meetings of AOUSC advisory groups, which include personnel from local courts. Internal publications, such as the Federal Court Management Report, training programs, and electronic bulletin boards, may also be used to highlight suggestions and findings. AOUSC’s role in the oversight of court operations is part of the broader structure of court management and governance. Through its support of the Judicial Conference and its committees, and the provision of guidance and advice to court units throughout the nation, AOUSC can provide a national, comparative perspective on court operations. Through its recommended budget allocations, AOUSC can also help to encourage the adoption of efficient practices throughout the federal judiciary. Until recently, AOUSC’s program review approach lacked structure, written guidance, and a central, accurate, and current repository of reviews and reports. 
The changes being implemented by AOUSC—establishment of a network of program review officers, publication of a study guide, identification of courts and program units to be visited, and revised standards for conducting program reviews—should, if properly implemented, address many of the oversight weaknesses noted above. However, the standards do not completely track the generally accepted government auditing standards issued by the Comptroller General. OPA’s standards allow AOUSC program and court staff to review programs they are responsible for administering. This is inconsistent with the generally accepted standard. OPA’s standards permit reviews to be performed by less-than-independent teams subject to a peer review. However, this approach may not be sufficient to overcome a potential perception of reviewer bias by knowledgeable third parties. OPA standards also allow “managers” to decide whether formal reports are to be issued after program reviews are conducted. This, too, is inconsistent with the generally accepted standard. OPA standards do not ensure that all review team findings will be documented and made available to judiciary officials who can act on them. AOUSC’s previous efforts at internal oversight lacked structure; the recent changes proposed by OPA represent not only a significant improvement but also a significant change in the way oversight has been conducted within AOUSC. Therefore, it would be prudent to monitor how the review officials and program units are implementing the revised system and operating under the new standards. The judiciary’s efforts to identify efficient practices and encourage their adoption by local court units seem appropriate. Many of these efforts are relatively recent and evidence is not yet available for measuring the extent of success. A key measure in this regard will be the number of local courts and program units that adopt “better practices” and either reduce or avoid increasing their budgets. 
To help ensure that AOUSC’s program assessments meet generally accepted auditing standards, we recommend that the Director of AOUSC direct OPA to (1) amend those standards to provide greater assurance against a potential perception of reviewer bias by knowledgeable third parties and greater assurance that all review team findings will be documented and reported to judiciary officials who can act on them, and (2) check each AOUSC division’s compliance with both its plan for conducting program assessments and the standards and study guide for conducting those assessments. AOUSC provided written comments on a draft of this report, which are printed in full in appendix II. AOUSC said that it agrees with the report’s recommendations and intends to “adopt them without reservation.” AOUSC’s written comments also discuss AOUSC activities that were outside the scope of this review. AOUSC provided technical comments separately, which we incorporated as appropriate. We are sending copies of this report to the Chairman of your Subcommittee, the Chairmen and Ranking Minority Members of other relevant House and Senate Committees with oversight and appropriations responsibilities for the federal judiciary, and the Director of the Administrative Office of the U.S. Courts. This report was prepared under the direction of William O. Jenkins, Jr., Assistant Director. Other major contributors are listed in appendix III. If you have any questions about this report, please call me on (202) 512-8777. AOUSC provides a broad spectrum of management, administrative, and program support to the federal courts. AOUSC’s executive staff, which oversees the provision of this support, comprises the Director and two associate directors—an Associate Director who is also General Counsel, and an Associate Director of Management and Operations. 
The Associate Director and General Counsel supervises the Office of General Counsel (OGC), which provides legal counsel and services to the AOUSC Director and staff, the Judicial Conference and its committees, and judges and other court officials. Among other services, OGC arranges legal representation for judges and court officials sued in their official capacity and represents AOUSC in bid protests and other administrative litigation. OGC also responds to legal inquiries relating to court operations from judges, court officials, Congress, executive branch agencies, and the general public. AOUSC has five management offices—the Office of Audit; Office of Management Coordination; Office of Program Assessment; Office of Judicial Conference Executive Secretariat; and the Office of Congressional, External and Public Affairs. The Office of Audit (OA) is responsible for the conduct of comprehensive financial audits of the courts’ financial operations and systems. This office provides guidance and oversight for the routine, cyclical financial audits performed by an outside contractor at each court every 2-1/2 years. The office is also responsible for special audits, such as those conducted for a change in accountable officer, and for audits of the central financial systems that support all of the courts. The Office of Audit periodically summarizes the results of individual audit reports to identify recurring and systemic problems. The Office of Management Coordination (OMC) provides general management and policy analysis support to the AOUSC Director and the Associate Director, Management and Operations, by conducting studies and providing advice on management, planning, organization, and publications. OMC is also responsible for coordinating and monitoring management improvement efforts agencywide in an effort to enhance organizational performance. 
In addition, OMC provides staff support and assistance to the Judicial Conference Committee on the Administrative Office. OMC coordinates AOUSC responses to committee recommendations and to suggestions or complaints from judicial officers directed to the committee. The Office of Program Assessment (OPA) is responsible for overseeing and monitoring the review and assessment processes for judiciary programs and operations by providing assistance to AOUSC program offices and divisions conducting reviews of court operations and by establishing and maintaining information reporting systems for these reviews. OPA is also responsible for monitoring AOUSC’s management controls program, which has been established to try to maximize the use of resources and to safeguard assets. In addition, OPA coordinates or conducts special reviews, evaluations, or investigations as requested. The Office of Judicial Conference Executive Secretariat (OJCES) provides staff support and assistance in planning and preparing official records of Judicial Conference meetings. OJCES also provides staff support and assistance to the Judicial Conference’s Executive Committee. In addition, OJCES is responsible for ensuring that AOUSC units provide effective staff support for Judicial Conference committees. The Office of Congressional, External and Public Affairs (OCEPA) is responsible for both the performance and the coordination of activities that involve the relationships of the federal judiciary with Congress, the executive branch, state government entities, the media, bar associations, other legal groups, and the public. OCEPA develops, presents, and promotes legislative initiatives approved by the Judicial Conference; prepares or coordinates responses to all policy or legislative inquiries from Congress; and identifies and monitors congressional activity that might have a major impact upon the federal judiciary. 
In addition to the five management offices, AOUSC has six broad program offices—the Office of Information and Technology; the Office of Court Programs; the Office of Facilities, Security and Administrative Services; the Office of Finance and Budget; the Office of Human Resources and Statistics; and the Office of Judges Programs. The Office of Information and Technology (OAT) includes seven offices and divisions, plus two training and support centers. OAT is responsible for the implementation of automated data processing, office automation, and information systems in the judiciary. OAT’s responsibilities include assisting in the formation of the judiciary’s automation plans and budgets, developing and implementing court automated systems, providing liaison services to help ensure that the needs of automation users are met, and overseeing and reporting on the use of the Judiciary Automation Fund. The Office of Court Programs’ (OCP) six offices and divisions are responsible for overseeing and supporting the judiciary’s clerks’ offices, court reporters, court interpreters, librarians, staff and conference attorneys, federal public defenders, and probation and pretrial services officers. OCP also is charged with facilitating the development of Judicial Conference policies regarding court administration, defender services, and probation and pretrial services; providing guidance to the courts by preparing procedural manuals; and conducting on-site reviews of court operations. The Court Administration divisions conduct Post Automation Reviews (PARs) in individual courts, and the Federal Corrections and Supervision Division conducts reviews of the Probation and Pretrial Automated Case Tracking System (PACTS) in individual probation and pretrial services offices. The Defender Services Division provides administrative support for and analyses of Defender Services workload and costs. 
The responsibilities of the Office of Facilities, Security and Administrative Services’ (OFSAS) six offices and divisions include security plans and operations (in coordination with the U.S. Marshals Service), procurement, property management, printing, nonautomation contracting, space and facilities, and relocation and travel functions. OFSAS provides security advice to the courts, develops procurement regulations, and assists courts in meeting their space needs. The office is also responsible for providing administrative support and services to AOUSC, including personnel services and management of the Thurgood Marshall Federal Judiciary Building. (The Office of Information and Technology was recently renamed; it was formerly called the Office of Automation and Technology.) The Office of Finance and Budget’s (OFB) five offices and divisions are responsible for conducting financial and budgetary analyses of judiciary programs, establishing fiscal and accounting policies for the judiciary, and coordinating the development of the judiciary’s budget request to Congress. Through its Economy Subcommittee Support Office, OFB is responsible for coordinating efforts to improve efficiency and economy in court administration. In addition, OFB is responsible for developing the work measurement formulas used to staff the offices of court clerks and probation and pretrial services offices and produces judicial impact statements that analyze the potential and actual effects of legislation on the judiciary. The Office of Human Resources and Statistics’ (OHRS) four offices and divisions are responsible for overseeing and managing the judiciary’s human resources and statistics functions, including the administration of personnel, payroll, retirement, and insurance programs. The Analytical Services Office is responsible for studying court work methods in an effort to improve operational efficiency through the judiciary’s new Methods Analysis Program. 
OHRS also develops training policies for AOUSC and court personnel, administers the new Court Personnel Management System, and analyzes and disseminates court workload data through its Statistics Division. The Office of Judges Programs’ (OJP) five offices and divisions provide administrative services to circuit, district, magistrate, and bankruptcy judges; conduct court surveys to determine the need for additional magistrate and bankruptcy judges; make recommendations regarding long-range planning for the judiciary; and provide staff support for several Judicial Conference committees, such as the Committee on Rules, Practice, and Procedure. OJP also provides technical assistance in chambers and case management, organizes orientation programs for new judges, and assists the Federal Judicial Center in planning and conducting training seminars. 
James R. Bradley, Senior Evaluator 
Dana L. DiPrima, Evaluator 
Rudolf F. Plessing, Senior Evaluator 
Lucine M. Willis, Evaluator 
Pursuant to a congressional request, GAO reviewed the Administrative Office of the U.S. Courts’ (AOUSC) assessment of local court operations, focusing on whether AOUSC is promoting efficient administrative practices within the judiciary. GAO found that: (1) AOUSC issued uniform written standards for nonfinancial program reviews in November 1995; (2) most of the reviews requested for fiscal years 1992 through 1994 were not conducted in accordance with generally accepted government auditing standards, contained incomplete reports, were not distributed properly, and did not indicate appropriate corrective actions; (3) AOUSC created a network of 35 program review officers who serve as focal points and advisors for review and assessment activities within AOUSC; (4) the Office of Program Assessment (OPA) has initiated a series of court visits to better identify program units needing additional oversight; (5) OPA does not require its program reviewers to be independent of the units they are assessing or to prepare formal reports for each program review; (6) AOUSC has created a program that achieves savings by systematically identifying better, more efficient practices; and (7) it is too early to determine whether the program is having an impact on operational costs. 
Passenger screening is a process by which personnel authorized by TSA inspect individuals and property to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item onboard an aircraft or into a sterile area. Passenger screening personnel must inspect individuals for prohibited items at designated screening locations. As shown in figure 1, the four passenger screening functions are X-ray screening of property, walk-through metal detector screening of individuals, hand-wand or pat-down screening of individuals, and physical search of property and trace detection for explosives. Typically, passengers are only subjected to X-ray screening of their carry-on items and screening by the walk-through metal detector. Passengers whose carry-on baggage alarms the X-ray machine, who alarm the walk-through metal detector, or who are designated as selectees—that is, passengers selected by the Computer-Assisted Passenger Prescreening System (CAPPS) or other TSA-approved processes to receive additional screening—are screened by hand-wand or pat-down and have their carry-on items screened for explosives traces or physically searched. The passenger checkpoint screening system is composed of three elements: the people responsible for conducting the screening of airline passengers and their carry-on items—TSOs; the technology used during the screening process; and the procedures TSOs are to follow to conduct screening. Collectively, these elements help to determine the effectiveness and efficiency of passenger checkpoint screening. TSOs screen all passengers and their carry-on baggage prior to allowing passengers access to their departure gates. There are several positions within TSA that perform and directly supervise passenger screening functions. Figure 2 provides a description of these positions. In May 2005, we reported on TSA’s efforts to train TSOs and to measure and enhance TSO performance. 
We found that TSA had initiated a number of actions designed to enhance passenger TSO, checked baggage TSO, and supervisory TSO training. However, at some airports TSOs encountered difficulty accessing and completing recurrent (refresher) training because of technological and staffing constraints. We also found that TSA lacked adequate internal controls to provide reasonable assurance that TSOs were receiving legislatively mandated basic and remedial training, and to monitor the status of its recurrent training program. Further, we reported that TSA had implemented and strengthened efforts to collect TSO performance data as part of its overall effort to enhance TSO performance. We recommended that TSA develop a plan for completing the deployment of high-speed Internet/intranet connectivity to all TSA airport training facilities, and establish appropriate responsibilities and other internal controls for monitoring and documenting TSO compliance with training requirements. DHS generally concurred with our recommendations and stated that TSA has taken steps to implement them. There are typically four types of technology used to screen airline passengers and their carry-on baggage at the checkpoint: walk-through metal detectors, X-ray machines, hand-held metal detectors, and explosive trace detection (ETD) equipment. The President’s fiscal year 2007 budget request noted that emerging checkpoint technology will enhance the detection of prohibited items, especially firearms and explosives, on passengers. As of December 2006, TSA plans to conduct operational tests of three types of passenger screening technologies within the next year. TSA has conducted other tests in the past; for example, during fiscal year 2005, TSA operationally tested document scanners, which use explosive trace detection technology to detect explosives residue on passengers’ boarding passes or identification cards. 
TSA decided not to expand the use of the document scanner, in part because of the extent to which explosives traces had to be sampled manually. TSA also plans to begin operational tests of technology that would screen bottles for liquid explosives. We are currently evaluating the Department of Homeland Security’s and TSA’s progress in planning for, managing, and deploying research and development programs in support of airport checkpoint screening operations. We expect to report our results in August 2007. TSA has developed checkpoint screening standard operating procedures, which are the focus of this report, that establish the process and standards by which TSOs are to screen passengers and their carry-on items at screening checkpoints. Between April 2005 and December 2005, based on available documentation, TSA deliberated 189 proposed changes to passenger checkpoint screening SOPs, 92 of which were intended to modify the way in which passengers and their carry-on items are screened. TSA issued six versions of the passenger checkpoint screening SOPs during this period. TSA modified passenger checkpoint screening SOPs to enhance the traveling public’s perception of the screening process, improve the efficiency of the screening process, and enhance the detection of prohibited items and suspicious persons. As shown in table 1, 48 of the 92 proposed modifications to passenger checkpoint screening SOPs were implemented, and the types of modifications made or proposed generally fell into one of three categories—customer satisfaction, screening efficiency, and security. TSA used various processes between April 2005 and December 2005 to modify passenger checkpoint screening SOPs, and a variety of factors guided TSA’s decisions to modify SOPs. 
TSA’s processes for modifying SOPs generally involved TSA staff recommending proposed modifications, reviewing and commenting on proposed modifications, and TSA senior leadership making final decisions as to whether proposed modifications should be implemented. During our 9-month review period, TSA officials considered 92 proposed modifications to the way in which passengers and their carry-on items were screened, and 48 were implemented. TSA officials proposed SOP modifications based on risk factors (threat and vulnerability information), day-to-day experiences of airport staff, and concerns and complaints raised by passengers. TSA then made efforts to balance security, efficiency, and customer service when deciding which proposed SOP modifications to implement. Consistent with our prior work that has shown the importance of data collection and analyses to support agency decision making, TSA conducted data collection and analysis for certain proposed SOP modifications that were tested before they were implemented at all airports. Nevertheless, we found that TSA could improve its data collection and analysis to assist the agency in determining whether the proposed procedures would enhance detection or free up TSO resources, when intended. In addition, TSA did not maintain complete documentation of proposed SOP modifications; therefore, we could not fully assess the basis for proposed SOP modifications or the reasons why certain proposed modifications were not implemented. TSA officials acknowledged that it is beneficial to maintain documentation on the reasoning behind decisions to implement or reject SOP modifications deemed significant. Proposed SOP modifications were submitted and reviewed under two processes during our 9-month review period, and for each process, TSA senior leadership made the final decision as to whether the proposed modifications would be implemented. 
One of the processes TSA used to modify passenger checkpoint screening SOPs involved TSA field staff or headquarters officials, and, to a lesser extent, TSA senior leadership, suggesting ways in which passenger checkpoint screening SOPs could be modified. These suggestions were submitted through various mechanisms, including electronic mail and an SOP panel review conducted by TSA airport personnel. (These methods are described in more detail in app. II.) Eighty-two of the 92 proposed modifications were considered under this process. If TSA officials determined, based on their professional judgment, that the recommended SOP modifications—whether from headquarters or the field—merited further consideration, or if a specific modification was proposed by TSA senior leadership, the following chain of events occurred: First, the procedures branch of the Office of Security Operations drafted SOP language for each of the proposed modifications. Second, the draft language for each proposed modification was disseminated to representatives of various TSA divisions for review, and the language was revised as needed. Third, TSA officials tested proposed modifications in the airport operating environment if they found it necessary to: assess the security impact of the proposed modification, evaluate the impact of the modification on the amount of time taken for passengers to clear the checkpoint, measure the impact of the proposed modification on passengers and industry partners, or determine training needs created by the proposed modification. Fourth, the revised SOP language for proposed modifications was sent to the heads of several TSA divisions for comment. Fifth, considering the comments of the TSA division heads, the head of the Office of Security Operations or other TSA senior leadership made the final decision as to whether proposed modifications would be implemented. 
Another process for modifying passenger checkpoint screening SOPs during our 9-month review period was carried out by TSA’s Explosives Detection Improvement Task Force. The task force was established in October 2005 by the TSA Assistant Secretary to respond to the threat of improvised explosive devices (IED) being carried through the checkpoint. The goal of the task force was to apply a risk-based approach to screening passengers and their baggage in order to enhance TSA’s ability to detect IEDs. The task force developed 13 of the 92 proposed SOP modifications that were considered by TSA between April 2005 and December 2005. The task force solicited and incorporated feedback from representatives of various TSA divisions on these proposed modifications and presented them to TSA senior leadership for review and approval. TSA senior leadership decided that 8 of the 13 proposed modifications should be operationally tested—that is, temporarily implemented in the airport environment for the purposes of data collection and evaluation—to better inform decisions regarding whether the proposed modifications should be implemented. Following the testing of these proposed modifications in the airport environment, TSA senior leadership decided to implement 7 of the 8 operationally tested changes. (The task force’s approach to testing these procedures is discussed in more detail below.) Following our 9-month period of review, the changes that TSA made to its passenger checkpoint screening SOPs in response to the alleged August 2006 liquid explosives terror plot were decided upon by DHS and TSA senior leadership, with some input from TSA field staff, aviation industry representatives, and officials from other federal agencies. 
Based on available documentation, risk factors (i.e., threats to commercial aviation and vulnerability to those threats), day-to-day experiences of airport staff, and complaints and concerns raised by passengers were the basis for TSA staff and officials proposing modifications to passenger checkpoint screening SOPs. Fourteen of the 92 procedure modifications recommended by TSA staff and officials were based on reported or perceived threats to commercial aviation, and existing vulnerabilities to those threats. For example, the Explosives Detection Improvement Task Force proposed SOP modifications based on threat reports developed by TSA’s Intelligence and Analysis division. Specifically, in an August 2005 civil aviation threat assessment, the division reported that terrorists are likely to seek novel ways to evade U.S. airport security screening. Subsequently, the task force proposed that the pat-down procedure performed on passengers selected for additional screening be revised to include not only the torso area, which is what the previous pat-down procedure entailed, but additional areas of the body such as the legs. The August 2005 threat assessment also stated that terrorists may attempt to carry separate components of an IED through the checkpoint, then assemble the components while onboard the aircraft. To address this threat, the task force proposed a new procedure to enhance TSOs’ ability to search for components of improvised explosive devices. According to TSA officials, threat reports have also indicated that terrorists rely on the routine nature of security measures in order to plan their attacks. To address this threat, the task force proposed a procedure that incorporated unpredictability into the screening process by requiring designated TSOs to randomly select passengers to receive additional search procedures. 
Following our 9-month review period, TSA continued to use threat information as the basis for proposed modifications to passenger checkpoint screening SOPs. In August 2006, TSA proposed modifications to passenger checkpoint screening SOPs after receiving threat information regarding an alleged terrorist plot to detonate liquid explosives onboard multiple aircraft en route from the United Kingdom to the United States. Regarding vulnerabilities to reported threats, based on the results of TSA’s own covert tests (undercover, unannounced tests), TSA’s Office of Inspection recommended SOP modifications to enhance the detection of explosives at the passenger screening checkpoint. TSA officials also proposed modifications to passenger checkpoint screening SOPs based on their professional judgment regarding perceived threats to aviation security. For example, an FSD recommended changes to the screening of funeral urns based on a perceived threat. In some cases, proposed SOP modifications appeared to reflect threat information analyzed by TSA officials. For example, TSOs are provided with Threat in the Spotlight, a weekly report that identifies new threats to commercial aviation, examples of innovative ways in which passengers may conceal prohibited items, and pictures of items that may not appear to be prohibited items but actually are. TSOs are also provided relevant threat information during briefings that take place before and after their shifts. In addition, FSDs are provided classified intelligence summaries on a daily and weekly basis, as well as monthly reports of suspicious incidents that occurred at airports nationwide. TSA’s consideration of threat and vulnerability—through analysis of current documentation and by exercising professional judgment—is consistent with a risk-based decision-making approach. 
As we have reported previously, and DHS and TSA have advocated, a risk-based approach, as applied in the homeland security context, can help to more effectively and efficiently prepare defenses against acts of terrorism and other threats. TSA headquarters and field staff also based proposed SOP modifications—specifically, 36 of the 92 proposed modifications—on experience in the airport environment. For example, TSA headquarters officials conduct reviews at airports to identify best practices and deficiencies in the checkpoint screening process. During one of these reviews, headquarters officials observed that TSOs were not fully complying with the pat-down procedure. After discussions with TSOs, TSA headquarters officials determined that the way in which TSOs were conducting the procedure was more effective than the procedure prescribed in the SOP. In addition, TSA senior leadership, after learning that small airports had staffing challenges that precluded them from ensuring that passengers are patted down by TSOs of the same gender, proposed that opposite-gender pat-down screening be allowed at small airports. Passenger complaints and concerns shared with TSA also served as a basis for proposed modifications during our 9-month review period. Specifically, of the 92 proposed SOP modifications considered during this period, TSA staff and officials recommended 29 modifications based on complaints and concerns raised by passengers. For example, TSA headquarters staff recommended allowing passengers to hold their hair while being screened by the Explosives Trace Portal, after receiving complaints from passengers about eye injuries from hair blowing in their eyes and hair being caught in the doors of the portal. When deciding whether to implement proposed SOP modifications, TSA officials also made efforts to balance the impact of proposed modifications on security, efficiency, and customer service. 
TSA’s consideration of these factors reflects the agency’s mission to protect transportation systems while also ensuring the free movement of people and commerce. As previously discussed, TSA sought to improve the security of the commercial aviation system by modifying the SOP for conducting the pat-down search. (TSA identified the modified pat-down procedure as the “bulk-item” pat-down.) When deciding whether to implement the proposed modification, TSA officials considered not only the impact that the bulk-item pat-down procedure would have on security, but also the impact that the procedure would have on screening efficiency and customer service. For example, TSA officials determined that the bulk-item pat-down procedure would not significantly affect efficiency because it would only add a few seconds to the screening process. Following our 9-month review period, TSA continued to make efforts to balance security, efficiency, and customer service when deciding whether to implement proposed SOP modifications, as illustrated by TSA senior leadership’s deliberation on proposed SOP modifications in response to the alleged August 2006 liquid explosives terrorist plot. TSA modified the passenger checkpoint screening SOP four times between August 2006 and November 2006 in an effort to defend against the threat of terrorists’ use of liquid explosives onboard commercial aircraft. While the basis for these modifications was to mitigate risk, as shown in table 2, TSA senior leadership considered several other factors when deciding whether to implement the modifications. 
As TSA senior leadership obtained more information about the particular threat posed by the liquid explosives through tests conducted by DHS’s Science and Technology Directorate and FBI, TSA relaxed the restrictions to allow passengers to carry liquids, gels, and aerosols onboard aircraft in 3-fluid-ounce bottles—and as of November 2006, 3.4-fluid-ounce bottles—that would easily fit in a quart-sized, clear plastic, zip-top bag. TSA senior leadership identified both benefits and drawbacks to this SOP modification, but determined that the balance of security, efficiency, and customer service that would result from these SOP changes was appropriate. As shown in table 2, TSA officials recognize that there are security drawbacks—or vulnerabilities—associated with allowing passengers to carry even small amounts of liquids and gels onboard aircraft. For example, two or more terrorists could combine small amounts of liquid explosives after they pass through the checkpoint to generate an amount large enough to possibly cause catastrophic damage to an aircraft. However, TSA officials stated that doing so would be logistically challenging given the physical harm that the specific explosives could cause to the person handling them, and that suspicion among travelers, law enforcement officials, and airport employees would likely be raised if an individual was seen combining the liquid contents of small containers stored in two or more quart-sized plastic bags. TSA officials stated that at the time of the modifications to the liquids, gels, and aerosols screening procedures, there was consensus among explosives detection experts, both domestically and abroad, regarding TSA’s assumptions about how the explosives could be used and the damage they could cause to an aircraft. 
TSA officials also stated that after reviewing the intelligence information related to the alleged August 2006 London terror plot—particularly with regard to the capability and intent of the terrorists—TSA determined that allowing small amounts of liquids, gels, and aerosols onboard aircraft posed an acceptable level of risk to the commercial aviation system. Moreover, TSA officials acknowledged that there are vulnerabilities with allowing passengers to carry liquids that are exempted from the 3.4-fluid-ounce limit—such as baby formula and medication—onboard aircraft. TSA officials stated that the enhancements TSA is making to the various other layers of aviation security will help address the security vulnerabilities identified above. For example, TSA has increased explosives detection canine patrols, deployed Federal Air Marshals on additional international flights, increased random screening of passengers at boarding gates, and increased random screening of airport and TSA employees who pass through the checkpoint. TSA also plans to expand implementation of its Screening Passengers by Observation Technique (SPOT) to additional airports. SPOT involves specially trained TSOs observing the behavior of passengers and resolving any suspicious behavior through casual conversation with passengers and referring suspicious passengers to selectee screening. TSA intends for SPOT to provide a flexible, adaptable, risk-based layer of security that can be deployed to detect potentially high-risk passengers based on certain behavioral cues. While professional judgment regarding risk factors, experience in the operating environment, and customer feedback have guided many of the decisions TSA leadership made about which screening procedures to implement, TSA also sought to use empirical data as a basis for evaluating the impact some screening changes could have on security and TSO resources. 
The TSA Assistant Secretary stated in December 2005 that TSA sought to make decisions about screening changes based on data and metrics—a practice he said TSA would continue. The use of data and metrics to inform TSA’s decision making regarding implementing proposed screening procedures is consistent with our prior work that has shown the importance of data collection and analyses to support agency decision making. Between October 2005 and January 2006, TSA’s Explosives Detection Improvement Task Force sought to collect data as part of an effort to test the impact of seven proposed procedures at selected airports, as noted earlier. These seven proposed procedures were selected because officials believed they would have a significant impact on how TSOs perform daily screening functions, TSO training, and customer acceptability. According to TSA’s chief of security operations, the purpose of testing these procedures in the airport environment was to ensure that TSA was “on the right path” in implementing them. These particular procedures were considered by senior TSA officials as especially important for enhancing the detection of explosives and for deterring terrorists from attempting to carry out an attack. According to TSA, some of the proposed procedures could also free up TSOs so that they could spend more time on procedures for detecting explosives and less time on procedures associated with low security risks, such as identifying small scissors in carry-on bags. The seven proposed procedures tested by the task force reflect both new procedures and modifications to existing procedures, as shown in table 3. 
Our analysis of TSA’s data collection and data analysis for the seven procedures that were operationally tested identified several problems that affected TSA’s ability to determine whether these procedures, as designed and implemented by TSA, would have the intended effect—to enhance the detection of explosives during the passenger screening process or to free up resources so that explosives detection procedures could be implemented. Although the deterrence of persons intending to do harm is also an intended effect of some proposed SOP modifications, TSA officials said that it is difficult to assess the extent to which implementation of proposed procedures would deter terrorists. The Office of Management and Budget has also acknowledged the difficulty in measuring deterrence, particularly for procedures intended to prevent acts of terrorism. While we agree that measuring deterrence is difficult, opportunities exist for TSA to strengthen its analyses to help provide information on whether the proposed procedures would enhance detection or free up TSO resources, when intended. Screening Passengers by Observation Technique. TSA officials stated that SPOT is intended to both deter terrorists and identify suspicious persons who intend to cause harm while on an aircraft. While we recognize that it is difficult to assess the extent to which terrorists are deterred by the presence of designated TSOs conducting behavioral observations at the checkpoint, we believe that there is an opportunity to assess whether SPOT contributes to enhancing TSA’s ability to detect suspicious persons who may intend to cause harm on an aircraft. One factor that may serve as an indicator that a person intends to do harm on an aircraft is whether that individual is carrying a prohibited item. 
TSA collected and assessed data at 14 airports for various time periods on the number of prohibited items found on passengers who were targeted under SPOT and referred to secondary screening or law enforcement officials. However, these data collection efforts alone did not enable TSA to determine whether the detection of prohibited items would be enhanced if SPOT were implemented because TSA had no means of comparing whether persons targeted by SPOT were more likely to carry prohibited items than persons not targeted by SPOT. To obtain this information, the task force would have had to collect data on the number of passengers not targeted by SPOT who had prohibited items on them. This information could be used to determine whether a greater percentage of passengers targeted under SPOT are found to have prohibited items than those passengers who are not targeted by SPOT, which could serve as one indicator of the extent to which SPOT would contribute to the detection of passengers intending to cause harm on an aircraft. Although it has not yet done so, it may be possible for TSA to evaluate the impact of SPOT on identifying passengers carrying prohibited items. There is precedent in other federal agencies for evaluating the security benefits of similar procedures. For instance, U.S. Customs and Border Protection (CBP) within DHS developed the Compliance Examination (COMPEX) system to evaluate the effectiveness of its procedures for selecting international airline passengers for secondary screening. Specifically, COMPEX compares the percentage of targeted passengers on which prohibited items are found to the percentage of randomly selected passengers on which prohibited items are found. The premise is that targeting is considered to be effective if a greater percentage of targeted passengers are found to possess prohibited items than the percentage of randomly selected passengers, and the difference between the two percentages is statistically significant. 
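The COMPEX premise described above is, in effect, a two-proportion comparison. The sketch below illustrates how such a comparison could be computed; the passenger counts and find rates are invented for illustration only and are not CBP or TSA data:

```python
import math

def two_proportion_z(hits_targeted, n_targeted, hits_random, n_random):
    """Pooled two-proportion z statistic for comparing find rates."""
    p_targeted = hits_targeted / n_targeted  # find rate among targeted passengers
    p_random = hits_random / n_random        # find rate among random passengers
    pooled = (hits_targeted + hits_random) / (n_targeted + n_random)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_targeted + 1 / n_random))
    return (p_targeted - p_random) / se

# Invented counts: 50 finds in 1,000 targeted screenings vs. 20 finds
# in 1,000 randomly selected screenings.
z = two_proportion_z(50, 1_000, 20, 1_000)
significant = z > 1.96  # conventional threshold at the 5 percent level
```

Under these invented counts, targeting would be judged effective: the targeted find rate (5 percent) exceeds the random find rate (2 percent), and the z statistic (about 3.65) exceeds the 1.96 threshold, so the difference is statistically significant.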
CBP officials told us in May 2006 that they continue to use COMPEX to assess the effectiveness of their targeting of international airline passengers. When asked about using a method such as COMPEX to assess SPOT, TSA officials stated that CBP and TSA are seeking to identify different types of threats through their targeting programs. CBP, through its targeting efforts, is attempting to identify passengers with contraband and unauthorized aliens, whereas TSA, through SPOT, is attempting to identify potential high-risk passengers. Additionally, in commenting on a draft of this report, DHS stated that, according to TSA, the possession of a prohibited item is not a good measure of SPOT effectiveness because an individual may not intend to use a prohibited item to cause harm or hijack an aircraft. While it may be possible for a terrorist to cause harm or hijack an aircraft without using a prohibited item, as in the case of the September 11 terrorist attacks, other terrorist incidents and threat information indicate that terrorists who carried out or planned to carry out an attack on a commercial aircraft intended to do so by using prohibited items, including explosives and weapons. Therefore, we continue to believe that comparing the percentage of individuals targeted and not targeted under SPOT on which prohibited items are found could be one of several potential indicators of the effectiveness of SPOT. Such a measure may be most useful with regard to the prohibited items that could be used to bring down or hijack an aircraft. TSA officials stated that the agency agrees in principle that measuring SPOT effectiveness, if possible, may provide valuable insights. Unpredictable Screening Process, Bulk-Item Pat-Down Search, and IED Component Search. We found that the task force also could have strengthened its efforts to evaluate the security impact of other proposed procedures—specifically, USP, the bulk-item pat-down search, and the IED component search. 
For all three of these procedures, the task force did not collect any data during the operational testing that would help determine whether they would enhance detection capability. TSA officials told us that they did not collect these data because they had a limited amount of time to test the procedures; TSA had to make SOP modifications quickly as part of the agency’s efforts to focus on higher threats, such as explosives, and to meet the TSA Assistant Secretary’s goal of implementing the SOP modifications before the 2005 Thanksgiving holiday travel season. Nevertheless, TSA officials acknowledged the importance of evaluating whether proposed screening procedures, including USP and the bulk-item pat-down, would enhance detection capability. TSA officials stated that covert testing has been used to assess TSOs’ ability to detect prohibited items, but covert testing was not implemented during operational testing of proposed procedures. Office of Inspection officials questioned whether covert testing could be used to test, exclusively, the security benefit of proposed procedures, because TSO proficiency and the capability of screening technology also factor into whether threat objects are detected during covert tests. Four of the five aviation security experts we interviewed acknowledged this limitation but stated that covert testing is the best way to assess the effectiveness of passenger checkpoint screening. In commenting on a draft of this report, DHS stated that, according to TSA, USP is intended to disrupt terrorists’ planning of an attack by introducing unpredictability into the passenger checkpoint screening process, and tools such as covert testing could not be used to measure the effectiveness of USP to this end. 
While we agree that covert testing may not be a useful tool to assess the impact USP has on disrupting terrorists’ plans and deterring terrorists from attempting to carry out an attack, we continue to believe that covert testing could have been used to assess whether USP would have helped to enhance detection capability during the passenger screening process, which TSA officials stated was another intended result of USP. Although TSA did not collect data on the security impact of the USP and bulk-item pat-down procedures, the task force did collect data on the impact these procedures had on screening efficiency—the time required to perform procedures—and on the reaction of TSOs, FSDs, and passengers to the proposed procedures. These data indicated that the USP procedure took less time, on average, for TSOs to conduct than the procedure it replaced (the random continuous selectee screening process); the revised pat-down procedure took TSOs about 25 seconds to conduct; and that passengers generally did not complain about the way in which both procedures were conducted. With respect to operational testing of the IED component search procedure, TSA was unable to collect any data during the testing period because no IEDs were detected by TSOs at the airports where the testing took place. As with the USP and bulk-item pat-down procedures, TSA could have conducted covert tests during the operational testing period to gather simulated data for the IED search procedure, in the absence of actual data. Selectee Screening Changes and Threat Area Search. Recognizing that some of the proposed procedures intended to enhance detection would require additional TSO resources, TSA implemented several measures aimed collectively at freeing up TSOs’ time so that they could focus on conducting more procedures associated with higher threats—identifying explosives and suspicious persons. 
For example, TSA modified the selectee screening procedure and the procedure for searching carry-on items—the threat area search—in order to reduce screening time. During an informal pilot of these proposed procedures at 3 airports in November 2005, TSA determined that the proposed selectee screening procedure would reduce search time of each selectee passenger, on average, by about 1.17 minutes at these airports. TSA also determined through this study that the proposed threat area search, on average, took 1.83 minutes to conduct at the participating airports, as compared to the existing target object search that took, on average, 1.89 minutes, and the existing whole bag search that took, on average, 2.37 minutes. Prohibited Items List Changes. Another measure TSA implemented to free up TSO resources to focus on higher threats involved changes to the list of items prohibited onboard aircraft. According to TSA, TSOs were spending a disproportionate amount of TSA’s limited screening resources searching for small scissors and small tools, even though, based on threat information and TSA officials’ professional judgment, such items no longer posed a significant security risk given the multiple layers of aviation security. TSA officials surmised that by not having to spend time and resources physically searching passengers’ bags for low-threat items, such as small scissors and tools, TSOs could focus their efforts on implementing more effective and robust screening procedures that can be targeted at screening for explosives. To test its assumption that a disproportionate amount of TSO resources was being spent searching for small scissors and tools, TSA collected information from several sources. 
First, TSA reviewed data maintained in TSA’s Performance Management Information System (PMIS), which showed that during the third and fourth quarters of fiscal year 2005 (a 6-month period), TSOs confiscated a total of about 1.8 million sharp objects other than knives or box cutters. These sharp objects constituted 19 percent of all prohibited items confiscated at the checkpoint. Second, based on information provided by FSDs, TSOs, and other screening experts, TSA determined that scissors constituted a large majority of the total number of sharp objects found at passenger screening checkpoints. Third, TSA headquarters officials searched through confiscated items bins at 4 airports and found that most of the scissors that were confiscated had blades less than 4 inches in length. Based on these collective efforts, TSA concluded that a significant number of items found at the checkpoint were low-threat, easily identified items, such as small scissors and tools, and that a disproportionate amount of time was spent searching for these items—time that could have been spent searching for high-threat items, such as explosives. TSA also concluded that because TSOs can generally easily identify scissors, if small scissors were no longer on the prohibited items list, TSOs could avoid conducting time-consuming physical bag searches to locate and remove these items. While we commend TSA’s efforts to supplement professional judgment with data and metrics in its decision to modify passenger checkpoint screening procedures, TSA did not conduct the necessary analysis of the data collected to determine the extent to which the removal of small scissors and tools from the prohibited items list could free up TSO resources. 
Specifically, TSA did not analyze the data on sharp objects confiscated at the checkpoint along with other relevant factors, such as the amount of time taken to search for scissors and the number of TSOs at the checkpoint conducting these searches, to determine the extent to which TSO resources could actually be freed up. Based on our analysis of TSA’s data for the 6-month period, where we considered these other relevant factors, we determined that TSOs spent, on average, less than 1 percent of their time—about 1 minute per day over the 6-month period—searching for the approximately 1.8 million sharp objects, other than knives and box cutters, that were found at passenger screening checkpoints between April 2005 and September 2005. If the average amount of time TSOs spent searching for sharp objects per day over a 6-month period was less than 1 minute per TSO, and sharp objects constituted just 19 percent of all prohibited items confiscated at checkpoints over this period, then it may not be accurate to assume that no longer requiring TSOs to search for small scissors and tools would significantly contribute to TSA’s efforts to free up TSO resources that could be used to implement other security measures. To further support its assertion that significant TSO resources would be freed up as a result of removing small scissors and tools from the list of prohibited items, TSA officials cited the results of an informal study conducted in October 2005, which was intended to provide a general idea of the types of prohibited items TSOs were finding as a result of their searches and how long various types of searches were taking TSOs to conduct. Specifically, according to the study conducted at 9 airports over a 14-day period, TSA determined that 24 percent of items found during carry-on bag searches were scissors. However, based on data regarding the number of bags searched, removing scissors may not significantly contribute to TSA’s efforts to free up TSO resources. 
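The order of magnitude of this estimate can be illustrated with a back-of-envelope sketch. Only the 1.8 million item count and the approximately 6-month period are taken from the analysis above; the per-item search time and workforce size below are illustrative assumptions, not TSA figures:

```python
# Only the item count and the 6-month period come from the analysis above;
# the remaining inputs are illustrative assumptions, not TSA figures.
SHARP_OBJECTS_FOUND = 1_800_000   # sharp objects confiscated, Apr.-Sep. 2005
DAYS_IN_PERIOD = 183              # approximate length of the 6-month period
MINUTES_PER_SEARCH = 2.0          # assumed physical bag-search time per find
TSO_WORKFORCE = 40_000            # assumed number of TSOs on duty daily

items_per_day = SHARP_OBJECTS_FOUND / DAYS_IN_PERIOD
search_minutes_per_day = items_per_day * MINUTES_PER_SEARCH
minutes_per_tso_per_day = search_minutes_per_day / TSO_WORKFORCE
print(round(minutes_per_tso_per_day, 2))  # roughly half a minute per TSO
```

Under these assumptions the result is about half a minute per TSO per day; even doubling the assumed search time leaves it on the order of a minute, consistent with the conclusion that eliminating these searches would free up little TSO time.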
TSA conducted additional informal studies 30, 60, and 90 days after the prohibited items list change went into effect to determine whether the change had resulted in reductions in the percentage of carry-on bags that were searched and overall screening time. However, we identified limitations in TSA’s methodology for conducting these studies. In February 2007, a TSA official stated that some FSDs interviewed several TSOs after the prohibited items list change went into effect, and these TSOs reported that the change did save screening time. However, TSA could not identify how many TSOs were interviewed, at which airports the TSOs were located, and how the TSOs were selected for the interview; nor did TSA document the results of these interviews. TSA also did not use random selection or representative sampling when determining which TSOs should be interviewed. Therefore, the interview results cannot be generalized. TSA officials acknowledged that they could have made some improvements in the various analyses they conducted on the prohibited items list change. However, they stated that they had to make SOP modifications quickly as part of the agency’s efforts to focus on higher threats, such as explosives, and the TSA Assistant Secretary’s goal of implementing the SOP modifications before the 2005 Thanksgiving holiday travel season. Additionally, officials stated that they continue to view their decision to remove small scissors and tools from the prohibited items list as sound, particularly because they believe small scissors and tools do not pose a significant threat to aviation security. TSA officials also stated that they believe the prohibited items list change would free up resources based on various sources of information, including the professional judgment of TSA airport staff, and their analysis of PMIS data on prohibited items confiscated at checkpoints. 
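The generalizability concern noted above can be made concrete: had TSA drawn a random sample of TSOs, a standard confidence interval would indicate how precisely the interview results could be projected to the workforce. The sketch below uses an invented sample (60 of 100 randomly sampled TSOs reporting time savings), not actual TSA data:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Invented sample: 60 of 100 randomly sampled TSOs report time savings.
low, high = proportion_ci(60, 100)
```

With a random sample of this size, the share of the workforce reporting time savings could be estimated to within roughly 10 percentage points (about 50 to 70 percent here). Without random selection, no such interval can be constructed, which is why the interview results cannot be generalized.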
The TSA Assistant Secretary told us that even if TSA determined that the proposed SOP modifications would not free up existing TSO resources to conduct explosives detection procedures, he would have implemented the modifications anyway considering the added security benefit of the explosives detection procedures. Additionally, a TSA headquarters official responsible for airport security operations stated that to help strengthen the agency’s analysis of future proposed SOP changes, the agency plans to provide the Explosives Detection Improvement Task Force with the necessary resources to help improve its data collection and analysis. An additional measure intended to free up TSO resources involved changes to CAPPS rules. TSA’s assumption is that these changes could allow TSOs who were normally assigned to selectee screening duties to be reassigned to new procedures, such as USP, which may require new screening positions. (Both USP and SPOT require TSO positions: USP requires one screening position for every two screening lanes, while SPOT typically uses more than one screening position per ticket checker at the checkpoint.) According to FSDs we interviewed, the changes made to the prohibited items list and the CAPPS rules had not freed up existing TSO resources, as intended. Specifically, as of August 2006, 13 of 19 FSDs we interviewed at airports that tested USP or SPOT said that TSO resources were not freed up as a result of these changes. In addition, 9 of the 19 FSDs said that in order to operationally test USP or SPOT, TSOs had to work overtime, switch from other functions (such as checked baggage screening), or a screening lane had to be closed. 
TSA’s Explosives Detection Improvement Task Force reported that nearly all of the FSDs at airports participating in operational testing of USP believed that the procedure had security value, though the task force also reported that 1 FSD dropped out of the operational testing program for USP due to insufficient staffing resources and another could only implement the procedure during off-peak travel periods. Additionally, most of the FSDs we interviewed stated that the changes to the prohibited items list and CAPPS rules did not free up TSOs, as intended, to better enable TSOs to take required explosives detection training. Specifically, as of August 2006, of the 19 FSDs we interviewed at airports that implemented USP and SPOT, 13 said that they did not experience more time to conduct explosives training as a result of changes to the prohibited items list and CAPPS rules. Three of the 13 FSDs said that they used overtime to enable TSOs to take the explosives training. As previously stated, the TSA Assistant Secretary stated that even if existing TSO resources are not freed up to conduct explosives detection procedures, longer lines and wait times at airport checkpoints are an acceptable consequence, considering their added security benefit. With regard to explosives training, he stated that it is acceptable for FSDs to use overtime or other methods to ensure that all TSOs participated in the required explosives detection training. He further noted that even if one screening change does not free up TSO resources, all of the changes intended to accomplish this—when taken together—should ultimately help to redirect TSO resources to where they are most needed. TSA’s efforts to add data and metrics to its tool kit for evaluating the impact of screener changes are a good way to supplement the use of professional judgment and input from other experts and sources in making decisions about modifying screening procedures. 
However, TSA’s methods for data collection and analysis could be improved. We recognize the challenges TSA faces in evaluating the effectiveness of proposed procedures, particularly when faced with time pressures to implement procedures. However, by attempting to evaluate the potential impact of screening changes on security and resource availability, TSA could help support its decision making on how best to allocate limited TSO resources and ensure that the ability to detect explosives and other high-threat objects during the passenger screening process is enhanced. While we were able to assess TSA’s reasoning behind certain proposed SOP modifications considered during our review period, our analysis was limited because TSA did not maintain complete documentation of proposed SOP modifications. Documentation of the reasoning behind decisions to implement or reject proposed modifications was maintained in various formats, including spreadsheets developed by TSA officials, internal electronic mail discussions among TSA officials, internal memorandums, briefing slides, and reports generated based on the results of operational testing. TSA did improve its documentation of the proposed SOP modifications that were considered during the latter part of our 9-month review period. Specifically, the documentation for the SOP modifications proposed under the Explosives Detection Improvement Task Force provided more details regarding the basis of the proposed modifications and the reasoning behind decisions to implement or reject the proposed modifications. Of the 92 proposed SOP modifications considered during our 9-month review period that TSA documented, TSA provided the basis for 72. More specifically, TSA documented the basis—that is, the information, experience, or event that encouraged TSA officials to propose an SOP modification—for 35 of the 48 that were implemented and for 37 of the 44 that were not implemented. 
However, TSA only documented the reasoning behind TSA senior leadership’s decisions to implement or not implement proposed SOP modifications for 43 of 92 proposed modifications. According to TSA officials, documentation that explains the basis for recommending proposed modifications can also be used to explain TSA’s reasoning behind its decisions to implement proposed modifications. However, the basis on which an SOP modification was proposed cannot always be used to explain TSA senior leadership’s decisions not to implement a proposed modification. In these cases, additional documentation would be needed to understand TSA’s decision making. However, TSA only documented the reasoning behind its decisions for about half (26 of 44) of the proposed modifications that were not implemented. TSA officials told us that they did not intend to document all SOP modifications that were proposed during our review period. Officials stated that, in some cases, the reasoning behind TSA’s decision to implement or not implement a proposed SOP modification is obvious and documentation is not needed. TSA officials acknowledged that it is beneficial to maintain documentation on the reasoning behind decisions to implement or reject proposed SOP modifications deemed significant, particularly given the organizational restructuring and staff turnover within TSA. However, TSA officials could not identify which of the 92 proposed SOP modifications they consider to be significant because they do not categorize proposed modifications in this way. Our standards for governmental internal controls and associated guidance suggest that agencies should document key decisions in a way that is complete and accurate, and that allows decisions to be traced from initiation, through processing, to after completion. These standards further state that documentation of key decisions should be readily available for review. 
Without documenting this type of information, TSA cannot always justify significant modifications to passenger checkpoint screening procedures to internal or external stakeholders, including Congress and the traveling public. In addition, considering the ongoing personnel changes, without sufficient documentation, future decision makers in TSA may not know on what basis the agency historically made decisions to develop new or revise existing screening procedures. Following our 9-month review period, TSA continued to make efforts to improve documentation of agency decision making, as evidenced by decisions regarding the August 2006 and September 2006 SOP modifications related to the screening of liquids and gels. For example, TSA senior leadership evaluated the actions taken by the agency between August 7 and August 13, 2006, in response to the alleged liquid explosives terrorist plot, in order to identify lessons learned and improve the agency’s reaction to future security incidents. As a result of this evaluation, as shown in table 4, TSA made several observations and recommendations for improving documentation of agency decision making when considering modifications to screening procedures. Documentation of TSA’s decisions regarding the September 26, 2006, modifications to the liquid screening procedures showed that TSA had begun implementing the recommendations in table 4. TSA’s documentation identified the various proposed liquid screening procedures that were considered by TSA, the benefits and drawbacks of each proposal, and the rationale behind TSA’s final decision regarding which proposal to implement. The documentation also tracked the timing of TSA’s deliberations of each of the proposed liquid screening procedures. However, the documentation of TSA’s decisions was not always presented in a standard format, nor was the origin and use of supporting documentation always identified. 
TSA officials acknowledged that documentation of the September 2006 SOP modifications could have been improved and stated that efforts to improve documentation, through implementation of the recommendations in table 4, will continue to be a high priority. TSA implemented a performance accountability system in part to strengthen its monitoring of TSO compliance with passenger checkpoint screening SOPs. Specifically, in April 2006, TSA implemented the Performance Accountability and Standards System (PASS) to assess the performance of all TSA employees, including TSOs. According to TSA officials, PASS was developed in response to our 2003 report that recommended that TSA establish a performance management system that makes meaningful distinctions in employee performance, and in response to input from TSA airport staff on how to improve passenger and checked baggage screening measures. With regard to TSOs, PASS is not intended solely to measure TSO compliance with SOPs. Rather, PASS will be used by TSA to assess agency personnel at all levels on various competencies, including training and development, readiness for duty, management skills, and technical proficiency. There are three elements of the TSO technical proficiency component of PASS that are intended to measure TSO compliance with passenger checkpoint screening procedures: (1) quarterly observations conducted by FSD management staff of TSOs’ ability to perform particular screening functions in the operational environment, such as pat-down searches and use of the hand-held metal detector, to ensure they are complying with checkpoint screening SOPs; (2) quarterly quizzes given to TSOs to assess their knowledge of the SOPs; and (3) an annual, multipart knowledge and skills assessment. 
While the first two elements are newly developed, the third element—the knowledge and skills assessment—is part of the annual TSO recertification program that is required by the Aviation and Transportation Security Act (ATSA) and has been in place since October 2003. Collectively, these three elements of PASS are intended to provide a systematic method for monitoring whether TSOs are screening passengers and their carry-on items according to SOPs. TSA’s implementation of PASS is consistent with our internal control standards, which state that agencies should ensure that policies and procedures are applied properly. The first component of PASS (quarterly observations) is conducted by screening supervisors or screening managers, using a standard checklist developed by TSA headquarters, with input from TSA airport staff. There is one checklist used for each screening function, and TSOs are evaluated on one screening function per quarter. For example, the hand-held metal detector skills observation checklist includes 37 tasks to be observed, such as whether the TSO conducted a pat-down search to resolve any suspect areas. The second component of PASS (quarterly quizzes) consists of multiple-choice questions on the standard operating procedures. For example, one of the questions on the PASS quiz is “What is the correct place to start an HHMD outline on an individual: (a) top of the head, (b) top of the feet, or (c) top of the shoulder?” The third component of PASS is the annual knowledge and skills assessment, a component of the annual recertification program that evaluates the technical proficiency of TSOs. This assessment is composed of three modules: (1) knowledge of standard operating procedures, (2) recognition of threat objects on an X-ray image, and (3) demonstration of screening functions. 
According to TSA officials, while recertification testing is not a direct measure of operational compliance with passenger checkpoint screening SOPs, recertification testing, particularly module 1 and module 3, is an indicator of whether TSOs are capable of complying with SOPs. TSA officials stated that if a TSO does not have knowledge of SOPs and if the TSO cannot demonstrate basic screening functions as outlined in the SOPs, then the TSO will likely not be able to comply with SOPs when performing in the operating environment. Table 5 provides a summary of each of these modules. FSDs we interviewed reported that they have faced resource challenges in implementing PASS. Specifically, as of July 2006, 9 of 24 FSDs we interviewed said they experienced difficulties in implementing PASS due to lack of available staff to conduct the compliance-related evaluations. TSA officials stated that they have automated many of the data-entry functions of PASS to relieve the field of the burden of manually entering this information into the PASS online system. For example, all scores related to the quarterly quiz and skill observation components are automatically uploaded, and PASS is linked to TSA’s online learning center database to eliminate the need to manually enter TSOs’ learning history. In addition, the TSA Assistant Secretary said that FSDs were given the option of delaying implementation of PASS if they were experiencing resource challenges. TSA also conducts local and national covert tests, which are used to evaluate, in part, the extent to which noncompliance with the SOPs affects TSOs’ ability to detect simulated threat items hidden in accessible property or concealed on a person. TSA first issued guidance on its local covert testing program—known as Screener Training Exercises and Assessments (STEA)—in February 2004. 
STEA testing is conducted by FSD staff at airports, who determine the frequency at which STEA tests are conducted as well as which type of STEA tests are conducted. According to the STEA results reported by TSA between March 2004 and February 2006, TSOs’ noncompliance with the SOP accounted for some of the STEA test failures. TSOs’ lack of proficiency in skills or procedures, which may affect TSOs’ ability to comply with procedures, was also cited as the reason for some of the STEA test failures. TSOs who fail STEA tests are required to take remedial training to help them address the reasons for their failure. FSDs we interviewed reported that they have faced resource challenges in conducting STEA tests. Specifically, even though all 24 FSDs we interviewed as of July 2006 said that they have conducted STEA tests, 10 of these FSDs said that the lack of available staff made it difficult to conduct these tests. When asked how they planned to address FSDs’ concerns regarding a lack of available staff to complete STEA tests, TSA headquarters officials told us that they are considering resource alternatives for implementing the STEA program, but could not provide us with the specific details of these plans. Until the resource limitations that have restricted TSA’s use of its compliance monitoring tools have been fully addressed, TSA may not have assurance that TSOs are screening passengers according to the SOP. As previously discussed, TSA’s Office of Inspection initiated its national covert testing program in September 2002. National covert tests are conducted by TSA headquarters-based inspectors who carry simulated threat objects hidden in accessible property or concealed on their person through airport checkpoints, and in cases where TSOs fail to detect threat objects, the inspectors identify the reasons for failure. 
During September 2005, TSA implemented a revised covert testing program to focus more on catastrophic threats—threats that can bring down or destroy an aircraft. According to Office of Inspection officials, TSOs may fail to detect threat objects during covert testing for various reasons, including limitations in screening technology, lack of training, limitations in the procedures TSOs must follow to conduct passenger and bag searches, and TSOs’ noncompliance with screening checkpoint SOPs. Office of Inspection officials also said that one test could be failed due to multiple factors, and that it is difficult to determine the extent to which any one factor contributed to the failure. TSOs who fail national covert tests, like those who fail STEA tests, are also required to take remedial training to help them address the reasons for failure. The alleged August 2006 terrorist plot to detonate liquid explosives onboard multiple U.S.-bound aircraft highlighted the need for TSA to continuously reassess and revise, when deemed appropriate, existing passenger checkpoint screening procedures to address threats against the commercial aviation system. In doing so, TSA faces the challenge of securing the aviation system while facilitating the free movement of people. Passenger screening procedures are only one element that affects the effectiveness and efficiency of the passenger checkpoint screening system. Securing the passenger checkpoint screening system also involves the TSOs who are responsible for conducting the screening of airline passengers and their carry-on items, and the technology used to screen passengers and their carry-on items. 
We believe that TSA has implemented a reasonable approach to modifying passenger checkpoint screening procedures through its consideration of risk factors (threat and vulnerability information), day-to-day experience of TSA airport staff, and complaints and concerns raised by passengers and by making efforts to balance security, efficiency, and customer service. We are also encouraged by TSA’s efforts to conduct operational testing and use data and metrics to support its decisions to modify screening procedures. We acknowledge the difficulties in assessing the impact of proposed screening procedures, particularly with regard to the extent to which proposed procedures would deter terrorists from attempting to carry out an attack onboard a commercial aircraft. However, there are existing methods, such as covert testing and CBP’s COMPEX—a method that evaluates the effectiveness of CBP’s procedures for selecting international airline passengers for secondary screening—that could be used by TSA to assess whether proposed screening procedures enhance detection capability. It is also important for TSA to fully assess available data to determine the extent to which TSO resources would be freed up to perform higher-priority procedures, when this is the intended effect. Without collecting the necessary data or conducting the necessary analysis that would enable the agency to assess whether proposed SOP modifications would have the intended effect, it may be difficult for TSA to determine how best to improve TSOs’ ability to detect explosives and other high-threat items and to allocate limited TSO resources. With such data and analysis, TSA would be in a better position to justify its SOP modifications and to have a better understanding of how the changes affect TSO resources. 
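A COMPEX-style assessment, as described above, judges a selection procedure by comparing its "hit rate" against a random comparison sample. The sketch below is purely illustrative: the counts and the `hit_rate` helper are hypothetical assumptions, not CBP or TSA data or methods.

```python
def hit_rate(screened, positives):
    """Fraction of screened passengers found with a prohibited item."""
    return positives / screened if screened else 0.0

# Hypothetical counts: passengers referred to secondary screening by the
# selection procedure under evaluation vs. a randomly chosen comparison sample.
selected_screened, selected_hits = 2000, 30
random_screened, random_hits = 2000, 6

selected_rate = hit_rate(selected_screened, selected_hits)
random_rate = hit_rate(random_screened, random_hits)

# A ratio well above 1.0 suggests the procedure concentrates screening
# effort on higher-risk passengers better than chance would.
effectiveness_ratio = selected_rate / random_rate
print(f"selection procedure is {effectiveness_ratio:.1f}x better than random")
```

In practice an evaluator would also test whether the observed difference could plausibly arise by chance, but the core of the method is this comparison against a random baseline.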
Additionally, because TSA did not always document the basis on which SOP modifications were proposed or the reasoning behind decisions to implement or not implement proposed modifications, TSA may not be able to justify SOP modifications to Congress and the traveling public. While we are encouraged that TSA’s documentation of its decisions regarding the SOP modifications made in response to the alleged August 2006 liquid explosives terrorist plot was improved compared to earlier documentation, it is important for TSA to continue to work to strengthen its documentation efforts. Such improvements would enable TSA officials responsible for making SOP decisions in the future to understand how significant SOP decisions were made historically, a particular concern considering the restructuring and staff turnover experienced by TSA. As shown by TSA’s covert testing results, the effectiveness of passenger checkpoint screening relies, in part, on TSOs’ compliance with screening procedures. We are, therefore, encouraged by TSA’s efforts to strengthen its monitoring of TSO compliance with passenger screening procedures. We believe that TSA has implemented a reasonable process for monitoring TSO compliance and that this effort should assist TSA in providing reasonable assurance that TSOs are screening passengers and their carry-on items according to screening procedures. Given the resource challenges FSDs identified in implementing the various methods for monitoring TSO compliance, it will be important for TSA to take steps, such as automating PASS data entry functions, to address such challenges.
To help strengthen TSA’s evaluation of proposed modifications to passenger checkpoint screening SOPs and TSA’s ability to justify its decisions to implement or not implement proposed SOP modifications, in the March 2007 report that contained sensitive security information, we recommended that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for TSA to take the following two actions: (1) when operationally testing proposed SOP modifications, develop sound evaluation methods, when possible, that can be used to assist TSA in determining whether proposed procedures would achieve their intended result, such as enhancing TSA’s ability to detect prohibited items and suspicious persons and freeing up existing TSO resources that could be used to implement proposed procedures; and (2) for future proposed SOP modifications that TSA senior leadership determines are significant, generate and maintain documentation that includes, at minimum, the source, intended purpose, and reasoning behind decisions to implement or not implement proposed modifications. On March 6, 2007, we received written comments on the draft report, which are reproduced in full in appendix III. DHS generally concurred with our recommendations and outlined actions TSA plans to take to implement the recommendations. DHS stated that it appreciates GAO’s conclusion that TSA has implemented a reasonable approach to modifying passenger checkpoint screening procedures through its assessment of risk factors, the expertise of TSA employees, and input from the traveling public and other stakeholders, as well as TSA’s efforts to balance security, operational efficiency, and customer service while evaluating proposed changes.
With regard to our recommendation to develop sound evaluation methods, when possible, to help determine whether proposed SOP modifications would achieve their intended result, DHS stated that TSA plans to make better use of generally accepted research design principles and techniques when operationally testing proposed SOP modifications. For example, TSA will consider using random selection, representative sampling, and control groups in order to isolate the impact of proposed SOP modifications from the impact of other variables. DHS also stated that TSA’s Office of Security Operations is working with subject matter experts to ensure that operational tests are well designed and executed, and produce results that are scientifically valid and reliable. As discussed in this report, employing sound evaluation methods for operationally testing proposed SOP modifications will enable TSA to have better assurance that new passenger checkpoint screening procedures will achieve their intended purpose, which may include improved allocation of limited TSO resources and enhanced detection of explosives and other high-threat objects during the passenger screening process. However, DHS stated, and we agree, that the need to make immediate SOP modifications in response to imminent terrorist threats may preclude operational testing of some proposed modifications. Concerning our recommendation regarding improved documentation of proposed SOP modifications, DHS stated that TSA intends to document the source, intent, and reasoning behind decisions to implement or reject proposed SOP modifications that TSA senior leadership deems significant. Documenting this type of information will enable TSA to justify significant modifications to passenger checkpoint screening procedures to internal and external stakeholders, including Congress and the traveling public.
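The research design techniques DHS cites above, random selection and control groups, can be sketched in a few lines. Everything below is hypothetical (the lane names, timing figures, and helper functions are illustrative assumptions, not TSA data or tools); the point is only that random assignment lets an evaluator attribute a change in average search time to the new procedure rather than to pre-existing differences among checkpoints.

```python
import random
import statistics

def assign_groups(units, seed):
    """Randomly split experimental units (e.g., checkpoint lanes) into a
    treatment group (pilots the proposed procedure) and a control group
    (keeps the current procedure)."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_difference(treatment_obs, control_obs):
    """Estimated effect: difference in mean outcome (e.g., seconds per
    carry-on bag search), treatment minus control."""
    return statistics.mean(treatment_obs) - statistics.mean(control_obs)

# Hypothetical pilot: ten lanes, illustrative per-bag search times in seconds.
lanes = [f"lane-{i}" for i in range(10)]
treatment, control = assign_groups(lanes, seed=7)
treatment_times = [38.0, 41.5, 40.0, 39.5, 42.0]  # lanes using the new procedure
control_times = [45.0, 44.5, 47.0, 46.5, 43.0]    # lanes using the current procedure
effect = mean_difference(treatment_times, control_times)
print(f"{len(treatment)} treatment lanes vs. {len(control)} control lanes")
print(f"estimated change in mean search time: {effect:+.1f} seconds")
```

An operational test would also need enough lanes (or test days) for the comparison to be statistically meaningful, which is where representative sampling enters the design.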
In addition, considering the ongoing personnel changes TSA has experienced, such documentation should enable future decision makers in TSA to understand on what basis the agency historically made decisions to develop new or revise existing screening procedures. In addition to commenting on our recommendations, DHS provided comments on some of our findings, which we considered and incorporated in the report where appropriate. One of DHS’s comments pertained to TSA’s evaluation of the prohibited items list change. Specifically, while TSA agrees that the agency could have conducted a more methodologically sound evaluation of the impact of the prohibited items list change, TSA disagrees with our assessment that the prohibited items list change may not have significantly contributed to TSA’s efforts to free up TSO resources to focus on detection of high-threat items, such as explosives. As we identified in this report, based on interviews with FSDs, airport visits to determine the types of items confiscated at checkpoints, and a study to determine the amount of time taken to conduct bag searches and the number of sharp objects collected as a result of these searches, TSA concluded that the prohibited items list change would free up TSO resources. DHS also stated that interviews with TSOs following the prohibited items list change confirmed that the change had freed up TSO resources. However, based on our analysis of the data TSA collected both prior to and following the prohibited items list change, we continue to believe that TSA did not conduct the necessary analysis to determine the extent to which the removal of small scissors and tools from the prohibited items list would free up TSO resources. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 21 days from the date of this report.
At that time, we will send copies of the report to the Secretary of the Department of Homeland Security, the TSA Assistant Secretary, and interested congressional committees as appropriate. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or berrickc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV. To assess the Transportation Security Administration’s (TSA) process for modifying passenger checkpoint screening procedures and how TSA monitors compliance with these procedures, we addressed the following questions: (1) How and on what basis did TSA modify passenger screening procedures and what factors guided the decisions to do so? (2) How does TSA determine whether TSOs are complying with the standard procedures for screening passengers and their carry-on items? To address how TSA modified passenger screening procedures and what factors guided the decisions to do so, we obtained and analyzed documentation of proposed standard operating procedures (SOP) changes considered between April 2005 and September 2005, as well as threat assessments and operational studies that supported SOP modifications. The documentation included a list of proposed changes considered, as well as the source, the intended purpose, and in some cases the basis for recommending the SOP modification—that is, the information, experience, or event that encouraged TSA officials to propose the modifications—and the reasoning behind decisions to implement or reject proposed SOP modifications. We also obtained documentation of the proposed SOP changes considered by TSA’s Explosives Detection Improvement Task Force, which was the deliberating body for proposed changes that were considered between October 2005 and December 2005. 
We also reviewed and analyzed similar documentation for proposed SOP modifications considered between August 2006 and November 2006 in response to the alleged terrorist plot to detonate liquid explosives onboard multiple aircraft en route from the United Kingdom to the United States. We included modifications to passenger checkpoint screening procedures related to this particular event because they provided the most recent information available on TSA’s approach to modifying screening procedures in response to an immediate perceived threat to civil aviation. The documentation included notes from internal meetings, slides for internal and external briefings on proposed SOP modifications, data on customer complaints and screening efficiency, and the results of liquid explosives testing conducted by the Department of Homeland Security (DHS) Science and Technology Directorate and the Federal Bureau of Investigation (FBI). We also obtained each revision of the passenger checkpoint screening SOP that was generated between April 2005 and December 2005 and between August 2006 and November 2006, as well as accompanying documentation that highlighted all of the changes made in each revision. In addition, we met with TSA headquarters officials who were involved in the process for determining whether proposed passenger checkpoint screening procedures should be implemented. We also met with officials in the DHS Science and Technology Directorate as well as the FBI to discuss the methodology and results of their liquid explosives tests, which were used to support TSA’s decisions to modify the SOP in September 2006. We also met with TSA Office of Inspection and DHS Office of Inspector General staff to discuss their covert testing at passenger checkpoints and the recommended changes to the passenger checkpoint screening SOP that were generated based on testing results.
We also obtained and analyzed data and information collected by TSA on the proposed procedures that were evaluated in the operational environment. In addition, we met with or conducted phone interviews with Federal Security Directors (FSD) and their management staff, including Assistant FSDs and Screening Managers, and Transportation Security Officers (TSO) with passenger screening responsibilities, at 25 commercial airports to gain their perspectives on TSA’s approach to revising the passenger checkpoint screening SOP. We also met with officials from four aviation associations—the American Association of Airport Executives, Airports Council International, the Air Transport Association, and the Regional Airline Association—to gain their perspectives on this objective. Finally, we met with five aviation security experts to obtain their views on methods for assessing the impact of proposed passenger checkpoint screening procedures. We selected these experts based on their depth of experience in the field of aviation security, their employment history, and their recognition in the aviation security community. However, the views of these experts may not necessarily represent the general view of other experts in the field of aviation security. We compared TSA’s approach to revising its passenger checkpoint screening SOP with the Comptroller General’s standards for internal control in the federal government and risk management guidance. To address how TSA determines whether TSOs are complying with the standard procedures for screening passengers and their carry-on items, we obtained documentation of compliance-related initiatives, including guidance, checklists, and SOP quizzes used to assess TSO compliance under the Performance Accountability and Standards System (PASS), and guidance provided to FSDs for developing local compliance audit programs.
We also obtained the fiscal year 2005 recertification and Screener Training Exercises and Assessments (STEA) test results, which were used, in part, to assess TSO compliance with and knowledge of the passenger checkpoint screening SOP. In addition, we reviewed the results of covert testing conducted by TSA’s Office of Inspection, which were also used, in part, to assess TSO compliance with the passenger checkpoint screening SOP. We assessed the reliability of the compliance-related data we received from TSA, and found the data to be sufficiently reliable for our purposes. In addition, we interviewed TSA headquarters officials who were responsible for overseeing efforts to monitor TSO compliance with standard operating procedures. This included officials in the Office of Security Operations, the Office of Human Capital, and the Office of Operational Process and Technology. Our audit work also included visits to or phone conferences with 25 airports, where we interviewed FSDs, members of their management teams, and Transportation Security Officers with passenger screening responsibilities. However, the perspectives of these FSDs and their staff cannot be generalized across all airports. In July 2006, we submitted two sets of follow-up questions to FSD staff, related to their experiences with implementing PASS and STEA tests. We also obtained documentation of local compliance audit programs from the FSD staff at several of these airports. We compared TSA’s approach for monitoring TSO compliance with the Comptroller General’s standards for internal control in the federal government. As previously mentioned, we conducted site visits and/or phone interviews at 25 airports (8 category X airports, 7 category I airports, 4 category II airports, 4 category III airports, and 2 category IV airports) to discuss issues related to TSA’s approach to revising the passenger checkpoint screening SOP, and the agency’s approach to monitoring TSO compliance with the SOP.
We visited 7 of these airports during the design phase of our study. These airports were selected based on variations in size and geographic location, and whether they were operationally testing any proposed passenger checkpoint screening procedures or passenger screening technology. We also selected 2 airports that participated in the Screening Partnership Program. After visiting the 7 airports during the design phase of our review, we selected an additional 15 airports to visit based on variations in size, geographic distribution, and performance on compliance-related assessments. Specifically, we obtained and analyzed fiscal year 2005 Screener Training Exercise and Assessments results and fiscal year 2005 recertification testing results to identify airports across a range of STEA and recertification scores. We also visited 3 other airports that operationally tested the proposed Unpredictable Screening Process (USP) and the Screening Passengers by Observation Technique (SPOT) procedure. In July 2006, we received answers from 19 FSDs to follow-up questions on their experiences with pilot testing of SPOT or USP. These included 14 FSDs who were not part of our initial rounds of interviews; 9 of the 14 were from airports that participated in SPOT pilots, and the remaining 5 were from airports that participated in USP pilots. We conducted our work from March 2005 through March 2007 in accordance with generally accepted government auditing standards. Of the 92 proposed screening changes considered by TSA between April 2005 and December 2005, 63 were submitted by TSA field staff, including Federal Security Directors and Transportation Security Officers. Thirty proposed screening changes were submitted by TSA headquarters officials.
Last, TSA senior leadership, such as the TSA Assistant Secretary, recommended 5 of the 92 proposed screening changes considered during this time period. One SOP modification was also proposed through a congressional inquiry. TSA’s solicitation of input from both field and headquarters officials regarding changes to the passenger checkpoint screening SOP was consistent with internal control standards, which suggest that there be mechanisms in place for employees to recommend improvements in operations. The FSDs with whom we met most frequently identified periodic conference calls with the Assistant Secretary, the SOP Question and Answer mailbox, or electronic mail to Security Operations officials as the mechanisms by which they recommended changes to the SOP. The TSOs with whom we met identified their chain of command and the SOP Question and Answer mailbox as the primary mechanisms by which they submitted suggestions for new or revised procedures. According to TSA officials, through the SOP mailbox, FSDs and their staff, including TSOs, submit suggestions, questions, or comments to TSA’s Security Operations division via electronic mail, either directly or through their supervisors. Submissions are then compiled and reviewed by a single Security Operations official, who generates responses to the questions that have clear answers. However, for submissions for which the appropriate response is not obvious or for submissions that include a suggestion to revise the SOP, this official forwards the submissions to other Security Operations officials for further deliberation. SOP mailbox responses are provided to all TSA airport officials. If TSA headquarters revises a screening procedure based on a mailbox submission, the revision is noted in the mailbox response.
Thirty of the screening changes considered by TSA between April 2005 and December 2005 were proposed by TSA headquarters officials, including Security Operations officials, who are responsible for overseeing implementation of checkpoint screening. According to Security Operations officials, they recommended changes to checkpoint screening procedures based on communications with TSA field officials and airport optimization reviews. Security Operations officials conduct optimization reviews to identify best practices and deficiencies in the checkpoint screening and checked baggage screening processes. As part of these reviews, Security Operations officials may also assess screening efficiency and whether TSOs are implementing screening procedures correctly. Other TSA headquarters divisions also suggested changes to passenger checkpoint screening procedures. For example, the Office of Law Enforcement recommended that there be an alternative screening procedure for law enforcement officials who are escorting prisoners or protectees. Previously, all armed law enforcement officers were required to sign a logbook at the screening checkpoint, prior to entering the sterile area of the airport. Officials in the Office of Passengers with Disabilities also recommended changes to checkpoint screening procedures. For example, in the interest of disabled passengers, they suggested that TSOs be required to refasten all wheelchair straps and buckles undone during the screening process. Last, as noted above, TSA senior leadership suggested 5 of the 92 procedural changes considered by TSA between April 2005 and December 2005. For example, senior leadership proposed a procedure that would allow TSOs to conduct the pat-down procedure on passengers of the opposite gender at airports with a disproportionate ratio of male to female TSOs. In addition to the person named above, Maria Strudwick, Assistant Director; David Alexander; Christopher W.
Backley; Amy Bernstein; Kristy Brown; Yvette Gutierrez-Thomas; Katherine N. Haeberle; Robert D. Herring; Richard Hung; Christopher Jones; Stanley Kostyla; and Laina Poon made key contributions to this report. Aviation Security: TSA's Staffing Allocation Model Is Useful for Allocating Staff among Airports, but Its Assumptions Should Be Systematically Reassessed. GAO-07-299. Washington, D.C.: February 28, 2007. Aviation Security: Progress Made in Systematic Planning to Guide Key Investment Decisions, but More Work Remains. GAO-07-448T. Washington, D.C.: February 13, 2007. Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. Washington, D.C.: January 24, 2007. Aviation Security: TSA Oversight of Checked Baggage Screening Procedures Could Be Strengthened. GAO-06-869. Washington, D.C.: July 28, 2006. Aviation Security: TSA Has Strengthened Efforts to Plan for the Optimal Deployment of Checked Baggage Screening Systems, but Funding Uncertainties Remain. GAO-06-875T. Washington, D.C.: June 29, 2006. Aviation Security: Management Challenges Remain for the Transportation Security Administration’s Secure Flight Program. GAO-06-864T. Washington, D.C.: June 14, 2006. Aviation Security: Further Study of Safety and Effectiveness and Better Management Controls Needed if Air Carriers Resume Interest in Deploying Less-than-Lethal Weapons. GAO-06-475. Washington, D.C.: May 26, 2006. Aviation Security: Enhancements Made in Passenger and Checked Baggage Screening, but Challenges Remain. GAO-06-371T. Washington, D.C.: April 4, 2006. Aviation Security: Transportation Security Administration Has Made Progress in Managing a Federal Security Workforce and Ensuring Security at U.S. Airports, but Challenges Remain. GAO-06-597T. Washington, D.C.: April 4, 2006. Aviation Security: Progress Made to Set Up Program Using Private-Sector Airport Screeners, but More Work Remains.
GAO-06-166. Washington, D.C.: March 31, 2006. Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration’s Secure Flight Program. GAO-06-374T. Washington, D.C.: February 9, 2006. Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls. GAO-06-203. Washington, D.C.: November 28, 2005. Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-06-76. Washington, D.C.: October 17, 2005. Transportation Security Administration: More Clarity on the Authority of Federal Security Directors Is Needed. GAO-05-935. Washington, D.C.: September 23, 2005. Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed. GAO-05-781. Washington, D.C.: September 6, 2005. Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information during Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005. Aviation Security: Better Planning Needed to Optimize Deployment of Checked Baggage Screening Systems. GAO-05-896T. Washington, D.C.: July 13, 2005. Aviation Security: Screener Training and Performance Measurement Strengthened, but More Work Remains. GAO-05-457. Washington, D.C.: May 2, 2005. Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005. Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems. GAO-05-365. Washington, D.C.: March 15, 2005. Aviation Security: Measures for Testing the Effect of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: February 23, 2005.
Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005. Aviation Security: Preliminary Observations on TSA’s Progress to Allow Airports to Use Private Passenger and Baggage Screening Services. GAO-05-126. Washington, D.C.: November 19, 2004. General Aviation Security: Increased Federal Oversight Is Needed, but Continued Partnership with the Private Sector Is Critical to Long-Term Success. GAO-05-144. Washington, D.C.: November 10, 2004. Aviation Security: Further Steps Needed to Strengthen the Security of Commercial Airport Perimeters and Access Controls. GAO-04-728. Washington, D.C.: June 4, 2004. Transportation Security Administration: High-Level Attention Needed to Strengthen Acquisition Function. GAO-04-544. Washington, D.C.: May 28, 2004. Aviation Security: Challenges in Using Biometric Technologies. GAO-04-785T. Washington, D.C.: May 19, 2004. Nonproliferation: Further Improvements Needed in U.S. Efforts to Counter Threats from Man-Portable Air Defense Systems. GAO-04-519. Washington, D.C.: May 13, 2004. Aviation Security: Private Screening Contractors Have Little Flexibility to Implement Innovative Approaches. GAO-04-505T. Washington, D.C.: April 22, 2004. Aviation Security: Improvement Still Needed in Federal Aviation Security Efforts. GAO-04-592T. Washington, D.C.: March 30, 2004. Aviation Security: Challenges Delay Implementation of Computer-Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004. Aviation Security: Factors Could Limit the Effectiveness of the Transportation Security Administration’s Efforts to Secure Aerial Advertising Operations. GAO-04-499R. Washington, D.C.: March 5, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 13, 2004.
Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: February 12, 2004. The Department of Homeland Security Needs to Fully Adopt a Knowledge-based Approach to Its Counter-MANPADS Development Program. GAO-04-341R. Washington, D.C.: January 30, 2004. Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: November 20, 2003. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: November 5, 2003. Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: September 24, 2003. Aviation Security: Progress since September 11, 2001, and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: September 9, 2003. Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: September 9, 2003. Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003. Federal Aviation Administration: Reauthorization Provides Opportunities to Address Key Agency Challenges. GAO-03-653T. Washington, D.C.: April 10, 2003. Transportation Security: Post-September 11th Initiatives and Long- Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Airport Finance: Past Funding Levels May Not Be Sufficient to Cover Airports’ Planned Capital Development. GAO-03-497T. Washington, D.C.: February 25, 2003. Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: January 17, 2003. Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach. 
GAO-03-22. Washington, D.C.: January 10, 2003. Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-344. Washington, D.C.: December 20, 2002. Aviation Security: Registered Traveler Program Policy and Implementation Issues. GAO-03-253. Washington, D.C.: November 22, 2002. Airport Finance: Using Airport Grant Funds for Security Projects Has Affected Some Development Projects. GAO-03-27. Washington, D.C.: October 15, 2002. Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002. Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002. Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002. Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations. GAO-01-1171T. Washington, D.C.: September 25, 2001. Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities. GAO-01-1165T. Washington, D.C.: September 21, 2001. Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001. Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports. GAO-01-1162T. Washington, D.C.: September 20, 2001. Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security. GAO-01-1166T. Washington, D.C.: September 20, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
The Transportation Security Administration's (TSA) most visible layer of commercial aviation security is the screening of airline passengers at airport checkpoints, where travelers and their carry-on items are screened for explosives and other dangerous items by transportation security officers (TSO). Several revisions to checkpoint screening procedures have been scrutinized and questioned by the traveling public and Congress in recent years. For this review, GAO evaluated (1) TSA's decisions to modify passenger screening procedures between April 2005 and December 2005 and in response to the alleged August 2006 liquid explosives terrorist plot, and (2) how TSA monitored TSO compliance with passenger screening procedures. To conduct this work, GAO reviewed TSA documents, interviewed TSA officials and aviation security experts, and visited 25 airports of varying sizes and locations. Between April 2005 and December 2005, modifications to passenger checkpoint screening standard operating procedures (SOP) were proposed for a variety of reasons, and while a majority of the proposed modifications--48 of 92--were ultimately implemented at airports, TSA's methods for evaluating and documenting them could be improved. SOP modifications were proposed based on the professional judgment of TSA senior-level officials and program-level staff. TSA considered the daily experiences of airport staff, complaints and concerns raised by the traveling public, and analysis of risks to the aviation system when proposing SOP modifications. TSA also made efforts to balance the impact on security, efficiency, and customer service when deciding which proposed modifications to implement, as in the case of the SOP changes made in response to the alleged August 2006 liquid explosives terrorist plot. In some cases, TSA tested proposed modifications at selected airports to help determine whether the changes would achieve their intended purpose.
However, TSA's data collection and analyses could be improved to help TSA determine whether proposed procedures that are operationally tested would achieve their intended purpose. For example, TSA officials decided to allow passengers to carry small scissors and tools onto aircraft based on their review of threat information, which indicated that these items do not pose a high risk to the aviation system. TSA did not, however, conduct the necessary analysis of data it collected to assess whether this screening change would free up TSOs to focus on screening for high-risk threats, as intended. TSA officials acknowledged the importance of evaluating whether proposed screening procedures would achieve their intended purpose, but cited difficulties in doing so, including time pressures to implement needed security measures quickly. Finally, TSA's documentation on proposed modifications to screening procedures was not complete. TSA documented the basis--that is, the information, experience, or event that prompted TSA officials to propose the modifications--for 72 of the 92 proposed modifications. In addition, TSA documented the reasoning behind its decisions for over half (26 of 44) of the proposed modifications that were not implemented. Without more complete documentation, TSA may not be able to justify key modifications to passenger screening procedures to Congress and the traveling public. TSA monitors TSO compliance with passenger checkpoint screening procedures through its performance accountability and standards system and through covert testing. Compliance assessments include quarterly observations of TSOs' ability to perform particular screening functions in the operating environment, quarterly quizzes to assess TSOs' knowledge of procedures, and an annual knowledge and skills assessment.
TSA uses covert tests to evaluate, in part, the extent to which TSOs' noncompliance with procedures affects their ability to detect simulated threat items hidden in accessible property or concealed on a person. TSA airport officials have experienced resource challenges in implementing these compliance monitoring methods. TSA headquarters officials stated that they are taking steps to address these challenges.
DOE and the private sector are involved in hundreds of cost-shared projects aimed at developing a broad spectrum of cost-effective, energy-efficiency technologies that protect the environment; support the nation’s economic competitiveness; and promote the increased use of oil, gas, coal, nuclear, and renewable energy resources. Universities and national laboratories also participate in many of these government-industry collaborations. Most of the projects that involve technology development beyond basic research are funded under cost-shared contracts, cooperative agreements, and cooperative research and development agreements (CRADAs). The offices in our review are funding more than 500 projects under contracts and cooperative agreements with industry that are expected to cost more than $15 billion by the time they are completed. DOE plans to fund about $8 billion and industry the balance. The four programs that require repayment cover about 60 projects. The other programs cover more than 450 projects. Although DOE participates with the private sector in many cost-shared technology development programs, only four require repayment of the federal investment if the technology is ultimately commercialized. The mechanisms used for repayment are similar in that they generally require a portion of royalties and fees from licensing technologies and revenues from commercial sales. Also, three programs provide for up to a 20-year repayment period and two allow flexibility on when repayment begins. A major difference in the programs is that one program provides for up to 150-percent repayment, while the other programs limit repayment to 100 percent. The Clean Coal Technology Program is a partnership between the federal government and industry for sharing the costs of commercial-scale projects that demonstrate innovative technologies for using coal in a more environmentally sound, efficient, and economical manner. 
DOE is investing more than $2.2 billion in this program through the year 2003. The funds have been committed under cooperative agreements to more than 40 active and completed projects that were selected in five separate rounds of nationwide competitions for project proposals conducted from 1986 to 1993. DOE funds up to 50 percent of a project’s cost, and the nonfederal participants fund the balance. Most of the projects are currently in the design, construction, or operation phases. In 1985, when the program began, DOE made a programmatic decision in consultation with industry and the Congress to require the participants in the clean coal projects to repay the federal investment in projects within 20 years after a project ends if the technology is commercialized. For projects selected in the first round of competition, repayment was to come from (1) any net revenues generated from continued project operations and (2) revenues accruing from the commercial sale, lease, manufacture, licensing, or use of the technology. During rounds two and three, DOE changed the repayment provisions to respond to the industry’s concerns and lessen the likelihood that the repayment requirements could hamper the project participants’ competitiveness. Among other things, DOE (1) excluded net operating revenues as a required source of repayment, (2) reduced the percentage of revenues from technology sales that are subject to repayment, (3) excluded foreign sales from repayment, (4) eliminated an inflation adjustment requirement, (5) allowed a grace period before repayment begins to facilitate the technology’s initial market penetration, and (6) provided for a waiver from repayment altogether if repayment would place the participants at a competitive disadvantage in the marketplace. According to DOE officials, three clean coal projects with a federal investment of about $36.2 million have progressed to the repayment phase. 
As of March 1996, DOE had received payments totaling about $377,000 for these projects. Under the Metals Initiative Program, DOE shares in the cost of research and development projects intended to increase the energy efficiency and enhance the competitiveness of the domestic steel, aluminum, and copper industries. The projects are carried out under cooperative agreements. Industry is required to provide at least 30 percent of the funding, and DOE provides the balance. Industry participants establish a holding company for each project for the purpose of holding patents, licensing technology, tracking technology sales and use, and collecting and distributing licensing fees and other income. Appropriations laws require repayment of up to one and one-half times (150 percent of) the total federal investment from the proceeds of the commercial sale, lease, manufacture, or use of technologies developed under the program. The Metals Initiative Program is the only program whose required repayment can exceed DOE’s investment. According to DOE, repayment applies to all sales—domestic or foreign. As of September 1995, DOE had spent or obligated about $89 million for projects under this program. Although some patent applications have been filed and some licensing agreements have been negotiated, none of the projects have begun repayment yet, according to DOE officials. In early 1991, Chrysler, Ford, and General Motors established the United States Advanced Battery Consortium to jointly sponsor research and testing to develop advanced batteries for electric vehicles. Later that year, DOE and representatives of the utility industry agreed to work together with the consortium under a cost-sharing arrangement. DOE is providing 50 percent of the funding, and the other 50 percent is being provided by the participating automobile companies, utilities, and battery developers.
According to DOE, current plans call for federal contributions amounting to about $103 million for funding this research through 1996. DOE expects to approve additional funding for the continuation of the research after the consortium submits a proposal identifying its funding needs. As discussed in our August 1995 report, DOE is entitled to repayment of its financial contributions to the consortium if the advanced batteries are commercialized. Repayment is recommended in a Senate appropriations report. Under the terms of the cooperative agreement between DOE and the consortium, DOE’s investment is to be repaid based on (1) the revenue received by the consortium or its battery developers from the licensing of patents to third-party domestic or foreign battery manufacturers and (2) any payments to the consortium or its contractors upon the liquidation or winding up of its business. In addition, one of the consortium’s battery development contracts provides for repayment to DOE based on revenues from the domestic or foreign sale of batteries by the developer. The repayment period ends after DOE’s total contribution has been repaid, or 20 years, whichever occurs first. The repayment obligation can be waived, in whole or in part, if DOE determines that repayment places the consortium or its battery developers at a competitive disadvantage. Three of the eight battery development contracts provide that repayment will not begin until battery sales by the developer and/or licensee reach a specified level. The reactor program focuses on making standardized advanced light water reactors available for orders during the 1990s to help meet the projected demand for new electrical generation capacity by 2010. DOE provides up to 50 percent of the funding for projects carried out with industry, and industry provides the balance. 
According to DOE, when this program began in 1986, repayment was not considered because the main objective was to reduce the licensing and regulatory impediments that were contributing to extensive delays in the construction and permitting of nuclear power generating facilities. The objective later evolved into certification of advanced light water reactor designs to help restore the industry’s confidence and reduce the financial risks in acquiring new nuclear plants at the appropriate time in the future. Repayment provisions covering domestic or foreign sales have been incorporated into two programs that are part of the Advanced Light Water Reactor Program. In one of these programs—the advanced reactor design certification program—the Congress provided $14 million in additional funding for a specific contract, and an appropriations report recommended that this additional federal cost should be repaid from royalties on the first commercial sale of the reactor design. DOE will require repayment of this amount. DOE subsequently agreed to provide another $11 million in additional funding and may require that this amount be repaid, as well as any additional future funding provided under this contract. DOE’s original contractual commitment of about $50 million is not subject to repayment. According to DOE officials, the Department also may provide for the recovery of any federal contributions in excess of the original $50 million commitment under another contract in the advanced reactor design certification program. The other program—the “first-of-a-kind” engineering program—involves a cooperative agreement between DOE and the Advanced Reactor Corporation. According to DOE, in the development of this program, the participating electric generating utilities made a major commitment to provide cost-share funding and overall direction and technical advice to achieve a plant design that they would be willing to acquire at some future time.
Because of their direct, substantial contributions to the plant designs, the utilities require reactor vendors to pay them royalties from the sale of the plant designs or technology to other customers. Since the utilities were going to require royalty payments, DOE decided to also require royalties proportionate to its share of the project’s total costs. The cooperative agreement requires that DOE be repaid up to its total investment from the revenues received by the Advanced Reactor Corporation from the sale or use of the plant designs or technology developed under this program. The repayment period runs up to 20 years, or until the federal investment, which is expected to total $100 million, is repaid. A repayment policy provides both advantages and disadvantages. The main advantage is the recovery of the federal investment. We believe that many of the disadvantages and arguments against repayment can be mitigated by structuring a flexible policy that provides criteria and factors to consider in determining the application of repayment to individual programs or projects. In 1991, DOE considered having a Department-wide policy to recover its investment in technology development projects and even developed a draft order with criteria and guidelines for determining when repayment is appropriate. But due to substantial opposition within the Department and the departure of the Deputy Secretary who was the primary supporter of this concept, the order was never implemented. The primary advantage of a repayment policy is that the government could recover some of its investment in the development of technologies. According to several DOE officials, a repayment requirement could also provide more assurance that the project proposals are sound and economically viable by discouraging proposals that are too marginal financially for their sponsors to commit to repayment. 
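The repayment terms described above for the first-of-a-kind engineering program—royalties tied to revenues, repayment capped at the federal investment, and a term of up to 20 years—can be sketched as a simple yearly schedule. Everything specific in the example below (the royalty rate, sales figures, and grace period) is a hypothetical assumption for illustration, not a term of any actual DOE agreement; only the $100 million federal investment and 20-year term come from the program described above.

```python
# Illustrative sketch of a royalty-based repayment schedule of the kind the
# report describes: payments run for up to 20 years or until a cap tied to
# the federal investment is reached, with an optional grace period before
# repayment begins. Specific figures below are hypothetical.

def repayment_schedule(federal_investment, yearly_sales, royalty_rate,
                       grace_years=0, max_years=20, cap_multiple=1.0):
    """Return the payment owed each year until the cap or term is reached."""
    cap = federal_investment * cap_multiple  # 1.0 for most programs; the
                                             # Metals Initiative allows 1.5
    repaid = 0.0
    payments = []
    for year, sales in enumerate(yearly_sales[:max_years], start=1):
        if year <= grace_years:
            payment = 0.0  # grace period: no repayment yet
        else:
            payment = min(sales * royalty_rate, cap - repaid)  # never exceed cap
        repaid += payment
        payments.append(payment)
        if repaid >= cap:
            break
    return payments

# Hypothetical example: $100 million federal share, a 3-percent royalty on
# $200 million in yearly sales, and a 2-year grace period.
payments = repayment_schedule(100e6, [200e6] * 20, 0.03, grace_years=2)
```

The cap_multiple parameter reflects that three of the four programs limit repayment to 100 percent of the federal investment while the Metals Initiative provides for up to 150 percent, and the grace period reflects the flexibility on when repayment begins that two of the programs allow.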
As previously mentioned, the DOE offices in our review are funding projects with industry that are expected to cost more than $15 billion by the time they are completed. DOE’s share of the planned funding is expected to total about $8 billion, and the nonfederal share about $7 billion, as shown in table 1. About $2.5 billion of the $8 billion is subject to repayment. Except for the projects within the four programs that already require repayment, it is important to note that, for a variety of reasons discussed later, not all of the projects contained in the table would lend themselves to repayment. In addition, unless follow-on projects are undertaken, requiring new or amended contracts or cooperative agreements, only new projects not yet negotiated with industry would be appropriate for repayment. While the potential repayment is difficult to quantify, DOE documents developed when the 1991 draft repayment policy statement was under consideration indicated that the potential is substantial. To illustrate the potential for repayment, we subtracted the approximately $2.5 billion in federal funding included in table 1 for projects already covered by repayment provisions from the approximately $8 billion total planned federal funding. The remaining cooperative agreements and contracts amount to about $5.5 billion. If one assumes that only 50 percent of this amount is dedicated to projects that would lend themselves to repayment, and that about 15 percent of research and development funds result in commercialized technologies (which DOE officials say is about average), then about $400 million could come back to the federal government in the form of repayment. 
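The arithmetic behind this illustration can be reproduced directly from the figures above; the 50-percent and 15-percent figures are the stated assumptions, not measured values.

```python
# Back-of-envelope reproduction of the report's illustration of potential
# repayment. The 50-percent and 15-percent figures are the report's stated
# assumptions.

total_federal_funding = 8.0e9  # planned federal share of cost-shared projects
already_covered = 2.5e9        # federal funding already subject to repayment
remaining = total_federal_funding - already_covered   # about $5.5 billion

share_suitable = 0.50   # assumed share of remaining funds in projects that
                        # would lend themselves to repayment
commercialized = 0.15   # share of R&D funds resulting in commercialized
                        # technologies (about average, per DOE officials)

potential = remaining * share_suitable * commercialized
print(f"${potential / 1e6:.0f} million")  # prints "$412 million", i.e., about $400 million
```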
When we discussed technology development programs and projects with DOE’s Deputy Assistant Secretaries and other DOE officials, many of them agreed that certain types of projects might be appropriate candidates for repayment of the federal investment if the concept were employed at the beginning of the projects or if new projects are undertaken in the future. The officials generally indicated that repayment should be more applicable to projects with a large federal investment where the federal contribution is easily identified, projects involving technologies that are close to commercialization, and projects in which the federal investment serves to reduce the costs and risks of providing the technology to potential users. The officials also said that technologies that have a large potential market and technologies that are likely to be commercialized in foreign countries are good candidates for requiring repayment of the federal investment. Some officials said that repayment should be directed at projects that have large, well-financed industry teams. DOE officials indicated, for example, that the Reservoir Class Field Demonstration Program might be appropriate for repayment if future projects are undertaken. This program shares costs for demonstrations of existing and new technologies for increasing production from U.S. oil fields that might otherwise be prematurely abandoned. The program operates on the premise that the characteristics of some oil formations are similar, and when small and major oil producers demonstrate technologies and processes that are successful in increasing production, other oil field operators may want to try them in their fields. Three rounds of demonstration projects have been undertaken, and more may be undertaken if funding becomes available. DOE has committed about $100 million to the 29 projects that are currently in the program. According to DOE, the projects may take from 3 to 7 years to complete.
The Advanced Turbine Systems Program is another program that DOE officials said might be appropriate for repayment if new projects are begun or current projects are amended. This program is intended to develop more efficient, advanced turbine systems for both utility and industrial electric power generation. According to DOE, the program is expected to cost about $700 million by the time it is completed in the year 2000. Depending on appropriations, DOE is planning to fund about $450 million of the total estimated cost, and industry participants are expected to fund the balance. New cost-shared technology demonstration and commercial application programs authorized by the Energy Policy Act of 1992 would also be appropriate candidates for repayment if they are funded. In fact, the act requires DOE to establish procedures and criteria for the repayment of the federal investment in several authorized coal projects, but they have not been funded. Many of the DOE officials we spoke with generally indicated a willingness to consider repayment, but they said that flexibility should exist to be able to structure or waive repayment to meet programmatic needs. Some officials believed that repayment may not be suitable for grants, universities, and small businesses or for projects that are directed at basic research. Others indicated that repayment should be waived if the federal investment is considered disproportionately small in comparison with the potential costs of administering the repayment process. Some DOE officials said that a stronger argument can be made for repayment if the technology developed is likely to be commercialized outside of the United States. Appendix I provides a more detailed discussion of the types of projects that DOE officials believe would be the most appropriate or suitable for repaying the federal investment. 
DOE officials we spoke with and DOE’s 1991 draft document on repayment policy also pointed out several disadvantages to the government or industry participants that would need to be addressed. These disadvantages, along with potential ways to structure repayment so as to mitigate the disadvantages, are discussed below. According to DOE, most technologies funded by the Department require further development and/or funding to bring them to the marketplace after DOE’s participation is complete. Some DOE officials believe that repayment could lower industry’s rate of return on investment and discourage industry, especially small businesses, from commercializing such technologies. The officials also believe that repayment might discourage industry from participating in cost-shared technology development projects in technological areas that DOE wants to promote. In our October 1991 report, we recommended that DOE study the effect that repayment provisions have had on the industry’s participation in the Clean Coal Technology Program. DOE agreed to do this but has not completed its study. Although a repayment requirement might have some influence on the timing of commercialization or participation in technology development projects, industry participants would not have to repay the federal investment unless the technology is commercialized. Therefore, repayment should be more favorable to industry than other sources of funding, such as a bank loan, which would have to be repaid with interest regardless of whether the technology is commercialized. According to a former DOE Deputy Secretary who supported the expansion of repayment programs, businesses expect some form of repayment as a normal cost of doing business. DOE officials generally believe that repayment would create an administrative burden in negotiating, administering, auditing, and enforcing cost-sharing and repayment agreements. 
Both DOE and industry participants would need to establish a recordkeeping system for tracking the sales and use of technologies long after a project ends (up to 20 years in three of the programs that require repayment). According to DOE, the administrative and auditing costs may not make it worthwhile to pursue repayment. We believe one way of making the administrative burden less onerous and minimizing auditing requirements might be to require sample audits of industry participants’ records. Another approach might be to require repayment only in those instances in which the amount of the return justifies the cost of necessary audits and other internal control measures. DOE officials indicated that they are studying the issue of ensuring proper repayment in the Clean Coal Technology Program. Many DOE officials believe that obtaining increased cost-sharing by industry is preferable to requiring repayment of the federal investment. Some indicated that a repayment requirement could be used as a negotiating tool to obtain higher cost-sharing in lieu of repayment. The officials also argue that it may be better in terms of conserving federal resources to obtain an increased cost-share from all participants than to obtain repayment only from those successfully commercializing their technologies. According to DOE, any repayment provisions must consider the effect of repayment on the ability of the entity carrying out the project to compete in the marketplace (proceed with commercialization of the technology and achieve a rate of return commensurate with the industry and the risk). DOE believes that if repayment obligations are too demanding, especially in the early years of technology sales, cash flows and profitability may not be sufficient for the organization responsible for repayment to remain in business, or licensing fees and costs may be too high for the technology to remain competitive with alternative technologies. 
We believe one way of mitigating this concern could be to allow a grace period after a project ends before requiring repayment to begin, as was done in two of the programs discussed above that require repayment. A grace period could be based on a specified period of elapsed time or a specified number of technology units sold before repayment begins. Another issue is the disposition or use of the proceeds resulting from repayment. Many DOE officials indicated that any proceeds from repayment programs should flow back into the applicable program to leverage the federal funding that would be available for ongoing and future projects, rather than be deposited in the Treasury, which is the current practice. Under current policy, proceeds are available either to reduce the budget deficit or to be reallocated on the basis of national priorities. While we do not believe that cost recovery should be a major objective, opportunities may exist for substantial recovery of taxpayers’ dollars if DOE adopted a policy requiring repayment of its investment in successfully commercialized technologies. However, a repayment policy would need to be structured with enough flexibility so as not to interfere with program objectives or adversely affect industry’s participation in projects and technology commercialization. Such a policy should provide criteria and factors to consider in determining whether it should be applied to individual programs or projects. A properly structured policy could provide the flexibility needed to mitigate many of the arguments against having a policy. We recommend that the Secretary of Energy develop and implement a Department-wide policy for requiring repayment of the federal investment in successfully commercialized cost-shared technologies. The policy should provide criteria and flexibility for determining which programs and projects are appropriate for repayment. We provided a draft of this report to DOE for its review and comments.
DOE said that it concurred with our conclusion that cost recovery should not be a major objective of a federal technology development program but pointed out that, in its experience, there are individual projects and programs for which repayment provisions can work. DOE said that demonstration programs that are well advanced in the research and development pipeline are the most likely candidates for repayment. According to DOE, however, the real payback to the nation is in the societal benefits that flow out of federally funded research and development, including jobs, competitiveness in world markets for U.S. companies, and the resulting contributions to the U.S. economy of both domestic and export technology sales. We agree that these potential benefits are very important, but they are independent of the argument for recovering the taxpayers’ share of investment in successfully commercialized technologies. If repayment under appropriate circumstances were an ancillary requirement for successfully commercialized technologies, it would allow the government to potentially recover some of its investment in technologies as well as enjoy the other positive benefits that might accrue. In the case of environmental cleanup technologies, DOE said that the payback is in the form of cost avoidance to the government through the use of innovative technologies that reduce the cost of cleaning up the contaminated weapons complex. We recognized this major benefit in our draft report. However, we continue to believe that if such technologies have potential commercial application, new projects demonstrating the technologies should be considered for repayment of the federal investment. DOE said that it agreed with our recommendation that a repayment policy should provide the flexibility for determining which programs and projects are appropriate for repayment.
DOE believes that the policy should also have flexibility in determining the repayment terms, and when and how they should be applied so as not to adversely affect the development or introduction of technologies into the marketplace. Appendix II contains the complete text of DOE’s comments, along with our responses. Our work was performed from August 1995 through April 1996 in accordance with generally accepted government auditing standards. Appendix III describes the scope and methodology of our review. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will provide copies to the Secretary of Energy, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-3841 if you have any questions or need additional information. Major contributors to this report are listed in appendix IV. This appendix discusses the Department of Energy’s (DOE) cost-shared technology development programs administered under four major organizational areas—Fossil Energy, Energy Efficiency and Renewable Energy, Environmental Management, and Nuclear Energy. The appendix also summarizes the planned funding for technology development projects in each of the four areas and discusses the views of DOE officials on the types of programs and projects that would be the most appropriate or suitable for repayment of the federal investment. DOE’s fossil energy technology development programs support cost-shared projects with industry to foster the development and commercialization of coal, petroleum, and natural gas technologies. As shown in table I.1, DOE’s planned funding for coal and special technology projects accounts for the largest portion, by far, of the nearly $6.6 billion that DOE is planning to invest in active fossil energy projects. 
More than $2.2 billion is committed to projects in the Clean Coal Technology Program, which requires repayment if the technologies are commercialized. Other large DOE investments in coal and special technology projects involve programs that are developing fuel cells, advanced turbine systems, and advanced pulverized coal systems. DOE’s Reservoir Class Field Demonstration Program accounts for about 90 percent of the Department’s planned funding for cost-shared petroleum technology projects. This program demonstrates technologies and processes for increasing production from oil fields to prevent them from being prematurely abandoned. Natural gas technology projects focus on new and improved technologies for extracting, delivering, storing, and using natural gas. According to DOE officials in the fossil energy area, several fossil energy technology development programs may be appropriate candidates for repayment if new or amended projects are undertaken. Two of them—the Reservoir Class Field Demonstration Program and the Advanced Turbine Systems Program—have previously been discussed. According to the officials, the Fuel Cell Program might also be a possible candidate for repayment if DOE decides to help fund the costs and risks of providing fuel cell technology to potential users. DOE is planning to invest about $270 million through completion of active cooperative agreements to develop new, improved fuel cells for power generation. The officials indicated that the fuel cell industry is an infant industry, and the vision of the program is to enable the U.S. fuel cell industry to be strongly competitive in the international market after the year 2000. According to DOE officials, the Advanced Pulverized Coal Program could also be a candidate for repayment as additional federal investment is committed to new projects. 
Under one aspect of this program, separate teams of industry partners are developing a conceptual design for a 400-megawatt power plant based on pulverized coal-firing technology incorporating advanced boiler design and innovative pollution control systems. DOE will then select one of the teams to develop and produce a module to test and confirm the performance of that team’s technology concept, which will serve as a prototype unit. DOE estimates that the entire effort will cost about $85 million, with DOE funding about 65 percent of the costs and industry funding the balance. Regarding the natural gas projects, DOE officials said that the Gas-to-Liquids Conversion Program might be a likely future candidate for a repayment policy. The objectives of this program are to develop technologies for economic conversion of methane and other light hydrocarbon gases to liquids that can be used as clean-burning, alternative liquid transportation fuels or chemical feedstocks. DOE hopes that such technologies could one day make remote or low-quality gas supplies economical to produce and transport high-value liquids for use in petroleum and petrochemical markets. DOE’s Deputy Assistant Secretary for Gas and Petroleum Technologies told us that the potential for repayment of DOE’s cost-share would be a key consideration in future gas and petroleum technology development program activities. However, the official said that funds may not be available for cost-sharing additional rounds of projects under the Reservoir Class Field Demonstration Program. DOE’s energy efficiency and renewable energy cost-shared technology development programs support projects conducted jointly with industry to develop advanced technologies for use in the transportation, utility, industrial, and building sectors of the economy. These programs cover a broad spectrum of activities, ranging from research and development to demonstration and deployment. 
Table I.2 shows the planned funding for active projects in each sector. Transportation technology programs are directed at developing and demonstrating advanced electric and hybrid propulsion systems, advanced propulsion system materials and other new light-weight transportation materials, and advanced light- and heavy-duty heat engines. Projects support a wide range of activities, including the development of advanced batteries for powering electric vehicles, fuel cell propulsion systems, improved energy storage technologies, high-efficiency turbine engine technologies, improved automotive piston engine technologies, clean diesel engine technologies, and alternative-fueled vehicles. Utility technology programs are directed at developing and demonstrating cost-effective and energy-efficient technologies for generating electric power from geothermal, solar thermal, biomass, photovoltaic, wind, hydroelectric, and other renewable resources. Projects are also directed at increasing the efficiency and reliability of energy storage and delivery systems. DOE supports a wide range of industrial-related projects in collaboration with the private sector to help industry develop and deploy advanced energy efficiency, renewable energy, and pollution-prevention technologies for industrial applications. The Department focuses on seven manufacturing industries that account for over 80 percent of the energy used and wastes produced by the manufacturing sector. These industries include aluminum, chemicals, forest products, glass, metalcasting, petroleum refining, and steel. According to an October 1995 DOE report, over 70 of the more than 350 industrial-related projects supported by DOE in the past 20 years have resulted in commercialized technologies. DOE also develops and promotes advanced, cost-effective, energy-efficient, and renewable energy technologies for commercial and residential buildings, appliances, and building equipment.
The building systems program involves research, development, and deployment activities that enable building owners and developers to capture significant energy savings opportunities by combining research on optimal systems designs with programs that deploy these energy-efficiency strategies in the construction of new buildings and retrofit of existing buildings. According to DOE’s Deputy Assistant Secretary for Transportation Technologies, several projects administered by his office could have been candidates for repayment if repayment had been required at the beginning of the projects. He indicated, for example, that repayment may be appropriate in the hybrid vehicle development program, where the federal investment is large and major companies are involved. He also identified some other examples involving projects to develop advanced materials, reduce manufacturing costs, or improve fuel economy. He pointed out that if technologies are relatively close to commercialization, or if the government is planning to undertake a program to reduce the costs and risks of deployment, it would be easier to support repayment with the private sector and make it work. He also indicated that repayment might be appropriate if follow-on development projects are undertaken for some technologies and the federal investment is easily identified. The Deputy Assistant Secretary for Utility Technologies said that the most appropriate candidates for repayment for projects that his office administers are those involving plant-scale operations, such as the Solar 2 plant, geothermal facilities, wind plants, and biomass gasifier plants. He indicated that the next most appropriate candidates would be projects that are developing stand-alone systems components, such as prototype generators, advanced wind turbines, and dish Stirling solar units. He said his third choice would be manufacturing assistance programs.
The Deputy Assistant Secretary for Industrial Technologies said that most of the industrial technologies could be considered likely candidates for repayment. We were told that while many of the industrial projects involve large manufacturing companies, many highly specialized, smaller firms are also typically involved as partners in these projects. However, the Metals Initiative Program is the only program that requires repayment for projects that the Deputy Assistant Secretary’s office administers. As previously mentioned, repayment in that program is legislatively mandated. DOE’s environmental management technology development program provides new or improved methods for use in cleaning up DOE’s sites across the United States that have been contaminated from decades of weapons production activities. According to DOE, these methods either reduce risks to workers, the public, or the environment; reduce cleanup costs; or provide a problem solution that currently does not exist. Under this program, DOE and the private sector undertake cost-shared projects to demonstrate the capability of industry technologies and methods for cleaning up contamination at DOE sites. The projects generally involve development, validation, testing, and evaluation of the technologies and methods. If the technologies are proven successful, both DOE and industry benefit. Table I.3 shows the planned funding for active projects. According to DOE program officials, the Department does not require repayment of its investment in environmental management projects because most of the technologies or processes have already had significant expenditures by the private sector in the development phase before the industry partners entered into cooperative work with the government. DOE also expects significant savings under the environmental management technology development program through the use of the technologies or processes at cleanup sites.
We were told, for example, that the dynamic underground stripping process removes petroleum from groundwater 40 times faster than conventional methods. According to DOE, using this improved process, which cost $13.8 million to develop, saved taxpayers $19 million in fiscal year 1994 at one cleanup site alone. DOE program officials agreed that some of the processes under development in their cost-shared projects may have potential commercial application. The officials also agreed that if the technologies or processes have commercial potential, they could have been candidates for repayment of the federal investment. But the officials indicated that any such repayment would be small in comparison with the potential cost avoidance savings that are expected from using successfully demonstrated technologies or processes to clean up DOE sites. DOE’s Office of Nuclear Energy administers the Advanced Light Water Reactor Program under cost-shared partnerships with industry. This program is intended to eliminate barriers to efficient and cost-effective operation of nuclear powerplants and maintain standards of safety in their design and operation. The program’s primary focus is to make standardized advanced reactors available in time to help meet projected future power generation needs. The planned funding for light water reactors is shown in table I.4. The overall program involves three major components: a design certification program for advanced reactors, a first-of-a-kind engineering program for advanced reactors, and a program to extend the life of aging commercial nuclear powerplants. Four cost-shared projects are being funded under separate contracts to design, test, and obtain Nuclear Regulatory Commission certification of advanced reactor designs.
Two other projects are being funded under a cooperative agreement to develop the detailed engineering design of two advanced reactors in order to promote commercial standardization, produce reliable construction schedules and cost estimates, and facilitate construction preparations. Additional projects are developing technologies for assessing material degradation of systems and components at operating nuclear powerplants. As previously discussed, DOE may require repayment of any additional federal funds provided in excess of $50 million under two of the contracts in the design certification program. According to DOE, the contractors have agreed to this arrangement. DOE requires repayment of its total investment under the cooperative agreement in the first-of-a-kind engineering program. DOE officials said that they were also looking for opportunities for DOE to share in any patents that may be developed based on technologies developed under the commercial operating reactors program. The following are GAO’s comments on the Department of Energy’s letter dated May 24, 1996. 1. The issues raised in DOE’s letter are addressed in the agency comments section of our report. The issues in the enclosure to DOE’s letter are addressed below. 2. Our report points out that the costs of administering, auditing, and enforcing repayment agreements should be considered in determining whether to pursue repayment on specific projects. In fact, we suggested that DOE should only require repayment in those instances where the amount of the potential return justifies the cost of necessary audits and other internal control measures. We also pointed out that there may be ways to reduce the cost of such control measures, but it was beyond the scope of this review to design such measures. Once cost-effective control measures are developed, DOE could then address the related costs on a case-by-case basis in determining whether to apply repayment to specific projects. 3. 
Our hypothetical example of potential repayment if future projects are funded at the level planned for active projects is for illustrative purposes only. We included an assumption that half of the projects may not lend themselves to repayment. Projects in which the potential costs of obtaining repayment would exceed the potential benefits would fall in this category, along with projects that are too early in the technology development process to lend themselves to repayment. We disagree with DOE’s comment that our report does not sufficiently elaborate on the tradeoffs between up-front cost-sharing and downstream repayments if the technologies are commercialized. We pointed out that DOE generally prefers to have increased industry cost-sharing, and that some DOE officials believe that it may be better to obtain increased cost-sharing from all participants than to obtain repayment only from those that successfully commercialize their technologies. We believe that even with increased industry cost-sharing, however, an argument can be made that taxpayers have an interest in the repayment of taxpayers’ dollars when technologies developed with federal funds are successfully commercialized. See comment 2 for our response to DOE’s point that administrative costs should be considered in deciding whether to require repayment. To determine the extent to which the Department of Energy (DOE) requires repayment of its investment under cost-shared technology development and demonstration programs, including the similarities and differences in the mechanisms used for repayment, we interviewed DOE officials responsible for administering such programs; reviewed DOE reports and program documents, congressional budget requests, relevant legislation and congressional reports, and various private sector reports and publications that discuss the programs; and drew from our past reviews and reports on such programs. 
We also talked with several DOE attorneys, an official of DOE’s Office of Inspector General, and a former congressional subcommittee staff member who had been responsible for appropriations for many DOE technology development programs. To identify advantages and disadvantages of having or not having a repayment policy, we interviewed many DOE officials involved in administering cost-shared technology development and demonstration programs, including several Deputy Assistant Secretaries; DOE policy officials and attorneys; and a former Deputy Secretary of DOE and his former Executive Assistant. We also reviewed DOE reports and other documents that discussed the advantages and disadvantages of a repayment policy, including DOE files relating to a 1991 draft repayment policy that was never implemented. To obtain a perspective on DOE’s investment in technology development projects, we asked DOE to provide us with information on the estimated total federal and nonfederal funding planned for active cost-shared technology development projects funded under contracts and cooperative agreements. We focused on the major organizational areas of DOE that fund most of the Department’s cost-shared technology development projects involving contracts and cooperative agreements—Fossil Energy, Energy Efficiency and Renewable Energy, Environmental Management, and Nuclear Energy—and we asked DOE to exclude any projects involving grants and basic research. We used the DOE information in our discussions with DOE officials to obtain their views on the types of programs and projects that might be appropriate for repayment if future projects are undertaken. We also used the information to illustrate what the repayment potential might be if DOE had a repayment policy and future projects are undertaken. Electric Vehicles: Efforts to Complete Advanced Battery Development Will Require More Time and Funding (GAO/RCED-95-234, Aug. 17, 1995). 
Fossil Fuels: Lessons Learned in DOE’s Clean Coal Technology Program (GAO/RCED-94-174, May 26, 1994). Fossil Fuels: Improvements Needed in DOE’s Clean Coal Technology Program (GAO/RCED-92-17, Oct. 30, 1991). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) cost-sharing arrangements it has with the private sector to fund technology development programs, focusing on the: (1) extent to which DOE requires repayment of its investment in cost-shared technology development; and (2) advantages and disadvantages of repayment. 
GAO found that: (1) of the many cost-shared technology development programs DOE participates in, only the Clean Coal Technology Program, Metals Initiative Program, Electric Vehicles Advanced Battery Development Program, and Advanced Light Water Reactor Program require repayment of the federal investment if the technology is ultimately commercialized; (2) repayments are collected through royalties and fees from licensing technologies and revenues from commercial sales; (3) each of the programs except the Metals Initiative Program provides for up to a 20-year repayment period; (4) the Metals Initiative Program provides for up to 150-percent repayment, while the other programs limit repayment to 100 percent; (5) while a repayment policy could recover some or all of the federal government's investment, the additional costs and administrative burdens it imposes could discourage industry from commercializing new technologies; (6) the administrative burdens involved in a repayment policy include negotiating, administering, auditing, and enforcing cost-sharing and repayment agreements; and (7) shifting a greater portion of the burden of cost-sharing from government to industry may be preferable to requiring repayment.
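For illustration only, the repayment terms summarized above (per-unit royalties, a grace period before repayment begins, and a cap of 100 or 150 percent of the federal investment) can be sketched numerically. All dollar figures, royalty rates, sales volumes, and function names below are hypothetical; they are not drawn from any actual DOE agreement.

```python
def repayment_schedule(federal_share, royalty_per_unit, units_sold_by_year,
                       grace_units=0, cap_fraction=1.0):
    """Sketch of a royalty-style repayment with a unit-based grace period.

    Repayment starts only after grace_units units have been sold, and it
    stops once cumulative repayment reaches cap_fraction * federal_share
    (e.g., 1.0 for a 100-percent cap, 1.5 for a 150-percent cap).
    """
    cap = cap_fraction * federal_share
    repaid = 0.0
    cumulative_units = 0
    schedule = []
    for year, units in enumerate(units_sold_by_year, start=1):
        # Units sold this year that fall past the grace threshold.
        billable = max(0, cumulative_units + units - max(grace_units, cumulative_units))
        cumulative_units += units
        payment = min(billable * royalty_per_unit, cap - repaid)
        repaid += payment
        schedule.append((year, payment))
    return schedule, repaid

# Hypothetical project: $10 million federal share, $5,000 royalty per unit,
# repayment deferred until the first 500 units are sold, 100-percent cap.
sched, total = repayment_schedule(10_000_000, 5_000,
                                  units_sold_by_year=[300, 400, 600, 800],
                                  grace_units=500, cap_fraction=1.0)
# No repayment in year 1 (grace period); $8 million repaid over four years.
```

A time-based grace period, the other option mentioned in the report, would simply zero out payments for the first N years instead of the first N units.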
must not increase or decrease total Medicare payments. Physicians could, however, experience increases or decreases in their payments from Medicare, depending on the services and procedures they provide. HCFA published a notice of proposed rulemaking in the June 18, 1997, Federal Register describing its proposed revisions to physician practice expense payments. HCFA estimated that its revisions, had they been in effect in fiscal year 1997, would have reallocated $2 billion of the $18 billion of the practice expense component of the Medicare fee schedule that year. The revisions would generally increase Medicare payments to physician specialties that provide more office-based services while decreasing payments to physician specialties that provide primarily hospital-based services. The revisions could also affect physicians’ non-Medicare income, since many other health insurers use the Medicare fee schedule as the basis for their payments. Some physician groups argued that HCFA based its proposed revisions on invalid data and that the reallocations of Medicare payments would be too severe. Subsequently, the Balanced Budget Act of 1997 delayed implementation of the resource-based practice expense revisions until 1999 and required HCFA to publish a revised proposal by May 1, 1998. The act also required us to evaluate the June 1997 proposed revisions, including their potential impact on beneficiary access to care. HCFA faced significant challenges in revising the practice expense component of the fee schedule—perhaps more challenging than the task of estimating the physician work associated with each procedure. Practice expenses involve multiple items, such as the wages and salaries of receptionists, nurses, and technicians employed by the physician; the cost of office equipment such as examining tables, instruments, and diagnostic equipment; the cost of supplies such as face masks and wound dressings; and the cost of billing services and office space. 
Practice expenses are also expected to vary significantly. For example, a general practice physician in a solo practice may have different expenses than a physician in a group practice. For most physician practices, the total of supply, equipment, and nonphysician labor expenses is probably readily available. However, Medicare pays physicians by procedure, such as a skin biopsy; therefore, HCFA had to develop a way to estimate the portion of practice expenses associated with each procedure—information that is not readily available. One option was to collect such data directly from a representative sample of physician practices. However, the feasibility of completing such an enormous data collection task within reasonable time and cost constraints is doubtful, as evidenced by HCFA’s unsuccessful attempt to survey 5,000 practices. After considering this option and the limitations of survey data already gathered by other organizations, HCFA decided to use expert panels to estimate the relative resources associated with medical procedures and convened 15 specialty-specific clinical practice expense panels (CPEP). Each panel included 12 to 15 members; about half the members of each panel were physicians, and the remaining members were practice administrators and nonphysician clinicians such as nurses. HCFA provided national medical specialty societies an opportunity to nominate the panelists, and panel members represented over 60 specialties and subspecialties. Each panel was asked to estimate the practice expenses associated with selected procedure codes. Some codes, called “redundant codes,” were assigned to two or more CPEPs so that HCFA and its contractor could analyze differences in the estimates developed by the various panels. For example, HCFA included the repair of a disk in the lower back among the procedures reviewed by both the orthopedic and neurosurgery panels.
We believe that HCFA’s use of expert panels is a reasonable method for estimating the direct labor and other direct practice expenses associated with medical services and procedures. We explored alternative primary data-gathering approaches, such as mailing out surveys, using existing survey data, and gathering data on-site, and we concluded that each of those approaches has practical limitations that preclude their use as reasonable alternatives to HCFA’s use of expert panels. Gathering data directly from a limited number of physician practices would, however, be a useful external validity check on HCFA’s proposed practice expense revisions and would also help HCFA identify refinements needed during phase-in of the fee schedule revisions. HCFA staff believed that each of the CPEPs developed reasonable relative rankings of their assigned procedure codes. However, they also believed that the CPEP estimates needed to be adjusted to convert them to a common scale, eliminate certain inappropriate expenses, and align the panels’ estimates with data on aggregate practice expenses. While we agree with the intent of these adjustments, we identified methodological weaknesses with some and a lack of supporting data with others. HCFA staff found that labor estimates varied across CPEPs for the same procedures and therefore used an adjustment process referred to as “linking” to convert the different labor estimates to a common scale. HCFA’s linking process used a statistical model to reconcile significant differences between various panels’ estimates for the same procedure (for example, hernia repair). HCFA used linking factors derived from its model to adjust CPEP’s estimates. HCFA’s linking model works best when the estimates from different CPEPs follow certain patterns; however, we found that, in some cases, the CPEP data deviated considerably from these patterns and that there are technical weaknesses in the model that raise questions about the linking factors HCFA used. 
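The report does not describe HCFA’s linking model in detail. The sketch below shows one simple way redundant codes could be used to put different panels’ estimates on a common scale: take the geometric mean of the ratios between a reference panel’s estimates and each other panel’s estimates over the codes they share. The panel names, procedure codes, and labor minutes are all hypothetical, and HCFA’s actual statistical model was more elaborate than this.

```python
from statistics import geometric_mean

def linking_factors(panel_estimates, reference_panel):
    """Derive a multiplicative linking factor for each panel.

    panel_estimates maps panel -> {code: labor_minutes}. For each panel,
    the factor is the geometric mean of reference/panel ratios over the
    redundant codes the two panels share. (An illustrative simplification,
    not HCFA's actual model.)
    """
    ref = panel_estimates[reference_panel]
    factors = {}
    for panel, estimates in panel_estimates.items():
        shared = [code for code in estimates if code in ref]
        ratios = [ref[code] / estimates[code] for code in shared]
        factors[panel] = geometric_mean(ratios) if ratios else 1.0
    return factors

# Hypothetical redundant-code data: both panels estimated codes A and B.
panels = {
    "orthopedic":   {"A": 60, "B": 90, "C": 30},
    "neurosurgery": {"A": 90, "B": 135, "D": 45},
}
factors = linking_factors(panels, reference_panel="orthopedic")
# The neurosurgery panel consistently estimates 1.5 times the reference,
# so its linking factor is about 1/1.5, or roughly 0.67.
```

The technical weaknesses noted above would surface in such a scheme whenever panels disagree inconsistently across their shared codes, since a single factor then cannot reconcile all of them.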
HCFA applied two sets of edits to the direct expense data in order to eliminate inappropriate or unreasonable expenses: one based on policy considerations, the other to correct for certain estimates HCFA considered to be unreasonable. The most controversial policy edit concerned HCFA’s elimination of nearly all expenses related to physicians’ staff, primarily nurses, for work they do in hospitals. HCFA excluded these physician practice expenses from the panels’ estimates because, under current Medicare policy, those expenses are covered by payments to hospitals rather than to physicians. We believe that HCFA acted appropriately according to Medicare policy by excluding these expenses. However, shifts in medical practices affecting Medicare’s payments may have resulted in physicians absorbing these expenses. In a notice published in the October 1997 Federal Register, HCFA asked for specific data from physicians, hospitals, and others on this issue. After we completed our field work, HCFA received some limited information, which we have not reviewed. HCFA officials said that they will review that information to determine whether a change in their position is warranted. If additional data indicate that this practice occurs frequently, it would be appropriate for HCFA to determine whether Medicare reimbursements to hospitals and physicians warrant adjustment. HCFA also limited some administrative and clinical labor estimates that it believes are too high. Specifically, HCFA believes that (1) the administrative labor time estimates developed by the CPEPs for many diagnostic tests and minor procedures seemed excessive compared with the administrative labor time estimates for a midlevel office visit; and (2) the clinical labor time estimates for many procedures appeared to be excessive compared with the time physicians spend in performing the procedures. 
Therefore, HCFA capped the administrative labor time for several categories of services at the level of a midlevel office visit. Furthermore, with certain exceptions, HCFA capped nonphysician clinical labor at 1-1/2 times the number of minutes it takes a physician to perform a procedure. HCFA has not, however, conducted tests or studies that validate the appropriateness of these caps and thus cannot be assured that they are necessary or reasonable. Various physician groups have suggested that HCFA reclassify certain administrative labor activities as indirect expenses. Such a move could eliminate the need for limiting some of the expert panels’ administrative labor estimates, which some observers believe are less reliable than the other estimates they developed. HCFA officials said that they are considering this possibility. Finally, HCFA adjusted the CPEP data so that it was consistent in the aggregate with national practice expense data developed from the American Medical Association’s (AMA) Socioeconomic Monitoring System (SMS) survey—a process that it called “scaling.” HCFA found that the aggregate CPEP estimates for labor, supplies, and equipment each accounted for a different portion of total direct expenses than the SMS data did. For example, labor accounted for 73 percent of total direct expenses in the SMS survey data but only 60 percent of the total direct expenses in the CPEP data. To make the CPEP percentages mirror the SMS survey percentages, HCFA inflated the CPEPs’ labor expenses for each code by 21 percent and the medical supply expenses by 6 percent and deflated the CPEPs’ medical equipment expenses by 61 percent. For equipment that supports all or nearly all services provided by a practice, such as an examination table, HCFA assumed a utilization rate of 100 percent. Scaling provided HCFA with a cap on the total amount of practice expenses devoted to equipment that was not dependent upon the equipment rate assumptions HCFA used.
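The labor figures above imply the 21-percent inflation factor directly: 0.73/0.60 is approximately 1.217. The sketch below works through the scaling arithmetic; the labor shares come from the report, while the supply and equipment shares are hypothetical fill-ins chosen so that each set of shares sums to 1 and roughly reproduces the other reported adjustments.

```python
def scaling_factors(sms_shares, cpep_shares):
    """Per-category multipliers that make CPEP category shares match the
    SMS survey shares. Each share is that category's fraction of total
    direct expenses, so shares within each dictionary should sum to 1."""
    return {cat: sms_shares[cat] / cpep_shares[cat] for cat in sms_shares}

# Labor shares are from the report (73 percent SMS vs. 60 percent CPEP);
# the supply and equipment shares are hypothetical.
sms = {"labor": 0.73, "supplies": 0.17, "equipment": 0.10}
cpep = {"labor": 0.60, "supplies": 0.16, "equipment": 0.24}
factors = scaling_factors(sms, cpep)
# factors["labor"] is about 1.217 -- the ~21-percent labor inflation HCFA
# applied -- while the equipment factor is well below 1, i.e., a deflation.
```

Because the same factor is applied to every procedure code in a category, scaling preserves the panels’ relative rankings within a category while forcing the aggregate mix to match the SMS survey.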
While HCFA officials acknowledge that their equipment utilization rate assumptions are not based on actual data, they claim that the assumptions are not significant for most procedures since equipment typically represents only a small fraction of a procedure’s direct expenses. The AMA and other physician groups that we contacted have said, however, that HCFA’s estimates greatly overstate the utilization of most equipment, which results in underestimating equipment expenses used in developing new practice expense fees. HCFA agrees that the equipment utilization rates will affect each medical specialty differently, especially those with high equipment expenses, but HCFA staff have not tested the effects of different utilization rates on the various specialties. In a notice in the October 1997 Federal Register, HCFA asked for copies of any studies or other data showing actual utilization rates of equipment, by procedure code. This is consistent with the Balanced Budget Act of 1997 requirement that HCFA use actual data in setting equipment utilization rates. It is not clear whether beneficiary access to care will be adversely affected by Medicare’s new fee schedule payments for physician practice expenses. This will depend upon such factors as the magnitude of the Medicare payment reductions experienced by different medical specialties, other health insurers’ use of the fee schedule, and fees paid by other purchasers of physician services. 20 percent, and 11 percent, respectively, for these specialties once the new practice expense component of the fee schedule is fully implemented in 2002. Additionally, Medicare payments for surgical services were reduced by 10.4 percent beginning in 1998 as a result of provisions contained in the Balanced Budget Act. The combined impact of the proposed and prior changes on physicians’ incomes will affect some medical specialties more than others. 
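To illustrate why the utilization assumption matters: under a simple straight-line cost allocation (our assumption for illustration; the report does not spell out HCFA’s formula), the equipment expense assigned to each procedure varies inversely with the assumed utilization rate, so overstating utilization by a factor of two understates per-procedure equipment expense by the same factor. All figures below are hypothetical.

```python
def equipment_cost_per_procedure(purchase_price, useful_life_years,
                                 annual_capacity, utilization_rate):
    """Allocate an equipment item's annualized cost to each procedure.

    annual_capacity is the number of procedures the equipment could
    support per year at full use; the utilization rate scales how many
    procedures actually share the annualized cost. (A straight-line
    allocation used only for illustration.)
    """
    annual_cost = purchase_price / useful_life_years
    procedures_per_year = annual_capacity * utilization_rate
    return annual_cost / procedures_per_year

# Hypothetical imaging unit: $200,000 price, 10-year life, capacity of
# 2,000 procedures per year.
at_full = equipment_cost_per_procedure(200_000, 10, 2_000, 1.0)
at_half = equipment_cost_per_procedure(200_000, 10, 2_000, 0.5)
# Halving the assumed utilization doubles the expense assigned to each
# procedure -- the crux of the specialty groups' objection that assumed
# rates above actual utilization understate equipment expenses.
```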
Therefore, there is a continuing need to monitor indicators of beneficiary access to care, focusing on services and procedures with the greatest reductions in Medicare payments. Even though HCFA has made considerable progress developing new practice expense fees, much remains to be done before the new fee schedule payments are implemented starting in 1999. For example, HCFA has not collected actual data that would serve as a check on the panels’ data and as a test of its assumptions and adjustments. Furthermore, HCFA has done little in the way of conducting sensitivity analyses to determine which of its adjustments and assumptions have the greatest effects on the proposed fee schedule revisions. There is no need, however, for HCFA to abandon the work of the expert panels and start over using a different methodology; doing so would needlessly increase costs and further delay implementation of the fee schedule revisions. The budget neutrality requirement imposed by the Congress means that some physician groups would benefit from changes in Medicare’s payments for physician practice expenses to the detriment of other groups. As a result, considerable controversy has arisen within the medical community regarding HCFA’s proposed fee schedule revisions, and such controversy can be expected to continue following issuance of HCFA’s next notice of proposed rulemaking, which is due May 1, 1998. Similar controversy occurred when Medicare initially adopted a resource-based payment system for physician work in 1992. Since that time, however, medical community confidence in the physician work component of the fee schedule has increased. Our recommendations are intended to give physicians greater assurance that the revisions HCFA proposes are appropriate and sound. HCFA officials said that they would carefully review and consider each of our recommendations as they develop their rule. Mr. Chairman, this concludes my statement. I will be happy to answer your questions.
| Pursuant to a congressional request, GAO discussed the efforts of the Health Care Financing Administration (HCFA) to revise the practice expense component of Medicare's physician fee schedule.
GAO noted that: (1) HCFA's general approach for collecting information on physicians' practice expenses was reasonable; (2) HCFA convened 15 panels of experts to identify the resources associated with several thousand services and procedures; (3) HCFA made various adjustments to the expert panels' data that were intended to: (a) convert the panels' estimates to a common scale; (b) eliminate expenses reimbursed to hospitals rather than to physicians; (c) reduce potentially excessive estimates; and (d) ensure consistency with aggregate survey data on practice expenses for equipment, supplies, and nonphysician labor; (4) while GAO agrees with the intent of these adjustments, GAO believes that some have methodological weaknesses, and other adjustments and assumptions lack supporting data; (5) HCFA has done little in the way of performing sensitivity analyses that would enable it to determine the impact of the various adjustments, methodologies, and assumptions, either individually or collectively; (6) such sensitivity analyses could help determine whether the effects of the adjustments and assumptions warrant additional, focused data gathering to determine their validity; (7) GAO believes this additional work should not, however, delay phase-in of the fee schedule revisions; (8) since implementation of the physician fee schedule in 1992, Medicare beneficiaries have generally experienced very good access to physician services; (9) the eventual impact of the new practice expense revisions on Medicare payments to physicians is unknown at this time, but they should be considered in the context of other changes in payments to physicians by Medicare and by other payers; (10) recent successes in health care cost control are partially the result of purchasers and health plans aggressively seeking discounts from providers; (11) how Medicare payments to physicians relate to those of other payers will determine whether the changes in Medicare payments to physicians reduce 
Medicare beneficiaries' access to physician services; and (12) this issue warrants continued monitoring, and possible Medicare fee schedule adjustments, as the revisions are phased in. |
DOD’s MCRS-16, which was completed in February 2010, was to provide senior leaders with a detailed understanding of the range of mobility capabilities needed for possible future military operations and help leaders make investment decisions regarding mobility systems. The study was driven by strategy current at the time. The study scope included, among other things, the way changes in mobility systems affect the outcomes of major operations and an assessment of the associated risks. MCRS-16 had several objectives, including to determine capability shortfalls (gaps) and excesses (overlaps) associated with programmed mobility force structure, provide a risk assessment, and identify the capabilities and requirements to support national strategy. In order to assess mobility capabilities, DOD officials responsible for the MCRS-16 used three cases to evaluate a broad spectrum of military operations that could be used to inform decisions regarding future mobility capabilities. The three cases are described below: Case 1: U.S. forces conduct two nearly simultaneous large-scale land campaigns and at the same time respond to three nearly simultaneous homeland defense events. Case 2: U.S. forces conduct a major air/naval campaign concurrent with a large asymmetric campaign and respond to a significant homeland defense event. Case 3: U.S. forces conduct a large land campaign against the backdrop of an ongoing long-term irregular warfare campaign and respond to three nearly simultaneous homeland defense events. Irregular warfare is a violent struggle among state and nonstate actors for legitimacy and influence over the relevant population(s). If DOD had fewer aircraft than required, a potential shortfall would exist and there could be a risk that the mission might not be accomplished. If DOD had more aircraft than required, a potential excess could exist, and there could be risk that resources could be expended unnecessarily on a mobility capability. In January 2012, DOD issued Sustaining U.S.
Global Leadership: Priorities for 21st Century Defense, which describes the projected security environment and the key military missions for which DOD will prepare. DOD may make force and program decisions in accordance with the strategic approach described in this guidance, which could differ from the guidance—the National Military Strategy—that was used by the MCRS-16 to determine requirements. The new strategic guidance is intended to help inform decisions regarding the size and shape of the force, recognizing that fiscal concerns are a national security issue. To support the new strategic guidance and remain within funding constraints, the Air Force has proposed changes concerning the retirement of aircraft in its airlift fleet. Specifically, in February 2012, the Air Force proposed to: retire the oldest 27 C-5 aircraft, thereby reducing the fleet to 275 strategic airlift aircraft—which, according to the Air Force, would consist of 223 C-17s and 52 C-5s; retire the 65 oldest C-130 aircraft—the primary aircraft used in DOD’s intratheater airlift mission—thereby reducing the fleet to 318 C-130s; and retire or cancel procurement of all 38 planned C-27 aircraft, which were intended to meet time-critical Army missions. While the MCRS-16 included some useful information concerning air mobility systems, the report did not clearly meet two of its objectives because it did not provide decision makers with specific information concerning (1) shortfalls and excesses associated with the mobility force structure or (2) risks associated with shortfalls or excesses of its mobility capabilities. Moreover, the MCRS-16 generally did not make recommendations about air mobility capabilities. These weaknesses in the MCRS-16 raise questions about the ability of the study to provide decision makers with information needed to make programmatic decisions. In addition, DOD’s January 2012 strategic guidance could affect its air mobility requirements.
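The proposed retirements described above are internally consistent; a quick arithmetic check using only the counts quoted in the testimony:

```python
# Consistency check of the Air Force's February 2012 retirement proposal,
# using the aircraft counts quoted in the testimony.
retired = {"C-5": 27, "C-130": 65, "C-27": 38}
total_retired = sum(retired.values())        # aircraft retired or cancelled

strategic_fleet = 223 + 52                   # remaining C-17s plus remaining C-5s
remaining_mobility = strategic_fleet + 318   # plus the remaining C-130 fleet

print(total_retired, strategic_fleet, remaining_mobility)  # 130 275 593
```

The 130 retirements and the 593-aircraft post-reduction fleet match the figures GAO cites elsewhere in summarizing the proposal.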
I will first address the issues related to DOD’s MCRS-16, and then turn to a discussion of the new strategic guidance. The MCRS-16 did not meet its objective to identify shortfalls and excesses in most of its assessments of mobility systems. For each of the three cases of potential conflicts or natural disasters DOD used in the MCRS-16, the department identified the required capabilities for air mobility systems. However, the MCRS-16 stopped short of explicitly stating whether a shortfall or excess existed. Moreover, it did not make recommendations regarding the need for any changes to air mobility assets based on any shortfalls or excesses. Using DOD data from the MCRS-16, we were able to discern possible shortfalls or potential capacity that could be considered excess or used as an operational reserve even though the MCRS-16 report was ambiguous regarding whether actual shortfalls or excess capabilities existed (see figure). The C-27 Spartan is a mid-range, multifunctional aircraft. Its primary mission is to provide on-demand transport of time-sensitive, mission-critical supplies and key personnel to forward-deployed Army units, including those in remote and austere locations. Its mission also includes casualty evacuation, airdrop, troop transport, aerial sustainment, and homeland security. As shown in the figure, the MCRS-16 determined that in each case, there was unused strategic airlift capacity, but the study did not specifically state whether the unused capacity represented excesses or identify excesses by aircraft type. When an excess exists, decision makers need to know which aircraft and how many could be retired. Specifically, the MCRS-16 did not identify the required number of C-5s or excesses of C-5 aircraft; but at the time of our report, the Air Force stated its intention to seek the retirement of 22 C-5s, which it increased to 27 and proposed again in February 2012. 
Furthermore, the MCRS-16 did not identify the most combat-effective or the most cost-effective fleet of aircraft even though DOD had previously stated that the MCRS-16 would set the stage to address the cost-effectiveness of its strategic aircraft. Decision makers rely on studies such as the MCRS-16 so that they can make informed choices to address mobility shortfalls and excesses. In our December 2010 report, we recommended that DOD explicitly identify the shortfalls and excesses in the mobility systems that DOD analyzed for the MCRS-16 and provide this additional analysis to DOD and congressional decision makers. In commenting on our draft report, DOD disagreed with our recommendations, stating that the MCRS-16 explicitly identifies shortfalls and excesses in the mobility system. DOD identified strategic airlift as an example of an excess. While the MCRS-16 showed that there was unused capacity associated with strategic airlift, it was not clear from the study whether this unused capacity could serve as an operational reserve. If the study had clearly identified an excess in strategic lift capabilities, decision makers may have chosen to retire aircraft and reallocate resources to other priorities or to keep an operational reserve to guard against unforeseen events. Similarly, if the study had identified a shortfall in strategic lift capabilities, decision makers may have chosen to accept the operational risk or to seek to address the shortfall by increasing capabilities. DOD has not taken action based on our recommendation, but we continue to believe that explicitly identifying the shortfalls and excesses in mobility systems is useful to decision makers in making programmatic decisions.
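The determination GAO says the study stopped short of making is, at bottom, a comparison of programmed capacity against requirements. A minimal sketch of that logic follows; the capacity figures are invented for illustration, since the actual MCRS-16 data are classified.

```python
# Minimal sketch of a shortfall/excess determination. The threshold logic
# follows the report's framing; the numbers below are hypothetical.

def classify(required: float, programmed: float) -> str:
    """Label a mobility capability as a potential shortfall, potential excess, or match."""
    if programmed < required:
        return "potential shortfall"   # risk: the mission might not be accomplished
    if programmed > required:
        return "potential excess"      # risk: resources expended unnecessarily
    return "match"

# Hypothetical strategic-airlift capacities (million ton-miles per day) per case
cases = {
    "Case 1": (32.0, 30.5),
    "Case 2": (25.0, 30.5),
    "Case 3": (30.5, 30.5),
}
for case, (required, programmed) in cases.items():
    print(case, classify(required, programmed))
```

An explicit table of this kind, per aircraft type and case, is essentially what GAO's recommendation asked the study to provide.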
The MCRS-16 also did not clearly achieve its study objective to provide risk assessments. Assessing risk related to shortfalls and excesses is important—the risk associated with shortfalls is that the mission might not be accomplished, while the risk associated with excesses is that resources may be expended unnecessarily on a mobility capability. However, the MCRS-16 did not include risk assessments of airlift systems. For example, the MCRS-16 showed potential excesses in strategic and intratheater aircraft but did not identify the risk associated with these potential excesses. Furthermore, the MCRS-16 identified a reduced intratheater airlift fleet (401 C-130s) in comparison with the previous fleet (a maximum of 674 C-130s), but it did not describe the level of risk associated with this reduced fleet size. Concerning air refueling, the MCRS-16 reported that airborne tanker demand exceeded tanker capacity by 20 percent in MCRS-16 case two but did not identify the risk associated with that potential shortfall. In our December 2010 report, we recommended that DOD provide a risk assessment for potential shortfalls and excesses and provide this additional analysis to department and congressional decision makers. DOD disagreed, stating that MCRS-16 included a risk assessment which links the ability of mobility systems to achieve warfighting objectives. Therefore, DOD has not taken action on this recommendation. While warfighting risk metrics can inform decision makers concerning overall mobility capabilities, decision makers would benefit from knowing the risk associated with particular mobility systems as they make force structure decisions. Quantifying the risk associated with specific mobility systems could help with decisions to allocate resources, enabling decision makers to address the most risk at the least cost. In January 2012, DOD issued new strategic guidance, Sustaining U.S.
Global Leadership: Priorities for 21st Century Defense, which will help guide decisions regarding the size and shape of the force. The strategic guidance is to ensure that the military is agile, flexible, and ready for the full range of contingencies. However, the strategic guidance includes changes from previous strategy—for example, U.S. forces will no longer be sized to conduct large-scale, prolonged stability operations. In the past, DOD has translated strategic guidance into specific planning scenarios, which DOD has used in studies (such as the MCRS-16) to generate requirements that inform force structure decisions. Based on the new strategic guidance, the Air Force has proposed changes to the mobility air fleet, including the retirement or cancellation of procurement of 130 mobility aircraft. According to Air Force officials, the proposals ensure that the Air Force can deliver the capabilities required by the new strategic guidance and remain within funding levels. However, the Air Force’s February 2012 document that outlines its proposed aircraft retirements does not provide details of any analyses. Given the new strategic guidance—which articulates priorities for a 21st century defense—it is unclear the extent to which the requirements developed from the MCRS-16 are still relevant. In weighing the Air Force’s proposal, decision makers will require additional information concerning what types of potential military operations are envisioned by the strategic guidance and to what extent DOD has analyzed its planned force structure using cases that reflect the new strategic guidance. In conclusion, the MCRS-16 study did not fully provide congressional decision makers with a basis for understanding what mobility systems are needed to meet requirements, how many are needed, and what are the risks of having too many or not enough of each aircraft to meet defense strategy.
While DOD disagreed with our recommendations, we continue to believe that the study missed opportunities to identify specific shortfalls and excesses and did not provide associated risk assessments. Further, the MCRS-16 study was completed more than 2 years ago using defense planning guidance in effect at that time. With DOD’s newly issued strategic guidance on defense priorities, the department’s potential scenarios may have changed. Decision makers would benefit from a clear understanding from DOD of the basis for the proposed aircraft retirements and DOD’s ability to execute its new strategic guidance with its planned air mobility force structure. Chairman Akin and Ranking Member McIntyre, and members of the subcommittee, this concludes my prepared statement. I am happy to answer any questions that you may have at this time. For further information regarding this testimony, please contact Cary Russell at (404) 679-1808 or russellc@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Alissa H. Czyz, Assistant Director, James P. Klein, Ronald La Due Lake, Richard B. Powelson, Michael C. Shaughnessy, Jennifer B. Spence, Amie M. Steele, Joseph J. Watkins, and Stephen K. Woods. | Over the past 30 years, the Department of Defense (DOD) has invested more than $140 billion in its airlift and tanker forces.
In 2010, DOD published its Mobility Capabilities and Requirements Study 2016 (MCRS-16), which was intended to provide an understanding of the range of mobility capabilities needed for possible military operations. In January 2012, DOD issued new strategic guidance, Sustaining U.S. Global Leadership: Priorities for 21st Century Defense, affecting force structure decisions. This testimony addresses GAO's previous findings on the MCRS-16 and air mobility issues to consider in light of DOD's new strategic guidance. GAO's December 2010 report on the MCRS-16 (GAO-11-82R) is based on analysis of DOD's executive summary and classified report, and interviews with DOD officials. The Mobility Capabilities and Requirements Study 2016 (MCRS-16) provided some useful information concerning air mobility systems—such as intratheater airlift, strategic airlift, and air refueling—but several weaknesses in the study raised questions about its ability to fully inform decision makers. In particular, the MCRS-16 did not provide decision makers with recommendations concerning shortfalls and excesses in air mobility systems. In evaluating capabilities, the MCRS-16 used three cases that it developed of potential conflicts or natural disasters and identified the required capabilities for air mobility systems. Based on data in the MCRS-16, GAO was able to discern possible shortfalls or potential capacity that could be considered excess or an operational reserve, even though the MCRS-16 was ambiguous regarding whether actual shortfalls or excess capabilities exist. It also did not identify the risk associated with potential shortfalls or excesses. Identifying the risk associated with specific mobility systems could help with decisions to allocate resources. The Department of Defense (DOD) issued new strategic guidance in January 2012, which is intended to help guide decisions regarding the size and shape of the force.
In the past, DOD has translated strategic guidance into specific planning scenarios, which it used in studies (such as the MCRS-16) to generate requirements that inform force structure decisions. Based on the new strategic guidance, the Air Force has proposed reducing its mobility air fleet by 130 aircraft, which would leave 593 mobility aircraft in the airlift fleet. According to Air Force officials, the proposals will enable the Air Force to deliver the airlift capabilities required to implement the new strategic guidance and remain within funding levels. However, the Air Force's document that outlines its proposed aircraft retirements does not provide details of any analyses used to support the reductions. Given the new strategic guidance, it is unclear the extent to which the requirements developed from MCRS-16 are still relevant. In weighing the Air Force's proposal, decision makers would benefit from a clear understanding from DOD of the basis for the proposed aircraft retirements and DOD's ability to execute its new strategic guidance with its planned air mobility force structure. GAO previously recommended that DOD clearly identify shortfalls and excesses in the mobility force structure and the associated risks. DOD did not concur with the recommendations, stating that the MCRS-16 identified shortfalls and excesses and included a risk assessment. GAO disagreed, noting for example, that DOD's MCRS-16 study did not explicitly identify excess aircraft and did not include mobility system risk assessments when potential shortfalls existed. |
The Department of Energy’s multiprogram laboratories have had missions that are national in scope since their inception during World War II. The original laboratories—Lawrence Berkeley (Calif.), Los Alamos (N. Mex.), and Oak Ridge (Tenn.)—were established as government-owned, contractor-operated institutions to apply the productive capability of private industry to the development of atomic weapons. The weapons-development mission continued during the cold war, and six additional laboratories—Argonne (Ill.), Brookhaven (N.Y.), Sandia (N. Mex. and Calif.), Idaho Engineering (Idaho), Lawrence Livermore (Calif.) and Pacific Northwest (Wash.)—were created between 1946 and 1965 to foster civilian applications of nuclear technology. A 10th laboratory, the Solar Energy Research Institute, was designated a national laboratory in 1991 to expand federal energy research and development (R&D) capability in alternative energy sources, and it was renamed the National Renewable Energy Laboratory (Colo.). As a group, the 10 laboratories are known as the national laboratories. As the laboratories’ experience and research capability evolved, mission emphases shifted among them. Sandia, Los Alamos, and Lawrence Livermore acquired primary responsibility for nuclear weapons research and development and the largest share of the laboratories’ funds. Responsibility for research in the environmental and biological, energy, and national security areas was distributed among all 10 laboratories to varying degrees. However, the Congress and DOE are reassessing this mission configuration. Since 1980, the Congress has had an active interest, expressed in a series of laws, in seeing that more of the national laboratories’ outputs be put to commercial uses. 
Changing needs for defense technology resulting from the end of the cold war and concern with maintaining U.S. industry’s competitiveness in global markets have led several members of Congress to open a public debate and propose new legislation that addresses the national laboratories’ missions, structure, and cooperation with industry. Among the alternatives being considered in the public debate are reducing all the laboratories’ budgets, consolidating or closing some of them, and redirecting their weapons development mission toward commercial product-related R&D in such areas as technology development for environmental restoration, energy, and high-performance computing. Underlying these discussions are questions about the type of R&D activities the national laboratories are performing now, the nature and scope of their outputs, and their potential for assisting industry in bringing technology to the marketplace. This report is an effort to inform the debate by providing an empirical base for these questions, as a starting point for addressing the broader issues. It examines whether the balance of laboratories’ effort is in basic and applied research or research related to commercial product development, the distribution of the laboratories’ research outputs, and their potential for commercial application. Findings were based on a cross-section of the laboratories’ R&D activities for the period 1989-92. However, the objectives for most of the programs in the study population were initiated before the national laboratories’ legislative mandate for technology transfer in the National Competitiveness Technology Transfer Act took effect in late 1989. In most fields of R&D, more than 4 years are required for outputs to evolve after objectives have been established. Therefore, the commercial product-related effort we found is to be considered a baseline against which future activities and outputs can be measured.
We began our work by developing a comprehensive description of current research activities in the 10 laboratories. We chose to survey the laboratories directly because we could find no sufficiently comprehensive existing documentation. We collected our data through a survey of the 10 laboratories’ research programs and the facilities and equipment that support them. The survey scope consisted of all major research programs and facilities with costs of at least $10 million, as well as special nominations by the laboratories themselves of other less costly programs and facilities. These two criteria were designed to ensure that all large subprograms and smaller subprograms that were important to the laboratories’ missions would be included in our sample. This allowed us to describe the laboratories’ major research efforts. However, findings based on these criteria should not be considered representative of a laboratory’s entire research effort since the proportion of programs budgeted at less than $10 million can vary from one laboratory to another. DOE’s Budget and Reporting System categories provided a common classification scheme for the laboratories’ 12 research programs, which permitted cross-laboratory comparisons of program characteristics. Research program and subprogram names are shown in table 1. The data we collected on these programs covered fiscal years 1989-92. We conducted pilot tests of the survey methodology and data collection instruments at Brookhaven and the National Renewable Energy Laboratory. We then revised the instrument and administered one version to the remaining eight laboratories. After we processed the survey responses, we asked each laboratory to confirm by letter that our list of research programs and facilities was, in fact, complete. The national laboratories engage in a wide range of defense and nondefense R&D-related activities. 
These range from generating hypotheses and testing fundamental science principles to assisting a potential user in adapting laboratory outputs to a production or service delivery system. To analyze the extent to which the laboratories are engaged in basic and applied research or research related to commercial product development, we divided their activities into five categories: basic research, applied research, development, technology transfer, and technical assistance. Basic research is research undertaken primarily to gain fuller knowledge or understanding of a subject and to contribute to the knowledge base in the field of investigation. Applied research is research directed toward the practical use of knowledge or understanding of a subject to meet a recognized need. Development is research directed toward the production of useful materials, devices, systems, or methods, including the design and development of prototypes or processes. Development has some type of product as the output goal, but may conclude with a prototype rather than a usable good. Additional time, research, and testing are usually required to convert the prototype to a weapon or commercially viable product. Because the national laboratories perform R&D only through the development stage, additional mechanisms and arrangements are required to achieve application of the laboratories’ outputs in the public or private sector. These activities are technology transfer and technical assistance. Technology transfer is the process that fosters the use of devices, processes, “know-how,” or scientific and technical information produced in a national laboratory by universities, private industry, or government agencies. 
It includes making potential users aware of the laboratories’ research outputs, assisting in their selection or use, and collaborating with representatives of private industry and public or nonprofit institutions to ensure that some of the laboratories’ outputs will have commercial or public applications. Technical assistance applies the laboratory’s expertise to practical problems but does not involve the use of a laboratory’s outputs. It is any form of assistance, other than financial, to a state or local government or a business, including publications, workshops, conferences, studies, or telephone consultation. Development, technical assistance, and technology transfer are the three national laboratory research activities related to commercial product development. All five categories, already used by the laboratories but specially grouped for our analysis, constitute a natural framework that, together with DOE’s program classification scheme, allowed us to look at R&D-related activity across all 10 laboratories, using expenditures as a measure of activity. Recognizing that the laboratories do not maintain records of their R&D expenditures in terms of our five categories, we asked managers of the subprograms in our study population to estimate, for each subprogram they managed, the proportion of funds expended in each of the five areas. Our analysis of R&D activity is therefore presented as percentages, not actual dollar values. To provide a context for considering our findings, we present in table 2 the fiscal year 1992 budgets for subprograms in the study population that were included in our analysis and the laboratories’ total budgets in fiscal year 1992. We also examined the laboratories’ outputs. As output measures, we selected products of laboratory R&D that were clearly identifiable to our respondents and for which they were likely to maintain records. 
Since our study objective was to examine the balance among laboratory activities rather than their impact, we focused on outputs of R&D activity that occurred within the laboratories rather than their efforts at job creation or increased sales. Because of great variation in the size, scope, field of investigation and funding level of the subprograms in the study population, both within and among laboratories, we presented our findings as simple tabulations, rather than as standardized units. Use of a single measure for standardizing the outputs, such as dollar of funding per output, would have failed to account for variations among the subprograms on other dimensions. Moreover, because of the institutional complexity this variation represents, we interpreted our output findings very conservatively, treating them as measures of activity rather than indicators of performance. We looked at the outputs in two broad categories: (1) publications and reports and (2) outputs related to commercial product development. The outputs attributed to each category are described in the Principal Findings section. Finally, we looked at three other indicators—the formation of cooperative R&D agreements, R&D effort devoted to critical technologies, and program managers’ assessment of their on-going research—to gauge the laboratories’ potential for commercial product development. We conducted our review in accordance with generally accepted government auditing standards. To examine the balance of the national laboratories’ current R&D-related activities, we analyzed the distribution of laboratory expenditures for R&D within and among laboratories and research programs. For the 10 laboratories overall, R&D-related activity was almost evenly divided between basic and applied research on the one hand, and research related to commercial product development on the other. 
Approximately 8 percent more of the effort was devoted to R&D activities related to commercial product development, as shown in figure 1. More applied research than basic research was conducted: 27.2 percent versus 17.4 percent. Among research activities related to commercial product development, most (30.9 percent) was development, but more activity was devoted to technical assistance (14.4 percent) than technology transfer (7 percent). Thus, R&D-related activity directly targeted on potential commercial applications of the laboratories’ outputs currently constitutes the smallest proportion of the laboratories’ R&D-related effort. Despite its small size, however, this level of effort exceeds the laboratories’ minimum statutory requirement for technology transfer activity. These overall percentages, however, mask major differences among the laboratories with regard to R&D funding distribution. (See table IV.1 in appendix IV.) Four laboratories—Argonne, Lawrence Berkeley, Oak Ridge, and Brookhaven—spent 25 percent or more of their research funds on basic research. These laboratories account for over half (59.3 percent) of the total national laboratory research budget that is spent on basic research. (See table IV.2.) Los Alamos spent 19.4 percent of its R&D funds to support its mission to perform “basic research in selected disciplines that help maintain an outstanding science and technology base.” Only about 10 percent or less of the laboratory research budget was spent on basic research at the other laboratories. The energy research program accounted for the greatest proportion of funds spent on basic research, both within and among research program areas. (See tables IV.3 and IV.4.) As table IV.1 shows, four laboratories—Oak Ridge, Pacific Northwest, Lawrence Livermore, and Los Alamos—spent 29 percent or more of their research funds on applied research. 
Among the 10 laboratories, Lawrence Livermore and Los Alamos accounted for almost half (47.9 percent) of applied research expenditures. (See table IV.2.) Most applied research was supported by programs in the areas of defense, energy research, and work for others. (See table IV.4.) As noted earlier, most of the laboratories’ development work, the most product-oriented of R&D activities, was devoted to defense, rather than nondefense, research. Almost three-quarters (71.5 percent) of all the laboratories’ development research was conducted at Lawrence Livermore, Los Alamos, and Sandia. (See table IV.2.) In turn, the largest share of development research was performed in the defense and nuclear energy programs. (See tables IV.3 and IV.4.) Therefore, while it is true that across the 10 laboratories, a greater proportion of research funding was devoted to activities more closely related to commercial product development than to basic and applied research, most of these funds currently support defense research. To determine whether this research will have opportunities for commercial use, we examined the national laboratories’ outputs. A second measure of the type of effort in which the national laboratories are engaged—as between basic and applied research or research related to commercial product development—is output. The laboratories produce two major types of outputs: (1) publications and reports, and (2) outputs related to commercial product development. Table 3 shows that, across a 4-year period, most of the laboratories’ outputs were publications and reports. This finding was expected because reports and publications are the primary mechanisms for diffusion of R&D findings, and they are prepared at all stages of the R&D process. Reports, conference papers, and published articles, which can be produced more quickly than books and book chapters, substantially outnumber the latter.
As we discussed above, a slightly higher percentage of the laboratories’ expenditures was devoted to R&D activities related to commercial product development than to basic and applied research; nevertheless, few of their outputs were commercial product-related. Prototype devices and materials, algorithms, and software account for the largest number of outputs in this group. These outputs tend to arise from the development stage of the R&D process, which often occurs several years before production of a marketable or usable good. Not all outputs of the development stage will, of course, achieve commercial application. Most of the prototype devices and materials, algorithms, and software, as indicated in tables V.1 and V.2 in appendix V, were produced at the weapons laboratories, and most were funded by DOE’s defense program. Other outputs laboratory managers identified as commercial products or commercial processes also tend to arise from the development stage. Although they will require a substantial additional investment before they are ready to market, these products or processes are more likely to result in actual commercial applications because a potential commercial use has already been identified. Most of these outputs were produced by Los Alamos, Sandia, and Pacific Northwest, and the defense program supports most of the research that has led to these outputs. The point here is that although defense-funded R&D has produced more outputs that could lead to commercial products, whether these outputs will achieve commercial application is still unknown. Patent applications may be submitted for inventions throughout the entire R&D process, but a license is usually acquired only when a decision to market a technology has been made. The number of licenses awarded, therefore, is a stronger measure of output activity related to commercial product development than the number of patents.
A trend in the data indicative of the laboratories’ production of outputs related to commercial product development is the increase in the number of licenses awarded during fiscal years 1989 through 1992. (See table 3.) In fiscal year 1992, Sandia and Pacific Northwest awarded the most licenses, and most licensed outputs were supported by defense program research. (See tables V.1 and V.2.) We expected to find that most commercial product-related outputs were supported by research programs that spent most of their R&D funds for development. However, the R&D expenditures of those programs that supported the most outputs related to commercial product development covered the range of R&D activities. We found that in fiscal year 1992, four research programs—energy research, conservation and renewable energy, defense, and work for others—supported most of the commercial product-related outputs of all types and that, over 4 years, commercial product-related output production had been increasing each year in three of the programs, as shown in figure V.1. We also found that in fiscal year 1992, the largest proportion of expenditures in the defense and conservation and renewable energy programs was for development. As expected, the defense and conservation and renewable energy programs supported more of the outputs specifically designated as commercial products and processes than any of the 10 other research programs. However, in looking more closely at these four programs, we found some interesting differences. Work for others, which supports more commercial product and process-type outputs than eight other programs, devoted a slightly higher proportion of R&D expenditures to applied research than to development. But in energy research, which supports more commercial product- and process-related outputs than nine other programs, the largest proportion of expenditures was for basic research. (See table V.3.) 
We looked at three indicators of the national laboratories’ potential for commercial product development: (1) formation of cooperative research and development agreements; (2) proportion of R&D expenditures in critical technology areas; and (3) research program managers’ judgments about their programs’ outputs. Of these three, the most frequently used indicator of the national laboratories’ potential for commercial product development is the formation of CRADAs. Here we found a major increase in activity. The national laboratories reported that from fiscal year 1989 through 1992, they entered into 196 CRADAs. Among programs in the study population in operation all 4 years, the number of new CRADAs formed increased from 17 in fiscal 1989 to 130 in fiscal 1992. Sandia and Oak Ridge laboratories were most active in entering into CRADAs. (See table VI.1. in appendix VI.) Most were formed for research sponsored by programs in the defense and conservation and renewable energy areas. (See table VI.2.) The greatest increase in CRADA formation occurred at Sandia, where 74 CRADAs were in effect in fiscal year 1992. Fifty-three of the CRADAs effective in fiscal year 1992 were sponsored by the defense program technology transfer initiative at Sandia. This subprogram was initiated in June 1990 to identify opportunities for commercializing technologies produced by DOE-funded defense research activities in such areas as advanced manufacturing and precision engineering, materials and processes, advanced microelectronics and photonics, and computer architecture and applications. Although the national laboratories do not yet have a legislative mandate or mission for research in the critical technologies, their research program managers reported that 74.1 percent of R&D expenditures are devoted to work in critical technology areas. 
This research was distributed over the 22 areas identified by the National Critical Technologies Panel, with the greatest concentration in energy technologies (13.6 percent); pollution minimization, remediation, and waste management (8.8 percent); computer simulation and modeling (6.7 percent); and materials synthesis and processing (6.2 percent). (See table VI.3.) Work in these critical technology areas was distributed broadly among the laboratories and research programs. Five laboratories—Argonne, Lawrence Berkeley, Oak Ridge, Idaho, and Lawrence Livermore—devoted approximately 20-30 percent of their research funds to energy technologies. (See table VI.3.) Pacific Northwest expended the greatest proportion of R&D funds (41.3 percent) on pollution minimization technologies. Idaho and Lawrence Livermore were most active in computer simulation and modeling. Oak Ridge and Los Alamos devoted the greatest percentage of effort to materials synthesis and processing. As a group, the laboratories devoted approximately three-fourths of their R&D expenditures to research in critical technology areas, but Sandia and Los Alamos expended only about half of their resources on critical technologies research. All of the research programs sponsored research in critical technologies to some degree, with the least effort expended by environment, safety, and health. (See table VI.4.) Finally, laboratory research program managers’ judgments about their research programs’ potential for commercial product development were optimistic. Among the subset of all national laboratory programs with a potential for commercial product development, almost 58 percent of the program managers expected that development would occur within 5 years of fiscal 1992. (See figure VI.1.) An additional 27.6 percent reported that their program has the potential for commercial product development within 5-10 years.
As of 1992, the national laboratories spent slightly more than half of their R&D funds on research related to commercial product development. However, most of this R&D was performed at the weapons laboratories and was supported by the defense and nuclear energy programs. Analysis of the outputs produced by the national laboratories indicated that defense-funded research produced more outputs—prototype devices and materials, algorithms, software, and other products and processes that have an identified commercial application—that are precursors to marketable goods, but at this point, whether they will achieve commercial application is not known. Moreover, three indicators of the laboratories’ potential for commercial product development—CRADA formation, critical technology research, and program managers’ expectations for commercial potential—showed that some activity was occurring. CRADA formation was increasing, but these arrangements ensure only that collaboration between the laboratories and industry will occur, not that a commercial product will be generated. Almost three-fourths of the laboratories’ effort was devoted to research in critical technology areas, but achievement of commercial application will not be known for several years. Over half of the managers of research subprograms that have commercial product potential expected innovations to arise within 5 years, but these expectations must be considered “best educated guesses.” While we can conclude, therefore, that the national laboratories were engaged in slightly more research related to commercial product development than basic and applied research, it is too early to determine whether this activity will produce technologies with commercial uses. We requested comments on a draft report and received a response from DOE and the 10 national laboratories.
DOE questioned the definitions and categories we defined to analyze the laboratories’ R&D-related activities and our finding that the laboratories perform slightly more research related to commercial product development than basic and applied research. DOE also thought that this study should have examined additional institutional factors, including the R&D activities of other agencies, and should have used data maintained by DOE headquarters rather than surveyed the laboratories for data. We note that the definitions for R&D-related activities we employed are derived from a Congressional Budget Office study of the federal R&D enterprise, our study of the Technology Transfer Act of 1986, and expert opinion. We also disagree with DOE’s proposed broader scope for this study because it exceeds our study objective and would have required additional data collection and analysis. Furthermore, our exploration of data available at DOE headquarters found that it was not adequate to satisfy our information needs. Eight laboratories agreed with the report’s objective, analyses, and conclusions. However, one of this group, Lawrence Berkeley, thought that the relationship of commercial product development to the broader needs of industry and the nation should have been addressed in the study. Two of the laboratories raised issues about study methodology. Idaho believed that a greater proportion of the budget for its subprograms should have been included in the study sample. Oak Ridge questioned the effect of the study’s sampling methodology on output findings for the laboratory and the definition of the category called outputs related to commercial product development. Lawrence Berkeley said that we had overlooked an important issue. The laboratory thought that the study should have included an examination of the relationship of the national laboratories’ role in commercial product development to the broader needs of industry and the nation.
We agree that this issue is important to address as part of the public debate about the laboratories’ missions and structure. However, we disagree that it should have been examined in this report, which focuses on establishing an empirical baseline of national laboratories’ activities. DOE’s Idaho Operations Office responded for Idaho National Engineering Laboratory. The Idaho Operations Office said that the budget figure reported for Idaho subprograms included in the study sample should have been higher. We did not agree to revise Idaho’s budget figure, because to do so would have violated the study methodology used to sample programs at other laboratories. Oak Ridge took the position that most of its commercial product-related outputs were produced by subprograms that were not selected in the study sample because they were funded at less than $10 million. The laboratory expressed concern that the subprograms we sampled produced only 7 percent of its commercial product-related outputs while representing 73 percent of its overall budget. Oak Ridge based this position on summary output data for the entire laboratory and sampled subprograms that laboratory representatives had tabulated. Again, we could not include the output data for Oak Ridge’s unsampled programs in our analyses without violating the sampling methodology. We also had some questions about the large number of outputs the Oak Ridge analysis ascribed to unsampled programs. Oak Ridge also thought that our definitions for these outputs equated the laboratories’ development work with commercial product development. We disagree. The definitions we used make it clear that the laboratories were not expected to produce commercial products. 
Our conclusion reiterates that the laboratories’ outputs related to commercial product development are “precursors to marketable goods” and that “whether they will achieve commercial application is not known.” We provide a more detailed discussion of all these comments and our response in appendixes VII through XVII. As agreed with your offices, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents earlier. We will then send copies to interested parties, and we will also make copies available to others upon request. If you have any questions or would like additional information, please call me at (202) 512-3092. Other major contributors to this report are listed in appendix XVIII. The descriptions of the national laboratories are adapted from the 5-year institutional plans that the laboratories update and issue annually and from U.S. Department of Energy, Multiprogram Laboratories, 1979 to 1988, A Decade of Change (Washington, D.C.: Apr. 1990). Argonne was established in 1946. The University of Chicago operates the laboratory, which develops and operates national facilities for use by university, industry, and national laboratory groups; performs basic research, technology-directed research, and technology evaluations; and conducts technology transfer through cooperative research and development agreements, sponsored research, staff exchanges, and licensing of intellectual property or through the formation of new firms by the laboratory’s Arch Development Corporation. The laboratory’s basic research effort includes experimental and theoretical research on fundamental problems in the physical, life, and environmental sciences to advance scientific understanding and support energy technology development.
Argonne’s technology-directed research includes conceptualization, design, and testing of advanced fission reactors and other technologies for power applications in both the civilian and defense sectors and investigations of strategies for overcoming materials, chemical, and electrochemical barriers to the development of these technologies. Argonne also supports DOE and, where appropriate, other federal agencies in characterizing and evaluating nationally important projects and technology options in terms of their environmental cost or other implications. Lawrence Berkeley, founded in 1931 as the Radiation Laboratory by Ernest Orlando Lawrence of the University of California at Berkeley, was one of the original national laboratories. It was funded under government contract in 1942. The University of California, which operates the laboratory, renamed it the Lawrence Radiation Laboratory after his death in 1958, and later called it Lawrence Berkeley. The laboratory conducts a wide range of interdisciplinary research with core competencies in biosciences and biotechnology; particle and photon beams; advanced detector systems; characterization and synthesis of materials; chemical dynamics, catalysis, and surface sciences; advanced techniques for energy supply and energy efficiency; and environmental assessment and remediation. It performs research in the energy, physical, and life sciences; develops and operates national experimental facilities; fosters industry’s interactions with the laboratory’s research programs; and offers scientific and engineering education programs. 
The laboratory’s work in the energy sciences includes applied science, such as the energy efficiency of buildings; chemical sciences, such as the structure and reactivity of transient species; earth sciences, including geophysical imaging methods, isotopic geochemistry and physicochemical process investigation; and materials sciences, such as advanced ceramic, metallic, and polymeric materials for electronic, magnetic, catalytic, and structural applications. Accelerator and fusion research, nuclear science, and physics are pursued in the general science area. Lawrence Berkeley’s work in the life sciences includes cellular and molecular biology, chemical biodynamics, and research medicine and radiation biophysics. This work is supported by the laboratory’s scientific and technical resources in the areas of engineering, information and computing sciences, and occupational health. Oak Ridge, one of the original national laboratories, was established in 1943 and is now operated by Martin Marietta Energy Systems. The laboratory’s R&D activities are focused on basic and applied research, technology development, and other technological challenges in areas that include energy production and conservation technologies; experimental and theoretical research in physical, chemical, materials, computational, biomedical, earth, environmental, and social sciences; the design, building, and operation of unique research facilities for the benefit of university, industrial, and other federal agency and national laboratory researchers; and the development of environmental protection and waste management technologies. Oak Ridge also performs technology transfer and offers educational services from the preschool through the postdoctoral level. Pacific Northwest was established in 1965. Battelle Memorial Institute now operates the laboratory, which performs scientific research and rapid technology development and deployment to meet national needs.
Laboratory efforts include molecular science, hazardous waste characterization, global environmental studies, subsurface science, biological systems, technical support for environmental policies and procedures, federal infrastructure modernization, national security technology, energy-efficient methods, advanced analytical methods, materials research, magnetic fusion research, civilian nuclear waste management, technical support for nuclear power plant operation, space exploration technology, fossil fuel technology, renewable energy sources, energy policy analysis, and surveillance and oversight of operations at its Hanford site. Idaho National Engineering Laboratory was established in 1949. Three contractors operated the laboratory during the time period of our study: Westinghouse Idaho Nuclear Co., Rockwell-INEL, and EG&G Idaho. The laboratory’s areas of primary emphasis are nuclear reactor technology R&D, defense production-related support, waste management and environmental restoration analysis, advanced energy production technology development, and research and development on energy and environmental issues, including performance testing of industry-developed electric vehicles, small hydropower and geothermal power production, and fossil energy research. Idaho also offers educational activities and performs technology transfer. Lawrence Livermore was established in 1952. The University of California operates the laboratory, which serves as a national resource in science and engineering, focused on national security, energy, environment, biomedicine, economic competitiveness, and science and mathematics education, with a special responsibility for nuclear weapons. National security has traditionally been a special focus of the laboratory’s research and development effort.
Lawrence Livermore’s major areas of activity have included research, development, and testing for all phases of the nuclear weapons life cycle; strategic defense research; arms control and treaty verification technology; inertial confinement fusion; atomic vapor laser isotope separation; magnetic fusion; other energy research; research in biological, ecological, atmospheric, and geophysical sciences; charged-particle beam and free-electron laser research; advanced laser and optical technology applications; technology transfer; and science education. The laboratory also participates in human genome research as part of a nationally directed initiative. Los Alamos, one of the original national laboratories, was established in 1943 and is operated by the University of California. Ensuring the nation’s deterrence capability through nuclear weapons technology is the laboratory’s primary focus. Los Alamos’ major R&D activities include research, design, development, engineering, and testing of nuclear warheads; maintenance and enhancement of the weapons technology base and warhead stockpile management; research, development, and testing support for advanced nuclear directed-energy concepts; nuclear materials R&D for the nuclear weapons program; nonnuclear strategic defense R&D activities; advanced conventional munitions development and simulation; verification and safeguards R&D; vulnerability, lethality, effects, and countermeasures research; advanced defense technologies; intelligence activities involving hardware analysis and technology security; weapons and energy technology systems studies; and R&D in nonnuclear energy and technology areas. The laboratory’s basic research activities in defense and energy areas include atomic and molecular physics, bioscience, chemistry, computational science and applied mathematics, geoscience, space science, astrophysics, materials science, nuclear and particle physics, plasma physics, fluids, and particle beams. 
Los Alamos also performs technology transfer and offers science and engineering education programs. Sandia was established in 1949 under an agreement with AT&T to operate the laboratory for the government as a public service on a nonprofit basis. AT&T stepped out of this role in 1993. A contract was recently awarded to Martin Marietta Corporation to operate the laboratories. Sandia’s major areas of effort are nuclear weapons, arms control and treaty verification, environmental restoration and waste management, energy supply and conservation, advanced conventional military technologies, and other programs in the national interest. The laboratories’ R&D activities in these areas include research, development, and engineering associated with advancing nuclear explosives to integrated, functional weapons for Department of Defense weapon delivery systems; other defense programs, including development of verification and control technologies to support arms reduction and concepts and systems for the safeguarding and security of nuclear materials; research, development, and engineering for hazardous waste removal, minimization, and remediation; and nonnuclear energy research in energy efficiency, recovery techniques, conversion technologies, alternative energy sources, characterization of environmental change phenomena, environmental restoration technologies, and basic energy sciences. Sandia also conducts technology transfer and offers mathematics and science education opportunities. Brookhaven was established in 1947 by a group of nine universities to facilitate their mutual access to large-scale research facilities, particularly in nuclear science. The laboratory is operated by Associated Universities, a corporation governed by a board of trustees representing the original nine universities as well as other universities, research institutions, and industrial organizations. 
Brookhaven’s primary role is to conceive, design, build, and operate large-scale, complex facilities for scientific research and to conduct basic and applied research in energy-related physical, life, and environmental sciences. When feasible, Brookhaven makes its laboratory facilities available to state and federal agencies, universities, and private industry. The laboratory’s major areas of R&D are high-energy and nuclear physics; basic energy sciences emphasizing research on biological, chemical, and physical phenomena underlying energy-related transfer, conversion, and storage systems; life sciences, nuclear medicine, and medical applications of nuclear techniques; and a broad span of applied programs that draw on the laboratory’s unique capabilities. Brookhaven makes all useful results and knowledge obtained from its research activities available to private industry. Brookhaven also performs technology transfer and offers science and engineering education programs. The former Solar Energy Research Institute was designated a DOE national laboratory in 1991 and renamed the National Renewable Energy Laboratory. The focus of the laboratory’s effort is on developing competitive renewable energy and related technologies and facilitating their commercialization. The laboratory’s R&D activities include basic and applied research, exploratory and advanced development and other activities in renewable energy and related technologies; analytic studies and technology evaluations; and collaborative R&D with universities and industry. The laboratory also manages subcontracted R&D on behalf of DOE and serves as a source of scientific and technical information on renewable energy. Please return the completed questionnaire in the enclosed envelope within 10 working days of receipt. In the event that the enclosed envelope is misplaced, please mail the questionnaire to: Nancy Briggs, Ph.D., Project Manager, U.S.
General Accounting Office, Program Evaluation and Methodology Division, Room 5853, 441 G Street, N.W., Washington, D.C.

Thank you in advance for your cooperation and assistance in addressing an issue of such critical importance to the nation. We will send a report on the analysis of the information to the Congress and you.

The definitions listed below are included to provide a common frame of reference for responding to the survey.

Applied Research: A study directed toward the practical use of knowledge or understanding of a subject to meet a recognized need.

Basic Research: A study undertaken primarily to gain fuller knowledge or understanding of a subject and to contribute to the knowledge base in the field of investigation.

Capital Equipment Budget: and general purpose equipment.

Cooperative Research and Development Agreement (CRADA): of fostering technology transfer from the federal domain to the private sector.

Development: Research directed toward the production of useful materials, devices, systems, or methods, including the design and development of prototypes and processes.

Facility: An entity that comprises the equipment used for research programs: a building or defined structure, some area within a structure, or a defined area not confined to a structure (for example, a testing area).

Laboratory: A group of facilities owned, leased, or otherwise used by the U.S. Department of Energy to perform research and development. A laboratory consists of land (including, but not limited to, remote testing areas), buildings, human resources, research programs, and equipment.

Mission: The primary scientific and technical research programs that a laboratory pursues.

Operating Budget: For a national laboratory, this comprises research and development program costs, including salaries and wages, expendables, and overhead.

Research and Development: Intensive, systematic study directed toward fuller scientific knowledge or understanding of the subject under investigation; practical use of knowledge; or the production of materials, devices, systems, or methods. Research and development includes basic research, applied research, and development.

Research Program: One of several broad areas of research activity within a laboratory’s mission.

Technology: Devices, processes, “know how,” or scientific and technical information produced through the research and development process.

Technology Transfer: The use of devices, processes, “know how,” or scientific and technical information produced in a federal laboratory by universities, private industry, or government agencies, whether national (federal, state, or local) or foreign.

Total Budget: For a national laboratory, this includes operating, capital equipment, and construction costs.

User Facility: A federal laboratory facility available for use either free of charge or on a cost-reimbursed basis by investigators from private industry, academic institutions, or state and local government agencies.

1. Please identify the defense subprogram for which you are reporting. Report only for a subprogram funded by the laboratory’s operating budget during fiscal year 1992 and which is on-going for fiscal year 1993. (Check only one.)
GB-02 Inertial Confinement Fusion (Guidance)
GB-02 Inertial Confinement Fusion (Required)
GC Verification and Control
Total
Other (Please specify)
U.S. Department of Energy Budget and Reporting System code.

2. Please describe the subprogram’s research objectives and the major scientific and technical areas the subprogram addresses.

3. What is the time period of performance for this subprogram? (Write in the subprogram start date and end date below.)
(Month) (Day) (Year) (Month) (Day) (Year)

4. How important, if at all, is each of the research and development-related activities listed below to your subprogram objectives? (Check the relevant space.)
(1) (2) (3) (4) (5) (6)
f. Assistance to private firms or industrial organizations
g. Transfer technology from this laboratory to U.S. government organizations
h. Transfer technology from this laboratory to foreign government organizations
i. Transfer technology from this laboratory to U.S. firms or industrial organizations
j. Transfer technology from this laboratory to foreign firms or industrial organizations
Other (Please specify)

5. In your estimation, what percent, if any, of your total research subprogram budget was expended each year during fiscal years 1989-92 for each of the research and development-related activities listed below? (Write “N/A” if the program was not in operation in a given year. Annual total should equal 100 percent.)
d. Assistance to government agencies
f. Assistance to private firms or industrial organizations
g. Transfer technology from this laboratory to U.S. government organizations
h. Transfer technology from this laboratory to foreign government organizations
i. Transfer technology from this laboratory to U.S. firms or industrial organizations
j. Transfer technology from this laboratory to foreign firms or industrial organizations
Other (Please specify)

6. We identified the technologies listed below using the Report of the National Critical Technologies Panel. During fiscal years 1989-92, which of these technologies did the research of this subprogram support? In your estimation, what percentage of the research subprogram’s budget was spent each year for research about these technologies? (Check all that apply. Write in the estimated percentage for activities checked.)
Pollution minimization, remediation, and waste management
Other (Please specify)

7. How many of the research and development-related outputs listed below did your subprogram produce each year from fiscal year 1989 to 1992? (Write “N/A” if the program was not in operation in a given year.)
Technical and scientific reports/monographs for internal use only
Technical and scientific reports/monographs for release to others outside the laboratory
Papers for presentation at professional conferences
Other (Please specify)

8. How much royalty income was earned each year during fiscal years 1989-92 by technologies supported by this subprogram? (Write dollars in millions. Write “N/A” if the program was not in operation in a given year.)

10. During what time period, if at all, does this research subprogram have the potential for industrial application or commercial product development? (Check only one.)
Immediate future (Less than 5 years)
Short-term future (Next 5-10 years)
Long-term future (Over 10-20 years)
Very long-term future (Over 20 years)

11. Is any university or private industry working in cooperation with your research subprogram? If yes, please provide the following information:
Research topic of cooperative effort: ____________________________________________________________
Please also indicate if this is a U.S. or foreign government, university, or firm. (Check the relevant space.) (1) (2) (3)
If there is more than one organization working in cooperation with your subprogram, please photocopy this page and provide this information for all organizations.

12. that in your view are working on scientific and technical problems similar to those addressed by this research subprogram. Contact person’s name and telephone number: a. b. c.

13.
research institutions who are familiar with the scientific and technical aspects of your research subprogram as well as the facilities and equipment that support it. Please do not list anyone who is affiliated with programs in the U.S. Department of Energy. I-16 (Dollars in millions) 14. budget during fiscal years 1989-92?Show dollars in millions. (Write the dollar amount in each column. Write "N/A" if the subprogram was not in operation in a given year.) What was your research subprogram’s total annual a. b. Other (Please specify) 15. fiscal years 1989-92? (Write "N/A" if the subprogram was not in operation in a given year. percent.) I-17 15. following sources during fiscal years 1989-92 (Annual total should equal 100 percent.) (Continued) What percentage of the research subprogram’s total annual budget was contributed by each of the I-18 16. How many workers are employed annually by your research subprogram in each of the following job categories and what is the total employment?Please provide both the number of full-time personnel and the number of full-time equivalent (FTE) staff years for each year during fiscal years 1989-92. a. involved in research) b. Scientists, engineers, and other researchers (including research administrators directly involved in research) c. Technicians supporting research (through testing, inspection, maintenance, or construction of research equipment, computer programming) d. Clerical maintenance and other support personnel e. Other (Please specify) The research program survey population was enumerated by applying selection criteria to each laboratory’s research programs. After processing the surveys, we sent the laboratories a letter requesting confirmation that our list of research programs and subprograms was complete. In response to our letter, the laboratories confirmed a total of 252 research subprograms. The laboratories returned a total of 247 data collection instruments, for a survey response rate of 98 percent. 
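The enumeration above reduces to a one-line calculation. A minimal sketch in Python, using only the two counts reported in the text (the variable names are ours, not the report's):

```python
# Survey accounting for the national laboratory inventory.
# The two counts below are taken from the text; everything else
# (names, formatting) is illustrative.
confirmed_subprograms = 252   # subprograms confirmed by the laboratories
returned_instruments = 247    # data collection instruments returned

response_rate = returned_instruments / confirmed_subprograms
print(f"Survey response rate: {response_rate:.0%}")  # rounds to the 98 percent reported
```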
The data contained in this report are results of analyses of national laboratory program managers' responses to questions 5, 6, 7, 9, and 10 in part I of the national laboratory inventory. These responses represent program managers' judgments or self-reports about question elements, as follows. We made no attempt to validate these responses through independent sources.

Question 5. Responses are research program managers' best estimates of the proportion of the total program budget expended for each R&D-related activity. Although they had our definitions for key R&D-related activities listed in the data collection instrument, their responses also may reflect their own understanding of terms such as basic research, applied research, or technical assistance.

Question 6. Responses are research program managers' best estimates of the proportion of the total program budget expended for research in critical technology areas. The response categories in the question are the critical technologies identified by the National Critical Technologies Panel. Some overlap may exist among these categories because they were not identified for research measurement purposes. The Panel's critical technology categories were used in this question to determine the congruence between research already being conducted at the national laboratories and the research needs articulated by a congressionally mandated body. A few responses submitted for this question summed to more than 100 percent. These responses were prorated to include them in the calculation of mean percent expenditures for R&D in critical technologies.

Question 7. Responses are research program managers' reports of research program outputs. The responses concerning commercial products and commercial processes are judgments made about research outputs that have reached only the precompetitive stage of the R&D process.

Question 9. Responses are research program managers' reports about CRADAs in effect through the end of fiscal year 1992.
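The proration described above (rescaling critical-technology responses whose percentages summed to more than 100 before averaging them) can be sketched in a few lines. This is a minimal illustration rather than the report's actual procedure or data; the example figures, and the choice to count unchecked technologies as zero when averaging, are our assumptions:

```python
def prorate(response):
    """Rescale a subprogram's percent allocations so they sum to 100.
    Responses already at or under 100 are left unchanged, mirroring the
    report's treatment (only responses summing to more than 100 were
    prorated)."""
    total = sum(response.values())
    if total <= 100:
        return response
    return {tech: pct * 100.0 / total for tech, pct in response.items()}

# Hypothetical responses: percent of subprogram budget by critical technology.
responses = [
    {"materials": 60, "energy": 50},   # sums to 110 -> prorated
    {"materials": 30, "energy": 20},   # sums to 50 -> left unchanged
]
clean = [prorate(r) for r in responses]

# Mean percent expenditure per technology across subprograms
# (unchecked technologies counted as zero; an assumption of this sketch).
techs = {t for r in clean for t in r}
means = {t: sum(r.get(t, 0.0) for r in clean) / len(clean) for t in techs}
print(means)
```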
Question 10. Responses are research program managers' judgments about potential industrial application or commercial product development for outputs of their research program over a 20-year planning horizon. The size of research subprograms in the study population varied; thus, managers were considering outputs of one or more research activities in making their assessments.

Data concerning the distribution of the national laboratories' expenditures among R&D-related activities, by laboratory and by program, are presented below.

Table IV.1: Mean Percent Expenditures for R&D-Related Activities Within Laboratories

ANL = Argonne National Laboratory
LBL = Lawrence Berkeley Laboratory
ORNL = Oak Ridge National Laboratory
PNL = Pacific Northwest Laboratory
INEL = Idaho National Engineering Laboratory
LLNL = Lawrence Livermore National Laboratory
LANL = Los Alamos National Laboratory
SNL = Sandia National Laboratories
BNL = Brookhaven National Laboratory
NREL = National Renewable Energy Laboratory

Subprogram expenditures for activities other than those listed above, such as training graduate students and postdoctoral fellows or safety procedures.
ER = Energy Research
CE = Conservation and Renewable Energy
ES&H = Environment, Safety and Health
NE = Nuclear Energy
DP = Defense Programs
NPR = New Production Reactors
ERWM = Environmental Restoration and Waste Management
FE = Fossil Energy
CRWM = Civilian Radioactive Waste Management
PPA = Policy Planning and Analysis
INT = Intelligence
WFO = Work for Others

Subprogram expenditures for activities other than those listed above, such as training graduate students and postdoctoral fellows or safety procedures.

Data concerning outputs of the national laboratories' R&D-related activities, by laboratory and by program, are presented below.

ANL = Argonne National Laboratory
LBL = Lawrence Berkeley Laboratory
ORNL = Oak Ridge National Laboratory
PNL = Pacific Northwest Laboratory
INEL = Idaho National Engineering Laboratory
LLNL = Lawrence Livermore National Laboratory
LANL = Los Alamos National Laboratory
SNL = Sandia National Laboratories
BNL = Brookhaven National Laboratory
NREL = National Renewable Energy Laboratory

Responses were not collected from Brookhaven National Laboratory and National Renewable Energy Laboratory. Research subprogram outputs other than those listed above, such as technical abstracts, workshops for laboratory users, and an electronic bulletin board service.
ER = Energy Research
CE = Conservation and Renewable Energy
ES&H = Environment, Safety and Health
NE = Nuclear Energy
DP = Defense Programs
NPR = New Production Reactors
ERWM = Environmental Restoration and Waste Management
FE = Fossil Energy
CRWM = Civilian Radioactive Waste Management
PPA = Policy Planning and Analysis
INT = Intelligence
WFO = Work for Others

Pacific Northwest Laboratory provided information about all outputs for the laboratory as a whole that are not included in the data presented here. Responses were not collected from Brookhaven National Laboratory and National Renewable Energy Laboratory. Research subprogram outputs other than those listed above, such as technical abstracts, workshops for laboratory users, and an electronic bulletin board service.

ER = Energy Research
CE = Conservation and Renewable Energy
DP = Defense Programs
WFO = Work for Others

Fiscal year 1992. The number of outputs is shown only for research subprograms in the study population that were in operation all 4 years (fiscal years 1989-92).

Data concerning the formation of cooperative research and development agreements, expenditures for R&D in critical technologies, and the views of national laboratory program managers on their programs' potential for commercial product development are presented below.

Responses on CRADA formation were not collected from National Renewable Energy Laboratory. Responses were not collected from National Renewable Energy Laboratory.
ANL = Argonne National Laboratory
LBL = Lawrence Berkeley Laboratory
ORNL = Oak Ridge National Laboratory
PNL = Pacific Northwest Laboratory
INEL = Idaho National Engineering Laboratory
LLNL = Lawrence Livermore National Laboratory
LANL = Los Alamos National Laboratory
SNL = Sandia National Laboratories

Responses were not collected from Brookhaven National Laboratory and National Renewable Energy Laboratory. Subprogram expenditures for activities other than those listed above, such as robotics, special nuclear materials, environmental R&D, and detector technology.

ER = Energy Research
CE = Conservation and Renewable Energy
ES&H = Environment, Safety and Health
NE = Nuclear Energy
DP = Defense Programs
NPR = New Production Reactors
ERWM = Environmental Restoration and Waste Management
FE = Fossil Energy
CRWM = Civilian Radioactive Waste Management
PPA = Policy Planning and Analysis
INT = Intelligence
WFO = Work for Others

Total exceeds 100 owing to rounding.

The following are GAO's comments on the September 14, 1994, letter from DOE.

1. The definitions for basic research, applied research, and development that our study employs are derived from a Congressional Budget Office study of the federal R&D enterprise. The definition of technology transfer is the one used in our study of the Technology Transfer Act of 1986, and the definition of technical assistance is based on expert opinion.
Our analysis examined the laboratories' effort in each type of activity both separately and grouped in two major categories, in order to address the study objective: to provide an empirical base for examining the extent to which the laboratories are engaged in basic and applied research or research related to commercial product development. Figure 1 and tables IV.1-IV.4 allow the reader to view our findings in both the two major categories and as separate R&D-related activities. The finding for each major category presented in figure 1 is the sum of the findings for the corresponding separate R&D-related activities presented in the last column of table IV.1. DOE disagrees with the category we established for "research related to commercial product development"—that is, that development, technical assistance, and technology transfer are all laboratory activities related to commercial product development—but does not question our definitions or findings for each separate activity. We agree that DOE may decline to accept our definition for research related to commercial product development, but we do not agree that our finding for the sum of the three separate activities is erroneous. This finding is based on laboratory research managers' estimates of the distribution of their subprograms' expenditures that were collected, verified, and analyzed according to generally accepted government auditing standards. We consider these estimates, made by research managers who are closely involved with the R&D, more accurate than estimates that may be obtained by other methods.

2. The analyses we produced were intended to establish baseline data for addressing empirical questions underlying the public debate, rather than to serve as a comprehensive analysis of the laboratories' roles.
To address the study objective, we focused on the 10 laboratories as a set of institutions, on comparing the distribution of expenditures for five types of R&D-related activities both within and among the 10 laboratories, on the nature and scope of their outputs, and on their potential for working with industry to bring commercial products to market. Given this approach, with the exception of expenditures for critical technologies and collaboration with industry, which we do examine, the other factors DOE suggests for analysis are beyond the scope of this study. However, we anticipate that our study might stimulate another party to undertake the type of institutional, comparative analysis that DOE suggests. 3. We agree with DOE that the laboratories collaborate in R&D with industry partners who then perform the additional testing and research activities required for commercial application. We also agree that “it typically should take years from the conclusion of a CRADA and the transfer of a technology to a partner, to the commercialization of a product.” The explicit definitions of terms and the discussion of CRADAs in the report make this clear. (See pp. 6, 12, and 15.) However, we disagree that the report attributes commercial product development work to the national laboratories. 4. We state in the section on Methodology that we began our work with a survey of the laboratories’ R&D activities because we could find no sufficiently comprehensive (emphasis added) existing documentation. To confirm that we had not overlooked an important information source when we designed and implemented our data collection strategy, we made inquiries about DOE’s institutional plan and research and development databases. We found that DOE headquarters maintains only the institutional plan database and that it includes only one of the data items, research program budget, that we used in our report. 
This budget information was available for fiscal years 1989-91 when we implemented our survey but would not have been useful for our analyses because it is not compiled at the same level of detail as our data. We also found that the research and development database is not one of DOE’s databases. It is being developed by the Critical Technologies Institute for the Office of Science and Technology Policy (OSTP) in the Executive Office of the President. When it is complete, it will have five data items analogous to our data. However, this database was not available when we developed our national laboratory inventory and is not now available to users other than OSTP. Forty-one of the items in our report are not included in either the institutional plan or research and development databases. The Laboratory Management Division in DOE’s Office of Energy Research maintains the institutional plan database. It has research program budget data for fiscal years 1979 to the present at the program level for 9 of the 10 national laboratories, and it has subprogram budget data for selected programs, such as energy research, defense programs, civilian radioactive waste management, and work for others. Because they are incomplete at the subprogram level, these data would not have been useful for our R&D-related activities and critical technologies analyses, which required budget data for all subprograms in our sample. Further, none of the budget data for the National Renewable Energy Laboratory are included in the institutional plan database. These data must be obtained from NREL’s hardcopy institutional plan, which is available from the Office of Energy Efficiency and Renewable Energy at headquarters. 
The Critical Technologies Institute’s research and development database will have information on laboratory expenditures for basic research, applied research, development, and technology transfer for research subprogram categories analogous, but not identical, to those we used, and on CRADAs—for the national laboratories as well as for the laboratories of several other federal agencies—when it is available to organizations other than OSTP. The Critical Technologies Institute representative to whom we spoke could not specify when the database will be available. However, the research and development database will not have information comparable to the 16 research subprogram outputs we collected from the laboratories nor on the proportion of subprogram expenditures for the 22 critical technologies and the proportion of expenditures for technical assistance. We are also aware that abstracts of CRADA agreements can be obtained through DOE headquarters from the Office of Scientific and Technical Information, which is based in Oak Ridge, Tennessee. However, we also found these data to be incomplete. In August 1992, we requested these data through DOE’s Office of Technology Utilization at headquarters and received 147 abstracts for the nine laboratories from which we collected CRADA information—49 fewer than the total the laboratories reported to us. Since the fiscal year was not then complete, we assumed that all CRADA information had not yet been reported to DOE or entered into the database. Our experience developing the survey frame, moreover, suggested that the laboratories’ institutional plan data needed modification to address our study requirements and that the information available from DOE was not consistent with information available from the laboratories. We used the list of research programs included in the institutional plans as a preliminary frame for part I of the survey. 
Recognizing that the laboratories are dynamic institutions, we asked each laboratory to confirm the list before survey implementation. Most of the laboratories made both deletions and additions to the list to meet our survey selection criteria. (See p. 4.) We used the lists of facilities reported in the DOE report, Capsule Review of DOE Research and Development Laboratories and Field Facilities, as a preliminary frame for part II of the survey. The laboratories made deletions and additions to these lists as well and in two cases almost completely replaced them. Changes of this magnitude confirmed the strategy of collecting data directly from the laboratories to address our study’s information requirements. 5. During the agency review of our draft report, two laboratories provided us with additional CRADA information, bringing the total number of CRADAs in effect among all programs in operation in any year from fiscal year 1989 to 1992 to 196. (See table VI.1.) This total is the number of CRADAs in effect in fiscal year 1992, rather than “now,” to which DOE refers and which we assume is fiscal year 1994. Moreover, we found a substantial increase in CRADA formation in fiscal year 1992, sponsored by DOE’s defense program technology transfer initiative at Sandia. (See tables VI.1 and VI.2.) It is possible that the increase we found persisted and included more laboratories, bringing the total to 1,000 in fiscal year 1994. However, such a change would not render our finding for fiscal year 1992 inaccurate. 6. Brookhaven brought it to our attention that the number of CRADAs formed is limited by the amount of money allocated to a laboratory and that this amount varies widely from laboratory to laboratory. We agree with Brookhaven that characterizing CRADA formation as the “strongest” indicator of a laboratory’s commercial product potential is misleading for this reason, and we have modified our discussion of CRADA findings. 
Scientific user facilities and personnel exchanges will be examined in a separate study. Licensing is described in the section on Principal Findings of this report. (See pp. 13-14.) CRADAs are cost-shared cooperative agreements targeted to a commercial innovation. 7. We treat laboratory outputs as measures of activity, not as measures of impact or productivity. (See pp. 7-8.) 8. We found that the 10 laboratories produced many more publications and reports (21,593) than they did outputs related to commercial product development (2,510) in fiscal year 1992. This is a statement of fact, tabulated from reports to us by the laboratories’ research managers. It describes the laboratories’ activity. It is not intended as a criticism of the research enterprise. 9. The purpose of this report was to examine the balance of R&D-related activity across the laboratories, rather than to examine the magnitude of the R&D investment. We used the proportion of funds expended for each type of R&D-related activity as a measure of activity, not as a measure of investment. (See pp. 6-7.) An examination of human resources and a comparison of DOE’s national laboratories to those of other agencies was beyond the scope of this study, given its focus on laboratory R&D-related activity. A representative of Argonne, Internal Audit, called us on July 5, 1994, to report that the laboratory had no substantive comments on the report draft. The following are GAO’s comments on the June 27, 1994, letter from Brookhaven National Laboratory. 1. We have added a statement to the report clarifying this difference. 2. We have evaluated the data Brookhaven submitted and, after making the appropriate changes, added it to the database. These data have been incorporated into the tables included in the report letter and appendixes. 3. We agree with Brookhaven’s evaluation of this response and have made the change they requested to the database and report tables. 4. 
We agree with Brookhaven and have modified the discussion of CRADA findings. 5. The information on CRADA formation Brookhaven submitted in the pilot version of the data collection instrument has been added to the database and the tables in appendix VI. We also have added the sentence Brookhaven suggests to appendix I. The following are GAO’s comments on the June 29, 1994, memorandum from DOE’s Idaho Operations Office. DOE’s Idaho Operations Office representative, who responded for Idaho, observed that the value in the “R&D Budget” column of table 2 for Idaho should be $275 million, rather than $98.7 million and that the Idaho Operations Office made this determination by applying DOE headquarters’ definitions for research programs to Idaho’s research programs. The list of Idaho research programs to which the Idaho Operations Office applied DOE headquarters’ definitions is unspecified. We disagree with this determination, because it violated the study methodology. 1. The R&D budgets of the 10 national laboratories in table 2 were not compared. 2. We coordinated data collection from the laboratories with DOE’s operations office representatives, but none of them participated in any of the technical activities involving survey implementation. Therefore, the Idaho Operations Office representative may not have been aware that GAO program selection criteria should have been employed to assess the “R&D Budget” column value for Idaho in table 2 to be consistent with the methodology employed for the other nine laboratories. The use of DOE headquarters’ definitions for research programs to make this determination would result in a list of subprograms that differs substantially from the one jointly developed by GAO and Idaho. Subprograms included in the survey population were identified by laboratory representatives who applied the selection criteria we specified (see p. 4) to a preliminary subprogram list we compiled from the institutional plans and sent to the laboratories. 
This approach was followed by Idaho's representatives, who identified 10 subprograms. We reduced the number of Idaho subprograms to nine during the editing and coding process. The $98.7-million value in the "R&D Budget" column is the total of nine research subprogram budgets reported by Idaho program managers on part I of the national laboratory inventory data collection instrument.

The following are GAO's comments on the June 17, 1994, letter from Lawrence Berkeley Laboratory.

Although Lawrence Berkeley agreed with the study's analytic framework and with the need for studies of this type to inform congressional policymakers, the laboratory raised an issue about the relationship of the national laboratories' role in commercial product development to the broader needs of industry or the nation, which was not addressed in the report. This omission warrants clarification. The relationship of the national laboratories' role in commercial product development to the broader needs of industry is an issue being discussed in the public debate about the laboratories' missions and structure, but one that falls outside of the study scope. The purpose of this study was to examine the extent to which the national laboratories are engaged in basic and applied research or research related to commercial product development. Scientific and technical infrastructure, which Lawrence Berkeley gives as an example of industry need, while important to the considerations of laboratory mission and structure that serve as the study's policy context, was not addressed. It was our expectation that the findings of this study would serve as an empirical base for designing a study to address this and other institutional issues.

1. … marketing activity that accomplish commercial application. (See pp. 6, 12, and 15.)

2.
This study was not designed as a broad assessment of the national laboratories’ roles, but to examine the balance of the laboratories’ R&D-related activities in two major areas: basic and applied research and research related to commercial product development. We looked at these activities with three types of measures, and our conclusions interpret our findings for each type. The conclusion focuses on research related to commercial product development because we found slightly more activity in this area. We amplified this conclusion with an interpretation of findings for the other two types of measures. A discussion of the noncommercial product output of nuclear weapons research was not relevant. 3. We have added a discussion of these limitations to the Methodology section. (See pp. 7-8.) The following are GAO’s comments on the July 6, 1994, letter from Lawrence Livermore National Laboratory. We have added the revised text describing Lawrence Livermore Laboratory to appendix I. The following are GAO’s comments on the June 23, 1994, letter from Los Alamos National Laboratory. We have added a footnote to the Background section discussing the legal division of Department of Defense and civilian responsibility for nuclear weapons research and development. We also expanded the phrase on page 2 from “weapons development” to “nuclear weapons research and development.” The following are GAO’s comments on the June 22, 1994, letter from National Renewable Energy Laboratory. 1. One future report will provide a descriptive statistical analysis of the technical and operating characteristics of the national laboratories’ major research facilities. Other topics are yet to be determined. 2. We have made this correction to the text. 3. Graphs and tables are presented in the section on Principal Findings. 4. The aggregation in figure 1 is intentional. The graph is designed to illustrate the balance between the two major areas of R&D-related activity we examined. 
The last column of table IV.1, labeled “All Labs,” presents percentages for development, technical assistance, and technology transfer for the 10 laboratories. 5. See table VI.3 in appendix VI. Table VI.4 presents these percentages by program area. 6. We have made this correction to the text. We did not receive Oak Ridge’s written comments from DOE. We did discuss Oak Ridge’s views with laboratory representatives by telephone on June 22 and July 13 and 19, 1994, and we spoke with a representative of DOE’s Oak Ridge Operations Office on July 7, 1994. We also received new output data for Oak Ridge’s subprograms by facsimile from representatives of both organizations. A summary of their comments and our response follows. Oak Ridge raised two general issues. One was the effect of the study sampling methodology on findings for the laboratory’s outputs related to commercial product development. Oak Ridge took the position that most of the laboratory’s outputs related to commercial product development were produced by subprograms not selected in the study sample and, consequently, expressed the concern that GAO’s findings for outputs related to commercial product development based on the sampled subprograms may not be representative because of this distribution of outputs among all laboratory subprograms. Most of these outputs, they explained, are produced by programs that fall below the $10-million threshold for inclusion in the survey. In fact, according to tabulations they had performed, the sampled programs, while representing 73 percent of the overall budget, produce only 7 percent of the outputs in question. Secondly, Oak Ridge thought that the report’s definitions and analyses equate development work with commercial product development and that the conclusion based on this definition is not supported by the data. We address these issues separately. 
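Oak Ridge's coverage argument, that programs above the survey's $10-million selection threshold can represent most of the budget while producing few of the outputs in question, amounts to a simple tabulation. The sketch below uses invented program figures; only the $10-million threshold comes from the report, and the computed shares are illustrative rather than Oak Ridge's actual 73 and 7 percent:

```python
# Hypothetical program records: (budget in $ millions, commercial-product-
# related outputs). The $10-million selection threshold is from the report;
# all budget and output numbers below are invented for illustration.
programs = [
    (120.0, 3), (85.0, 2), (40.0, 1), (15.0, 1),   # sampled (budget >= 10)
    (8.0, 12), (6.0, 9), (4.0, 7), (2.0, 5),       # unsampled small programs
]

THRESHOLD = 10.0  # $ millions
sampled = [p for p in programs if p[0] >= THRESHOLD]

# Share of total budget, and of total outputs, covered by the sample.
budget_share = sum(b for b, _ in sampled) / sum(b for b, _ in programs)
output_share = sum(o for _, o in sampled) / sum(o for _, o in programs)

print(f"budget covered by sample: {budget_share:.0%}")
print(f"outputs covered by sample: {output_share:.0%}")
```

With figures like these, a sample can dominate the budget yet miss most of the outputs, which is the pattern Oak Ridge described.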
First, Oak Ridge actually had identified two sources of potential underreporting: (1) data for outputs of sampled subprograms that were not available at the time of the survey and (2) data for outputs of unsampled programs. We agreed that additional data for sampled subprograms should be added to findings for Oak Ridge. We requested and received from Oak Ridge the new data for the sampled subprograms, and we added them to our database and report tables. The second source of potential underreporting remained a matter of some concern to us, particularly in light of the large number of outputs Oak Ridge ascribed to the unsampled programs. Second, we disagreed that the report definitions and analyses equate development work with commercial product development. Our definitions, analyses, and conclusions make it clear that the laboratories were not expected to produce commercial products. We defined development as having “some type of product as the output goal (emphasis added),” but concluding “with a prototype rather than a usable good.” Further, we point out that “Additional time, research, and testing are required to convert the prototype to a weapon or commercially viable product.” The definitions of outputs related to commercial product development, including those for precompetitive commercial products and processes, state that these outputs tend (emphasis added) to arise from development work, but that “they will require a substantial additional investment before they are ready to market.” The conclusion, moreover, reiterates that these outputs are “precursors to marketable goods,” and that, for this reason, “it is too early to determine whether this activity will produce technologies with commercial uses.” We also examined the assumption that R&D is a linear process, with all commercial product-related outputs arising from development, and found that our data did not support it.
We included this segment of the analysis to emphasize the uncertainty associated with current understanding of the operation of the R&D process and the origin of technologies with commercial potential. The conclusion we reached concerning the uncertain prospects of the laboratories’ commercial product-related outputs is an interpretation of this finding as well as our definitions for outputs related to commercial product development. A representative of Pacific Northwest called us on June 22, 1994, to comment on the draft report by telephone. A summary of the laboratory’s comments is included in our response, which follows. Pacific Northwest offered one general comment and several comments and questions about specific items in the text. We address the general comment first and then the specific comments. Pacific Northwest suggested that a section be added to the report describing the major commercial product-related initiatives the national laboratories have undertaken since the end of fiscal year 1992. Partnership for a New Generation of Vehicles (PNGV) and American Textile Partnership (AMTEX), two consortia for R&D targeted on commercial applications in which several laboratories are participating, were mentioned as examples. We are aware that the laboratories have been active in technology transfer activities of many types since the end of fiscal year 1992. This activity will be captured in any follow-up study that is performed in the next few years to determine if progress has been made since fiscal years 1989-92, the time period measured in this report. 1. Pacific Northwest thought that the word “primarily” in the sentence beginning on draft line 10, page 4 (now line 12, p. 3), should be deleted because it implies that the laboratories have only one primary mission. We have modified this sentence. 2. Pacific Northwest said that the output data in table 5 (now table V.1) not reported for the laboratory are available and will be submitted to us.
We received and reviewed the data, and we added it to table V.1. 3. Pacific Northwest said that information on CRADA formation for the laboratory as a whole was submitted to us during survey implementation. We confirmed that this information had been received and added it to table VI.1. The following are GAO’s comments on the July 1, 1994, letter from Sandia National Laboratories. Sandia agreed with the report’s objective, methodology, and conclusion, but made two general comments. First, Sandia suggested that the report include a description of the national laboratories’ expanded efforts in technology transfer during fiscal years 1993-94. Second, Sandia suggested that we review the substantial variation in the percentage of laboratory funds not expended for critical technologies reported for Lawrence Livermore, Los Alamos, and Sandia in table VI.3. Sandia expected this percentage to be very similar for all three laboratories. We are aware that the national laboratories have been active in technology transfer activities of many types during fiscal years 1993-94, including participation in large-scale R&D consortia such as PNGV and AMTEX. These activities will be captured in any follow-up study that is performed during the next few years to determine if progress has been made since fiscal years 1989-92, the time period measured in this report. We reviewed all responses by Lawrence Livermore, Los Alamos, and Sandia concerning percent of expenditures for critical technologies and funds not expended for R&D in these areas. We found that Lawrence Livermore program managers allocated a percentage of funds expended to the “other” category to a much greater extent than did program managers at Sandia or Los Alamos. We also found considerable variation among all laboratories in the proportion of expenditures allocated to this category. 
R&D activities specified in the “other” category included items such as robotics, special nuclear materials, environmental R&D, and detector technology. Allocations to this category, and to the energy technologies category, accounted for most of the difference in proportion of funds not expended for critical technologies by Lawrence Livermore, Los Alamos, and Sandia. Miguel A. Lujan, Project Adviser

Pursuant to a congressional request, GAO reviewed the extent that the Department of Energy’s national laboratories are engaged in basic and applied research or in research related to commercial product development.
GAO found that: (1) the laboratories devoted more than half of their research and development (R&D) funds to commercial product development during fiscal year (FY) 1992; (2) most of the laboratories’ development work was devoted to defense research; (3) less than half of the laboratories’ resources were spent on basic and applied research in FY 1992; (4) the laboratories produced 21,593 publications and reports and 2,510 products related to commercial product development in FY 1992; (5) publications and reports are the primary mechanism for disseminating the results of R&D activities; (6) although the potential exists for the laboratories to develop commercial product-related outputs, it is unknown whether the laboratories will achieve commercial applications for their outputs because they are still several years away from market entry; (7) cooperative R&D agreements between the laboratories and industry are increasing, with 17 agreements in FY 1989 and 196 agreements in FY 1992; (8) 74.1 percent of the laboratories’ R&D expenditures in FY 1992 were for technologies identified as vital to national needs; (9) Congress has formed a panel to identify critical technologies essential for the nation’s long-term security; and (10) over half of the commercial product development program managers expect clear evidence of the potential for commercial product development to emerge by FY 1997.
To carry out its mission, DOE relies on contractors for the management, operation, maintenance, and support of its facilities. Since the end of the Cold War, DOE’s employees’ skill requirements have shifted because the mission at its defense nuclear facilities has expanded from focusing primarily on weapons production to also focusing on cleanup and environmental restoration. In addition, DOE facilities have had to reduce their workforce in response to overall cuts in the federal budget. At the end of fiscal year 1998, total employment by contractors at both DOE defense and nondefense facilities was estimated at about 103,000, down from a high of nearly 149,000 since the beginning of fiscal year 1993. DOE plans to reduce its contractor workforce by another 4,000 employees by the end of fiscal year 2000, leaving it with 99,000 contractor employees. Section 3161 of the National Defense Authorization Act for Fiscal Year 1993 requires DOE to develop a plan for restructuring the workforce for a defense nuclear facility when there is a determination that a change in the workforce is necessary. These plans are to be developed in consultation with the appropriate national and local stakeholders, including labor, government, education, and community groups. The act stipulates, among other things, that changes in the workforce should be accomplished to minimize social and economic impacts and, when possible, should be accomplished through the use of retraining, early retirement, attrition, and other options to minimize layoffs; employees should, to the extent practicable, be retrained for work in environmental restoration and waste management activities, and if they are terminated, should be given preference in rehiring; and DOE should provide relocation assistance to transferred employees and should assist terminated employees in obtaining appropriate retraining, education, and reemployment. 
While the act refers only to defense nuclear facilities, the Secretary of Energy determined that, in the interest of fairness, the workforce restructuring planning process would be applied at both defense nuclear facilities and nondefense facilities. DOE’s Office of Worker and Community Transition is responsible for coordinating restructuring efforts, reviewing and approving workforce restructuring plans, and reporting on the status of the plans. For fiscal years 1994 through 1998, DOE obligated and spent about $1.033 billion to provide benefits to contractor workers and communities affected by its downsizing efforts. At the end of fiscal year 1998, DOE had not used all workforce restructuring funds, resulting in a carryover balance of $72 million. These funds included $10 million that was unobligated and $62 million that was obligated but not yet spent (called uncosted balances). The Office of Worker and Community Transition and other DOE programs each provided about half the total funding. Combined, these programs spent about $853 million on worker assistance, and the remaining $179 million went to community assistance. Of the $1.033 billion spent on worker and community assistance, about $460 million was provided by the Office of Worker and Community Transition. Roughly two-thirds ($311 million) of the $460 million funded assistance to separated DOE contractor employees. More than $227 million, or 73 percent, of the $311 million was spent on one-time separation payments and early retirement incentives. The remaining third ($148 million) assisted local community transition activities, such as new business development. Over the years, the amount of funds available for community assistance has grown. In fiscal year 1994, this assistance accounted for only 6 percent of the funds spent by the Office of Worker and Community Transition. However, by fiscal year 1998, this assistance had grown to 68 percent of funds the Office spent. 
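The funding breakdown described above can be cross-checked arithmetically. A minimal sketch, using the report's rounded figures in millions of dollars (so totals match the reported $1.033 billion only approximately):

```python
# Worker and community assistance spending, FY 1994-1998,
# in millions of dollars (rounded figures from this report).
owct = {"worker": 311, "community": 148}    # Office of Worker and Community Transition
other = {"worker": 542, "community": 31}    # other DOE programs (defense, environmental management)

worker_total = owct["worker"] + other["worker"]            # reported as about $853 million
community_total = owct["community"] + other["community"]   # reported as about $179 million
grand_total = worker_total + community_total               # about $1.033 billion (rounding)

print(worker_total, community_total, grand_total)
```

The categories reconcile: the two funding sources sum to the reported worker and community totals.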
Meanwhile, overall appropriations for this Office have been declining, from a high of $200 million in fiscal year 1994 to $61 million in fiscal year 1998. At the same time, most of the uncosted balances are attributed to community assistance. Of the $62 million in uncosted balances at the end of fiscal year 1998, almost $51 million was for community assistance. Over half of these balances are for communities surrounding two facilities: $14 million at Oak Ridge and $13 million at Savannah River. The remaining $573 million came from other DOE programs, such as defense and environmental management. According to the Office of Worker and Community Transition, about $542 million of this amount was spent on worker benefits, and the remaining $31 million was spent on community assistance. DOE provided separation benefits to about 88 percent of the 5,469 defense facility contractor employees separated during fiscal years 1997 and 1998. While DOE generally offered these employees a wide range of benefits, the value of the benefits varied because of differences in benefit packages among the sites and in the employees’ length of service and base pay. DOE offered its separated contractor workers severance packages that were relatively consistent with the types of public and private sector benefits we analyzed. Although we did not compare the value of the benefits offered to DOE contractor employees with all of the benefits offered by the other public and private employers we reviewed, the benefit formulas in some DOE workforce restructuring plans potentially allow more generous benefits than those offered for federal civilian employees. While the 1993 act focused benefits on defense facilities, DOE provided separation benefits to most of its separated contractor employees. Of the 5,469 contractor workers separated during fiscal years 1997 and 1998, 4,788 received separation benefits.
According to DOE, the remaining 681 workers had relatively low seniority and were not eligible for benefits. DOE decided that in the interest of fairness, similar benefits should also apply to contractor workers separated at nondefense facilities. According to our analysis of 10 defense facility workforce restructuring plans for fiscal years 1997 and 1998, almost all plans offered the same types of benefits. While DOE guidance has been updated periodically, the criteria for separation benefits were derived primarily from the fiscal year 1993 legislation. DOE’s criteria require that workforce restructuring plans for each facility minimize impacts for all workers and recognize a “special responsibility” to Cold War workers. One of these criteria is to minimize layoffs through early retirement incentives, voluntary separations, and retraining. However, if layoffs are to occur, the restructuring plans are to provide for adequate notification and funding for education, relocation, and outplacement assistance. DOE criteria were not prescriptive and gave field offices substantial autonomy to determine benefit levels. These plans had to be approved by the Secretary of Energy. For fiscal years 1997 and 1998, we found that the 10 plans we reviewed offered the same types of benefits. Separation benefits were provided under three types of programs: enhanced retirement, voluntary separation, and involuntary separation. Enhanced retirement provided for full retirement benefits with fewer years of eligibility or service. One plan had provisions that enhanced workers’ eligibility by adding 3 years to both their age and years of service. Nine plans had some type of separation payment based on length of service and base pay for those employees voluntarily or involuntarily separated. 
All plans also included extended medical benefits, which require the contractor to pay its full share of a separated employee’s medical insurance payments for the first year after separation and half the contribution during the second year. In all plans, educational assistance was available, usually for up to 4 years after separation. All of the plans included outplacement assistance, some of which consisted of resume-writing workshops, job bulletin boards, and employment search strategies—many provided by an outside contractor. A hiring preference for involuntarily separated workers at other DOE contractors’ work sites was provided for in 8 of the 10 restructuring plans. The other two plans did not offer rehiring preference because they did not call for involuntarily separating any contractor workers. Eight plans included relocation assistance. While DOE generally offered its separated contractor employees the same types of benefits, the value of these benefits varied because of the differences in the packages among sites and employees’ length of service and base pay (which reflects employee job and skill level). For example, in fiscal year 1997, the restructuring plan for the Portsmouth Gaseous Diffusion Plant in Ohio (which covered facilities in both Portsmouth and Paducah, Kentucky) based voluntary separation pay on years of service, with a limit of $25,000 per worker. Lawrence Livermore National Laboratory in California based its voluntary separation pay on years of service, with employees receiving 2 weeks’ pay for each year of service, subject to a limit of 52 weeks. With the 52-week limit on separation payments, Lawrence Livermore’s average voluntary separation payment of $43,939 exceeded Portsmouth’s cap of $25,000. Table 1 identifies the lowest and highest average benefit amount offered separated contractor workers at defense nuclear sites for fiscal year 1998. 
For example, the lowest average voluntary separation benefit was $5,523 (at the Fernald facility in Ohio) and the highest was $64,907 (at Sandia National Laboratory in New Mexico). The table also identifies the number of separated workers receiving benefits among DOE’s defense facilities and the average cost of these benefits. For example, 748 employees at eight sites received voluntary separation payments that averaged $23,659 per worker. DOE generally offered its separated contractor workers benefits that were similar to those offered in public and private sector severance packages—such as education assistance and preference in rehiring. However, some of DOE’s voluntary separation benefits were greater than those offered federal employees. For example, the formula for extended medical coverage and the provisions for relocation assistance offered by DOE were more generous than the benefits offered to separated federal civilian employees. Table 2 shows the types of benefits generally offered and compares these generic benefits with the benefits offered in DOE’s workforce restructuring plans, the plans offered by DOE contractors in the absence of DOE’s plans, and the plans offered by the military, the federal government to its civilian employees, DOD contractors, and 25 other public and private sector organizations, including DOE-provided information on a survey of private company benefits. We did not compare the value of the benefits offered to DOE contractor employees with all of the other benefit packages offered by the public and private employers we considered. However, table 2 shows that formulas in DOE’s workforce restructuring plans allow for potentially more generous benefits than offered in some of the other benefit plans highlighted in the table. For example, we noted that some of DOE’s workforce restructuring benefits had formulas that could provide more benefits than the amount separated federal civilian employees could expect to receive. 
Some of DOE’s benefit formulas would allow for larger severance payments than do federal civilian packages. Voluntarily separated federal civilian employees received a one-time severance payment of 1 week of salary per year for up to 10 years’ service and 2 weeks of salary per year for more than 10 years’ service, with an adjustment for age. This benefit was paid out in a lump sum and was capped at $25,000. In contrast, while half of the DOE defense workforce restructuring plans we reviewed for fiscal years 1997 and 1998 had caps based on weeks of pay, these caps could exceed $25,000, depending on a contractor worker’s base pay and years of service. As a result, seven workers who received voluntary separation payments at one DOE defense facility averaged $64,907 each in fiscal year 1998. Furthermore, 65 percent of the 748 employees voluntarily separated during fiscal year 1998 received an average separation payment of over $25,000. Among the DOE plans we reviewed, one plan offered enhanced retirement benefits that added years to a contractor worker’s age and eligibility to allow for early retirement without penalty and with a cash payment. While federal workers could retire early and receive a separation payment, they were not given added years of age or eligibility and their annuity amount was reduced. In addition, the formula for extended medical coverage and the provisions for relocation assistance offered by DOE were more generous than the benefits offered to separated federal civilian employees. For extended medical coverage for eligible contractor workers, DOE pays the full employer cost for the first year of separation and about half of that cost in the second year. Separated federal workers who are eligible and wish to retain extended medical coverage must pay the full cost, plus an administrative fee, for the coverage upon separation.
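The contrast between the two severance formulas can be sketched in code. This is an illustrative comparison only: the federal formula below omits the age adjustment the report mentions, and the second function reflects the Lawrence Livermore voluntary separation terms described earlier (2 weeks' pay per year of service, capped at 52 weeks of pay). Function names are ours, not DOE's or OPM's.

```python
def federal_severance(weekly_pay, years_of_service):
    """Federal civilian lump-sum severance as described in this report:
    1 week of pay per year for the first 10 years of service, 2 weeks per
    year thereafter, capped at $25,000. (Age adjustment omitted here.)"""
    weeks = min(years_of_service, 10) + 2 * max(years_of_service - 10, 0)
    return min(weeks * weekly_pay, 25_000)

def livermore_severance(weekly_pay, years_of_service):
    """Lawrence Livermore voluntary separation formula: 2 weeks of pay per
    year of service, capped at 52 weeks of pay (no dollar cap)."""
    weeks = min(2 * years_of_service, 52)
    return weeks * weekly_pay

# A 20-year employee earning $1,500 per week:
print(federal_severance(1500, 20))    # 30 weeks -> $45,000, capped at $25,000
print(livermore_severance(1500, 20))  # 40 weeks -> $60,000
```

A cap expressed in weeks of pay, unlike a flat dollar cap, scales with base pay, which is why higher-paid, long-service DOE contractor workers could receive payments well above the federal $25,000 ceiling.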
The use of DOE’s criteria does not result in the most assistance going to the communities most affected by DOE’s downsizing or those with the highest rate of unemployment. Several communities with low unemployment rates and comparatively fewer DOE job losses received more funds than did communities that had higher rates of unemployment and lost more DOE jobs. Unlike DOE’s criteria, the criteria used by the Department of Commerce’s Economic Development Administration (EDA) include specific provisions for determining the distribution of economic assistance on the basis of local unemployment and job loss. In applying EDA’s criteria to the eight communities that received DOE assistance, we found that only four would have received funds at the time of the decision. Furthermore, because most DOE assistance went to communities with relatively strong economies, the extent to which DOE’s assistance aided in the creation or retention of jobs is not clear. DOE’s criteria do not result in the most assistance going to the communities most affected by the Department’s downsizing. DOE’s community assistance guidance has evolved since the program’s inception in 1993. DOE’s February 1997 Policy and Planning Guidance for Community Transition Activities refined the Department’s criteria for evaluating all project and program funding requests in community transition plans. DOE requires communities requesting funds to submit plans describing the impact of the Department’s downsizing. These plans “may be based upon community needs and may incorporate an analysis of the socio-economic strengths, weaknesses, opportunities, and threats.” In developing their plans, communities are asked to identify the primary and secondary economic impacts likely to result from DOE’s downsizing.
Communities are instructed to use local information sources to establish a baseline of primary impacts and project factors, such as net job loss, changes in unemployment, loss of wages and disposable income, and business closings. In addition, communities should identify secondary impacts, such as decreases in tax revenues and property values. Although DOE requires communities to develop plans that include economic impact, DOE focuses its review on the merits of a plan’s individual projects, not on a community’s relative economic need. DOE uses a number of written criteria to evaluate individual projects. These include the project’s ability to create at least one job for each $10,000 to $25,000 received and to provide jobs for separated DOE workers, induce investment or growth in the production of goods and services, and reduce the community’s dependency on DOE. In addition to DOE’s written guidance, the Director of the Office of Worker and Community Transition told us that DOE formally uses four criteria prior to submitting a recommendation to the Secretary: (1) economic distress measured by unemployment and the loss of income; (2) job loss relative to the size of the community affected as a measure of economic dependence on DOE; (3) the diversity of employment within a community and the impact of job loss on the economic base; and (4) the overall size of the workforce reduction. However, while the Director said that these are formal criteria, they are not published in the Department’s guidance nor are the communities evaluated against these four criteria in the memorandums sent to the Secretary for funding approval. After completing its review, DOE submits a community’s plan to EDA for its independent review. Under the National Defense Authorization Act of 1998, EDA is required to review and approve DOE’s community plans. However, rather than using its own criteria, EDA evaluates the community plans using DOE’s criteria, set out in DOE’s February 1997 guidance. 
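One of DOE's written project criteria above, at least one job created for each $10,000 to $25,000 received, amounts to a cost-per-job test. A minimal sketch, assuming (our interpretation, not DOE's written guidance) that the range is read as an upper bound on the cost per job:

```python
def meets_job_creation_criterion(funds_received, jobs_created, max_cost_per_job=25_000):
    """Return True if the project creates at least one job for each
    max_cost_per_job dollars received, i.e., cost per job is at or below
    the ceiling. Treating the $10,000-$25,000 range as a $25,000 ceiling
    is an assumption for illustration."""
    if jobs_created <= 0:
        return False
    return funds_received / jobs_created <= max_cost_per_job

print(meets_job_creation_criterion(1_000_000, 50))  # $20,000 per job -> True
print(meets_job_creation_criterion(1_000_000, 20))  # $50,000 per job -> False
```

Note that this test evaluates a project in isolation; it says nothing about the relative economic need of the community receiving the funds, which is the gap this report identifies.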
Table 3 shows the relative disparity between DOE’s assistance to the affected communities and communities’ unemployment rates or job losses. For example, the communities surrounding Rocky Flats had an average unemployment rate of 3.3 percent for fiscal years 1995 through 1998, lost 2,922 contractor jobs, and received about $25 million in DOE assistance. In contrast, the communities surrounding Richland had more than twice the unemployment rate and nearly twice the job loss of Rocky Flats during this same time but received only about $18.5 million in community assistance. Applying EDA’s job loss and unemployment criteria to DOE’s community assistance funding decisions for fiscal years 1995 through 1998, we found that some communities that received assistance under DOE’s criteria would not be eligible under EDA’s criteria. EDA—which helps communities recover from the effects of job losses—has threshold criteria for its economic assistance that are based on job loss and unemployment. Under EDA’s regulations in effect during this period, communities in a standard metropolitan statistical area suffering from sudden and severe economic distress were eligible for EDA’s assistance if, among other things, they met one of the following tests: (1) the area’s unemployment rate was equal to or less than the national average and 1 percent of the employed population, or 8,000 jobs, were lost or (2) the area’s unemployment rate was greater than the national average and .5 percent of the employed population, or 4,000 jobs, were lost. While EDA’s internal guidance further stated that employees subject to DOE downsizing were eligible for assistance, this provision was not a legal requirement until February 1999. 
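The two EDA threshold tests described above can be expressed directly in code. A sketch of our reading, in which each "or" means that either the percentage or the absolute job-loss figure suffices (the report's phrasing does not spell out which governs):

```python
def eda_eligible(area_unemployment, national_unemployment, jobs_lost, employed_population):
    """Sudden-and-severe-economic-distress eligibility for a standard
    metropolitan statistical area, per the two tests described in this
    report. Unemployment rates are percentages."""
    loss_share = jobs_lost / employed_population
    if area_unemployment <= national_unemployment:
        # Test 1: unemployment at or below the national average;
        # requires losing 1 percent of employed population or 8,000 jobs.
        return loss_share >= 0.01 or jobs_lost >= 8_000
    # Test 2: unemployment above the national average;
    # requires losing .5 percent of employed population or 4,000 jobs.
    return loss_share >= 0.005 or jobs_lost >= 4_000
```

For example, the Rocky Flats communities (3.3 percent unemployment against the 5.19 percent national average for the period, 2,922 jobs lost) would fail test 1 unless the area's employed population were under about 292,000, which is consistent with the finding that some DOE-funded decisions did not meet EDA's criteria.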
Using EDA’s criteria to assess DOE’s funding decisions for the eight communities that received assistance for the fiscal year 1995 through 1998 period and where comparable data were available, we found that nine of the 21 decisions (some communities had more than one funding decision), representing four of the eight communities, did not meet these criteria. Appendix IV shows this analysis. These nine decisions provided about $51 million to five communities surrounding the Mound, Pinellas, Nevada, Oak Ridge, and Rocky Flats facilities. The remaining 12 decisions provided about $57 million to the four other communities surrounding the Los Alamos, Nevada, Richland, and Savannah River facilities. In the Secretarial decision memorandums we reviewed, DOE justified awarding some of its funds on the basis of economic conditions at the county level and impacts on the economic diversity of the communities surrounding a facility, rather than on the standard metropolitan statistical areas. However, these criteria are not in DOE’s written guidance. Since 1993, jobs in the national economy have grown rapidly, bringing unemployment rates to their lowest levels in decades. Because of the strong national and local economies, DOE’s contribution to job growth was uncertain in communities that received its assistance. For example, table 3 shows that six of the eight communities (excluding communities surrounding the Fernald and Idaho facilities) that received community assistance had a local unemployment rate lower than the national average of 5.19 percent for the 1995 through 1998 period. As discussed in appendix III, defining DOE’s contribution to community job creation is difficult because job creation measurements have not differentiated between jobs that DOE created, those created by other assistance, or those created by the economy as a whole. 
While determining DOE’s contribution to overall job growth is difficult, comparing the number of jobs created in the local communities with the ones DOE reports it has created or retained provides a rough measure of DOE’s impact. In doing this comparison, we found that DOE’s contribution had a relatively small impact on the growth of jobs in three of the six communities surrounding nuclear defense facilities for which we had comparable data. For the six sites for which comparable data on local job creation were available, DOE was responsible for about 1.8 percent of the total jobs created. For example, although the overall economy in the Denver area surrounding the Rocky Flats facility created 170,367 jobs, DOE’s contribution to that growth was 1,191 jobs, or .7 percent. However, in Richland, DOE’s contribution appears to be more significant. At this location, DOE contributed to about 36.1 percent of the job growth. Table 4 compares the increase in the number of jobs created in local economies with the number of jobs that were created or retained by DOE’s community assistance program. While DOE estimated that it helped to create or retain 8,392 jobs in the communities surrounding the sites listed in table 4, it is difficult to directly link DOE’s community assistance to job creation and retention. To illustrate this point, the Director of DOE’s Office of Worker and Community Transition mentioned the difficulty in showing a direct relationship to job creation at the Bridgestone/Firestone, Inc. plant near Savannah River. Bridgestone/Firestone, Inc. is investing $435 million in a new tire facility that will eventually employ 800 workers. The company received assistance from DOE as well as from other government sources; however, without a strong national economy, it might not have expanded its tire production. DOE’s criteria for assessing community assistance requests focus on the merits of individual projects and not on a community’s relative economic need. 
This focus has resulted in some communities with relatively lower job losses or unemployment rates receiving more financial assistance than those with higher job losses or unemployment rates. The most effective and efficient use of federal resources would be to provide relatively more funding to those communities that have a greater need. Need-based criteria, such as those used by the Department of Commerce’s Economic Development Administration, exist for DOE to use in developing an allocation formula that targets funds to the communities with the greatest need. Furthermore, if DOE believes that other factors, such as diversity of employment within a community, more accurately reflect the economic impact of DOE restructuring, then it needs to identify these factors in its criteria. In addition, DOE should demonstrate that these other factors document the best allocation of community assistance resources to those with the greatest economic need. In order to target financial assistance to those communities that need it the most, we recommend that the Secretary of Energy revise the Department’s criteria for administering community assistance so that aid is more focused on economic need. One way of doing this would be to develop community financial assistance criteria similar to those used by the Economic Development Administration in its existing guidance. These could include such factors as a community’s unemployment rate and the impact of federal job loss on the local economy. We sent a draft of this report to the Department of Energy for its review and comment. The Department stated that the draft report inaccurately portrayed its worker and community transition program because it contained numerous factual errors and inappropriate comparisons.
First, the Department questioned our recommendation because it believes that the criteria it uses for providing community transition assistance are consistent with the statutory direction provided by the Congress and the regulations developed by the Department of Commerce. Furthermore, the Department said that it does consider economic need in awarding community assistance grants. We are not disputing the criteria’s conformance with statute or regulation. However, we believe that these criteria could be improved. While approval memorandums for individual projects discuss some of the affected communities’ economic conditions, DOE’s written criteria do not. For example, DOE’s March 18, 1998, memorandum allocating $4.5 million for fiscal year 1998 for assistance to communities surrounding the Department’s Portsmouth facility, found that a four-county area surrounding the facility experienced unemployment rates double the state’s average and that one in four people in this area lived in poverty. If DOE believes such county-level economic factors are important, then it needs to make these factors part of its written criteria for allocating community assistance. DOE should also demonstrate that these factors document the best allocation of community assistance resources to those with the greatest economic need. Therefore, we believe that DOE’s criteria could be improved by explicitly describing the economic factors it will consider in determining relative need when allocating funds among affected communities. Second, the Department said that the benefits it provides to separating contractor employees were consistent with the practices of other private and public organizations and are comparable in value. On the basis of additional information provided by DOE, we revised our report to show that the types of benefits offered were reasonably consistent with the practices of other private and public organizations. 
We did not compare the value of the benefits offered to DOE contractor employees with all the other benefit packages offered by the public and private employers we reviewed. However, some of the formulas in DOE’s workforce restructuring plans, such as those determining voluntary separation benefits and extended medical coverage, potentially allow for more generous benefits than those offered in some of the other benefit plans we describe in the table. DOE’s comments and our evaluation of them are provided in appendix V. To determine the amount of funds DOE has obligated and expended in support of its worker and community assistance program for fiscal years 1994 through 1998, we reviewed budget records and talked to officials in DOE’s Office of Worker and Community Transition and the Office of the Chief Financial Officer. To determine who received benefits during fiscal years 1997 and 1998 and to compare the types of benefits with the benefit packages of other federal and private organizations, we reviewed program criteria and reports from the Office of Worker and Community Transition, federal laws, and Office of Personnel Management publications governing federal civilian and military benefits. In addition, we reviewed DOE’s workforce restructuring plans for nuclear defense facilities for fiscal years 1997 and 1998, GAO and DOE Inspector General reports, the National Defense Authorization Act of 1993, and other relevant legislation. We also discussed with DOE officials the benefits provided under their restructuring efforts. However, we did not attempt to compare the value of DOE’s benefits with the value of the benefits provided by other federal and private organizations. To examine the results of DOE’s criteria for determining which communities should receive assistance, we interviewed officials in DOE and the Department of Commerce’s Economic Development Administration. 
We also reviewed DOE’s policy, operating guidelines, and documentation of the approval process; the interagency agreement between DOE and Commerce; and individual communities’ transition plans. We obtained economic information from an online database containing Department of Labor and Department of Commerce statistics. We used these statistics in conjunction with the statistics provided in DOE’s Office of Worker and Community Transition annual reports for fiscal years 1993 through 1998. To describe the contractor workforce in terms of length of service for Cold War workers and non-Cold War workers, we used data that the Office of Worker and Community Transition requested from its contractors’ databases. This information identified those individuals who were separated during fiscal years 1997 and 1998 and those currently employed at defense facilities. To analyze the extent to which the methodology used in a 1998 consultant study can be relied upon to evaluate the number of jobs DOE created or retained through its worker and community assistance program, we reviewed the study and the consultant’s supporting workpapers. We also interviewed the consultant’s investigators. We did not independently verify the data provided by DOE, its contractors, or DOE’s consultant. The consultant verified a sample of DOE’s job creation data. Data on community assistance and job creation and retention are contained in DOE’s annual reports to the Congress on its workforce restructuring activities. We used Department of Labor data, which are commonly used for such estimates, to estimate job growth in surrounding communities. We conducted this work in accordance with generally accepted government auditing standards from January 1999 through April 1999. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days after the date of this letter. 
At that time, we will send copies of this report to Senator Ted Stevens, Chairman, and Senator Daniel Inouye, Ranking Minority Member, Subcommittee on Defense, Senate Committee on Appropriations; and Representative Jerry Lewis, Chairman, and Representative John Murtha, Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report were Jeffrey Heil, Tim Minelli, Robert Antonio, Greg Hanna, Kendall Pelling, and Sandy Joseph. As table I.1 shows, the Department of Energy (DOE) separated 5,469 defense nuclear workers during fiscal years 1997 and 1998, with Cold War workers—those workers hired on or before September 27, 1991—accounting for 4,094 of the separations and non-Cold War workers—those hired after September 27, 1991—accounting for 1,375 separations. For all separated workers, the overall average length of service was 8.6 years. Cold War workers averaged 14.6 years of service overall, ranging from an average of 8 to 26.5 years among the 13 sites. Non-Cold War workers averaged 2 years of service overall, ranging from an average of 1.1 to 4.9 years among the sites. The percentage of Cold War workers separated at individual sites ranged from 100 percent to 33 percent. DOE data show that contractor employees who were voluntarily separated had more years of service than those who were separated involuntarily in fiscal years 1997 and 1998. The Cold War workers who voluntarily separated had an average of 18 years of employment. The Cold War workers who were involuntarily separated had 10.5 years of employment. Overall, the non-Cold War workers separated averaged 2 years of employment. Non-Cold War workers who voluntarily separated averaged 3.4 years of employment, while those who were involuntarily separated averaged only 1.7 years of employment. 
Figure I.1 shows the lengths of service for these groups of workers. Figure I.2 shows that the number of involuntary separations has been increasing as a percentage of all separations. Between fiscal year 1995, when most of the restructuring actions took place, and fiscal year 1998, the percentage of involuntary separations increased from 27 percent to 56 percent. DOE reported that because the number of older, eligible individuals in the workforce has decreased, there is a trend toward a greater use of involuntary separations. In table II.1, DOE data show that the remaining 76,010 defense nuclear workers reflect roughly the same percentage of Cold War and non-Cold War workers as the recently separated workforce. The overall average length of service is 14 years, 16.7 years for Cold War workers and 4.4 years for non-Cold War workers. Individual site averages ranged from 12.6 to 20.2 years for Cold War workers and from 2.1 to 5 years for non-Cold War workers. At individual sites, the percentage of Cold War workers ranged from 33 percent to 91.3 percent. Under the National Defense Authorization Act for Fiscal Year 1998, the Secretary of Energy was required to have an independent auditing firm study the effects of DOE’s workforce restructuring plans. Booz-Allen & Hamilton, Inc., which was awarded the contract, issued its report on September 30, 1998. While the study’s methodology reasonably estimates the number of jobs that DOE “was helping” to create or retain, it is difficult to know the extent to which DOE should receive full credit for these jobs because the consultant was not asked to (1) measure the impact of other assistance in creating or retaining jobs or (2) analyze the extent to which a strong economy helped to produce these jobs. 
The consultant’s report, Study of the Effects of the Department of Energy’s Work Force Restructuring and Community Transition Plans and Programs, was based upon the consultant’s visits to affected DOE sites, related communities, and their new businesses. The consultant verified and/or estimated that about 22,000 jobs were created or retained in those communities. The act required that the study include an analysis of the number of jobs created by any employee retraining, education, and reemployment assistance and any community impact assistance provided in each workforce restructuring plan. However, the consultant used the category job retention because DOE collected information for jobs retained and one of the objectives of the act that originally authorized the worker transition program was, to the extent practicable, to retain workers in other jobs at the site to avoid layoffs. DOE defined created jobs as those that did not previously exist and retained jobs as those that held the existing work force in place and provided substitute employment for at-risk or displaced workers within a defined geographic area. The consultant’s report concluded that DOE had a positive impact on mitigating the social and economic impacts of the DOE transition by helping to create or retain more than 22,000 jobs. While this methodology provides reasonable results for the jobs created or retained, the consultant’s scope of work did not include an analysis of (1) the impact of other assistance in creating or retaining jobs and (2) the extent to which the strong economy helped to produce these jobs. First, the methodology did not include the impact of other assistance. Both the consultant and DOE acknowledged the difficulty in estimating job creation and retention for specific programs. Therefore, the consultant and DOE both used the qualifier that the Department’s program “was helping” to create or retain these jobs. 
The Director of DOE’s Office of Worker and Community Transition told us that it is difficult to directly link program stimulus to job creation and retention. To illustrate this point, Bridgestone/Firestone, Inc. is investing $435 million in a new tire facility that will eventually employ 800 workers near Savannah River. South Carolina, Aiken County, the Department of Commerce, and DOE are also contributing funding for infrastructure development in support of this facility. In this case, DOE and three other government entities each helped to create these jobs. Second, it is difficult to separate DOE’s contribution to job creation from the effects of a strong economy. Since 1993, jobs in the national economy grew rapidly, bringing unemployment rates to their lowest levels in decades. While Bridgestone/Firestone, Inc. received government assistance, the company may not have been looking to expand its tire production capacity without a strong national economy in which to sell its tires. Furthermore, the local economy can be a significant factor in creating jobs. As discussed earlier in the report, table 4 shows the relatively small impact DOE had on job creation in some communities. Our comments on DOE’s two main assertions are summarized in the body of the report. In its comments, DOE asserted the following: The report draft did not accurately portray the Department’s Worker and Community Transition Program and contained numerous factual errors that, along with inappropriate comparisons, raise basic questions about the validity of the recommendation and major findings. The Department’s criteria are consistent with statutory direction and Department of Commerce regulations, and the benefits provided to separated employees were consistent with the practices of other private and public organizations. In this appendix, we address each of the comments made in the attachment to DOE’s letter. 
In addition, DOE provided us with additional detailed comments that elaborated on the points made in the attachment to its formal response. We used this supplemental material where appropriate to revise our report. 1. DOE challenges our recommendation for four reasons. First, the Department commented that its criteria for awarding community financial assistance are consistent with the congressionally mandated criteria of the Economic Development Administration Reform Act of 1998 and ensure that aid is focused on economic need. The act makes communities affected by DOE’s defense-related reductions eligible for the Economic Development Administration’s (EDA) assistance, regardless of the local unemployment rate or the per capita income in the affected communities. The Department commented that its criteria are consistent with the act, but it appears that DOE’s claim to consistency is based on a provision of the act that allows communities affected by DOE’s defense-related funding reductions to qualify for assistance. However, the act was not effective until February 11, 1999. Furthermore, DOE’s guidance does not have any economic threshold criteria for determining affected communities’ need. Most other communities that suffer economic hardships not caused by defense-related funding reductions are required to meet economic threshold criteria, such as an unemployment rate above the national average. Second, DOE commented that EDA must approve each community proposal before funding is provided and that economic need criteria are a key factor in its approval process. While EDA assesses the economic condition of the DOE community applying for assistance, the degree of, or relative, economic need is not a criterion in determining funding levels. We noted in our draft report that DOE submits the community plans to EDA for independent review and approval. 
However, EDA reviews the community plans using DOE’s criteria for reviewing projects and programs, set out in DOE’s Policy and Planning Guidance for Community Transition Activities. These criteria address projected job creation from the project, the amount of local participation in the project, and the ability of the project to become self-sufficient, not whether the communities requesting assistance meet threshold economic need. Third, DOE notes that its Policy and Planning Guidance for Community Transition Activities contains explicit criteria for ensuring that economic assistance is provided to communities suffering economic hardship. Furthermore, DOE added that each Secretarial decision memorandum approving community assistance formally addresses the economic need fulfilled by the funding to be provided. As we noted in our draft report, DOE’s criteria focus on the merits of the community’s individual projects, such as projected job creation, and not on the community’s relative economic need. Our analysis shows that communities differ in their degree of economic strength, and DOE’s criteria for determining community assistance funding do not result in the most assistance going to the communities most in need. We do note that several Secretarial decision memorandums included a general discussion of economic conditions, including job losses, and loss of economic diversity. For example, the June 1997 decision for Rocky Flats stated, “Although unemployment in Colorado is comparatively low, new jobs are being created primarily in retail and service industries, not the high-wage manufacturing and engineering sectors. Wage growth is not keeping pace.” However, none of the memorandums we reviewed considered threshold criteria or relative economic need. 
Fourth, DOE notes that a 1998 independent audit found, “The principal criteria for providing assistance to DOE sites and adjacent communities was degree of need, driven by how many workers were impacted by the transition.” On the basis of our review of Secretarial memorandums, we concur that the primary consideration for determining assistance was that workers were separated. However, our analysis shows that there was no correlation between the actual number of workers separated and the amount of assistance provided to communities. 2. DOE reports in its table 1 that each community it provided with community assistance met at least one economic threshold criterion established “by the Congress for such assistance.” We disagree with DOE’s response on several points. First, DOE’s table 1 uses criteria that did not exist at the time the Department made its funding decisions. These congressionally mandated criteria, which included the DOE special need criterion, were not effective until February 11, 1999. However, our analysis applies economic threshold criteria, such as those used by EDA, to show funding decisions based on relative economic need. We used the administration’s economic threshold criteria that were in existence during fiscal years 1995 through 1998, when the bulk of DOE’s community assistance money was allocated. When we applied these criteria, the communities surrounding the Los Alamos, Richland, Savannah River, and Nevada facilities (one of the three decisions for the Nevada facility) met EDA’s criteria for economic need. Second, DOE’s analysis misapplies EDA’s economic threshold criteria in two ways. DOE’s comments applied EDA’s 1999 criteria to individual counties around its Los Alamos and Oak Ridge facilities. If the facility is located within a standard metropolitan statistical area, then that area should be used to determine eligibility. 
As noted in the report, EDA uses standard metropolitan statistical area data when determining funding eligibility for communities located in these statistical areas. By using the larger standard metropolitan statistical areas as provided for in EDA’s guidance, our analysis is more likely to reflect the total impact of separating workers in the communities surrounding those facilities. If DOE believes that the county-level analysis more accurately reflects the economic impact of its restructuring than does the use of metropolitan statistical areas, then it may want to consider using counties’ economic strength in its community assistance allocation criteria. Additionally, DOE’s comments use the unemployment rate only for the year in which the majority of the workforce restructuring occurred at each DOE facility and compare it with the average national unemployment rate for that year. This provides a comparison for only one year out of the six that community assistance programs have been in existence. As shown in appendix IV, if economic and DOE restructuring information are compared against the appropriate administration criteria for each funding decision made since the beginning of fiscal year 1995 (soon after the Office of Worker and Community Transition was created), only four sites (Richland, Los Alamos, and Savannah River, and one allocation decision for the Nevada facility) would have been eligible for funds. 3. According to DOE’s comments, our table showing funding allocations to communities for the period 1995 through 1998 contained a basic factual error by including funds that were spent since the beginning of the program. The data contained in table 3 of our draft report were derived from community assistance allocation figures contained in the Office of Worker and Community Transition’s annual reports. Since the receipt of DOE’s comments, the Office provided us with figures for the 1995-98 period. 
Table 3 has been revised accordingly but still shows that communities with relatively low unemployment rates generally received more funds per worker than those with higher rates of unemployment. According to DOE, using data for comparable periods (1995 through 1998) yields starkly different results for total community assistance funding and funding per job lost. Even with the revised allocation figures, we disagree with DOE for two reasons. First, to support its assertion, DOE commented that its table shows that communities generally received between $5,000 and $10,000 per employee separated. However, DOE’s table shows a wide disparity in the range of community assistance per job lost—ranging from $949 to $14,601. Importantly, DOE’s table does not show the allocation amounts alongside the communities’ unemployment rates. For example, the communities surrounding the Mound facility had an overall unemployment rate of 4.13 percent for the 1995-98 period and received $10,302 in community assistance per separated worker. In contrast, the communities surrounding the Richland facility, which had an unemployment rate of 7.92 percent, received only $3,098 per separated worker. Even among communities with comparatively low unemployment rates, our revised table 3 shows that there is a wide range of community assistance allocations. For example, the communities surrounding the Oak Ridge and Rocky Flats facilities had aggregate unemployment rates of 4.17 percent and 3.33 percent, respectively, and separated roughly the same number of workers—2,832 and 2,922, respectively. However, the communities surrounding Oak Ridge received $5,932 per separated worker versus $8,500 per separated worker for communities around the Rocky Flats facility. Finally, DOE states that Richland received less funding because its downsizing started later than in other communities. 
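The per-worker comparison above can be reproduced with a short calculation. This is a sketch using only the four site figures cited in this appendix; the data layout and variable names are illustrative, not from the report.

```python
# Community assistance per separated worker versus local unemployment,
# using the four site figures cited above (1995-98 period). The data
# structure is illustrative; the numbers come from the report text.
sites = {
    "Mound":       {"unemployment_pct": 4.13, "aid_per_worker": 10302},
    "Richland":    {"unemployment_pct": 7.92, "aid_per_worker": 3098},
    "Oak Ridge":   {"unemployment_pct": 4.17, "aid_per_worker": 5932},
    "Rocky Flats": {"unemployment_pct": 3.33, "aid_per_worker": 8500},
}

# List sites from highest to lowest unemployment. If assistance tracked
# economic need, per-worker amounts would fall as we move down the list.
by_need = sorted(sites, key=lambda s: sites[s]["unemployment_pct"], reverse=True)
for name in by_need:
    d = sites[name]
    print(f"{name}: {d['unemployment_pct']}% unemployment, "
          f"${d['aid_per_worker']:,} per separated worker")
```

Sorted this way, the highest-need community in the sample (Richland) shows the lowest per-worker allocation, which is the pattern the revised table 3 illustrates.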
The fact that some facilities started their restructuring earlier than others may help explain some of the disparity in the allocation of community assistance funds. Nevertheless, because of the criteria DOE uses in providing community assistance, the disparity in the allocation of funds is not likely to be made up over time. In addition, the Secretary’s memorandums approving community assistance allocations generally do not describe the communities’ economic conditions, nor do they discuss threshold or relative economic need in the decisions to fund community development. 4. DOE asserted that our comparison of the assistance provided to Richland and Oak Ridge was inaccurate for two reasons—incorrect allocation and unemployment data. First, as discussed under comment 3, we incorporated DOE’s community assistance figures. Even though Richland received more community funding than Oak Ridge, it received less per worker separated—Richland received $3,098 per job lost and Oak Ridge received $5,932 per job lost. Second, DOE challenged our analysis of these two facilities by using a single county’s (Roane) unemployment data for its Oak Ridge facility. As discussed in our second comment, this is a misapplication of EDA’s criteria. Following EDA’s criteria, we used the standard metropolitan statistical area for our analysis. Using the unemployment rate for the standard metropolitan statistical area surrounding Oak Ridge, rather than the unemployment rate for Roane County, results in an unemployment rate for Oak Ridge of 4.2 percent instead of 7.3 percent. Furthermore, DOE’s May 9, 1997, Secretarial memorandum justifying $10 million in community assistance does not even discuss Roane County. However, as discussed in comment 2, if DOE believes that the county-level analysis more accurately reflects the economic impact of DOE’s restructuring than does the use of the standard metropolitan statistical area, then it should include this factor in its community assistance criteria. 
5. DOE states that we inaccurately reflect how it assists workers displaced by defense-related reductions. It cites the consultant’s study that shows DOE’s program helped create more than 22,000 jobs. Like the consultant’s study, our draft report concurred that DOE helped to create and retain these jobs. However, the consultant’s study did not provide information on the extent to which DOE should receive credit for the jobs created and retained. We noted in the draft report that the DOE data contain jobs created and retained, while the local employment data we used from the Bureau of Labor Statistics include only jobs created. Therefore, our analysis is likely to overstate the impact of DOE’s job creation efforts in any given area. Furthermore, the consultant’s study did not measure the impact of other assistance in creating or retaining jobs, or analyze the extent to which a strong economy helped to produce these jobs. We maintain that DOE’s contribution had a relatively small impact on the overall growth of jobs in three of the six communities surrounding nuclear defense facilities for which we had comparable data. However, for three other communities, our draft shows that DOE contributed significantly to job growth. 6. DOE commented that our draft report incorrectly characterized enhanced retirement offerings. DOE provided us with additional information comparing its enhanced retirement offerings with those of other organizations, and we have revised the report accordingly. However, the formula for extended medical coverage and the provisions for relocation assistance offered by DOE were more generous than the benefits offered to separated federal civilian employees. For extended medical coverage for eligible contractor workers, DOE pays the full employer cost for the first year of separation and about half of that cost in the second year. 
Separated federal workers who are eligible and wish to retain extended medical coverage must pay the full cost, plus an administrative fee, for the coverage upon separation. DOE also commented that 17 of the 25 public and private sector employers identified in our 1995 report offered enhanced retirement. DOE’s interpretation is not exact. The report states that 17 of the 25 organizations offered early retirement programs and at least 10 of these programs offered some incentive for early retirement. The incentives generally gave employees credit for a specified number of years of service and/or a specified number of years added to their age; however, nine organizations also imposed penalties on the annuities of early retirees. 7. DOE said that the draft report is factually incorrect concerning involuntary separation benefits. DOE provided us with additional information on involuntary separation benefits offered at other organizations, and we revised our draft accordingly. 8. DOE contends that its management contractors offered extended medical benefits before the enactment of the worker and community transition program. The Office of Worker and Community Transition has since provided us with information supplementing its official comments indicating that a medical benefits program for displaced workers was approved by the Secretary of Energy on July 29, 1992. According to DOE’s comments, these benefits are limited to contractor-separated employees who cannot obtain coverage through an employer or spouse. We have revised our report accordingly. DOE also commented that our draft report did not include the wide range of additional benefit categories offered by other organizations. Based on DOE’s comments we revised table 2 that compared DOE benefits with other public and private sector severance packages offered from fiscal years 1993 through 1998. 
The revised table provides more detail on the benefits that were offered and the number of organizations that we identified as offering them. However, the benefit formulas in some of DOE’s workforce restructuring plans, such as those determining voluntary separation benefits and extended medical coverage, potentially allow more generous benefits than those offered for federal civilian employees. 9. DOE’s comment focuses on the overgeneralization of the data presented in table 2 of our draft report. This table compared DOE benefits with other public and private sector severance packages offered from fiscal years 1993 through 1998. DOE asserted that, overall, the frequency with which DOE contractors offered classes of benefits has not been substantially different from the frequency offered by other employers captured by private surveys. We agree and revised this table, as noted in comment 8. Finally, DOE commented that only a limited number of its sites offered some benefits. However, we note that DOE did not count benefits offered to its workforce when fewer than 10 individuals, or 1 percent of the separated workers, received benefits. Furthermore, DOE stated that because of qualification requirements, a large number of separated DOE workers were not provided with certain benefits, even when offered at a site. While these qualifications may preclude some separated workers from receiving a specific benefit, the benefit was still offered at a specific site.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a legislative requirement, GAO reviewed the Department of Energy’s (DOE) community assistance program for minimizing the impact of downsizing its contractor workforce, focusing on: (1) how much funding DOE had committed to spend and spent in support of its worker and community assistance program for fiscal years (FY) 1994 through 1998; (2) who received benefits during FY 1997 and FY 1998; (3) comparing DOE’s separation benefits with the benefit packages of other federal and nonfederal organizations; and (4) what effect DOE’s criteria had on determining which communities received assistance. 
GAO noted that: (1) DOE's assistance to separated contractor workers is reasonably consistent with the types of benefits offered by other government and private sector employers; (2) however, its community development assistance funds did not necessarily go to those communities most affected by downsizing or with the highest unemployment; (3) for FY 1994 through FY 1998, DOE obligated and spent about $1.033 billion on benefits for the contractor workers and communities affected by its downsizing; (4) about $853 million was spent on worker assistance and the rest on community assistance; (5) about $460 million of the $1.033 billion was provided by DOE's Office of Worker and Community Transition and the remainder by other DOE programs; (6) at the end of FY 1998, DOE had a carryover balance of $72 million, including $10 million in unobligated funds and $62 million in funds that were obligated but not yet spent; (7) most of the contractor workers separated during FY 1997 and FY 1998 received benefits under DOE's workforce restructuring program; (8) while DOE generally offered its separated contractor employees a large range of benefits, the value of the benefits varied widely, primarily because of the differences in benefits packages among sites and in the employees' length of service and base pay; (9) these benefit packages are reasonably consistent with the types of benefits offered by public and private employers; (10) DOE's community assistance criteria, which focus on the merits of individual projects and not on relative economic need, do not necessarily result in the most assistance going to the communities most affected by its downsizing or with the highest unemployment; (11) for example, for FY 1995 through FY 1998, the communities surrounding DOE's Richland, Washington, facility had more than twice the unemployment rate and nearly twice the DOE job loss of those surrounding the Rocky Flats, Colorado, facility, but Richland received $18 million less than the 
$24 million that Rocky Flats received; (12) had the Department of Commerce's Economic Development Administration unemployment and jobs lost criteria been used to evaluate the request for community assistance, Rocky Flats would have been ineligible for funding, given the strength of its employment; (13) in addition, 5 of the 8 DOE sites that received community assistance would have been ineligible under these criteria; and (14) furthermore, because most DOE assistance went to communities with relatively strong economies, the extent to which DOE's assistance aided in creating or retaining jobs is not clear. |
The Under Secretary of Defense (Personnel and Readiness)—who reports to the Deputy Secretary of Defense—is responsible for developing the DOD instruction for the conscientious objector application process and for monitoring all of the DOD components for compliance with the departmentwide instruction. The Secretaries of the components, or their designees, are responsible for implementing the process and for making final decisions on whether to approve or deny conscientious objector applications. According to Coast Guard officials, the Coast Guard’s Director of Personnel Management is responsible for overseeing its conscientious objector application process, including maintaining the instruction. However, the Director of Human Resources makes the final decision on whether to approve or disapprove conscientious objector applications. The Director of Personnel Management reports to the Director of Human Resources and—through the chain of command—to the Commandant of the Coast Guard. According to guidance and regulations established by the components, in order to be granted conscientious objector status, servicemembers must submit clear and convincing evidence that (1) they are opposed to participation in any form of war; (2) their opposition is based on religious, ethical, or moral beliefs; and (3) their beliefs are sincere and deeply held. These regulations do not recognize selective conscientious objection, that is, opposition to a specific war or conflict. The components’ regulations recognize two categories of applicants for conscientious objector status. A class 1-O applicant sincerely objects to all participation in any form of war and is discharged if the application is approved. A class 1-A-O applicant sincerely objects to participating as a combatant in any form of war but has convictions that permit military service as a noncombatant. 
With the exception of the Army and its reserve components, the components have the discretion to either reassign an approved class 1-A-O conscientious objector to noncombatant duties—if they are available—or discharge the servicemember. Army regulation states that servicemembers approved for 1-A-O status are not eligible for discharge. These servicemembers continue to serve the remainder of their contract and, when necessary, they are retrained in an occupational specialty that does not require them to bear arms. DMDC, which is a support organization within DOD that reports to the Under Secretary of Defense (Personnel and Readiness), maintains various types of data on military personnel, dating back to the early 1970s, such as separations data on servicemembers discharged as conscientious objectors. The majority of these data are provided to DMDC by the military components and are the source for the separations information now being provided to Congress. DMDC’s mission is to deliver timely and high-quality support to its customers and to ensure that the data it receives from different sources are consistent, accurate, and appropriate when used to respond to inquiries. DMDC customers include DOD organizations such as the Armed Forces, the Office of the Secretary of Defense, and the Joint Staff, as well as external organizations, such as Congress. These organizations rely on data supplied by DMDC to help them in making decisions about the military. DMDC’s Active Duty Military Personnel Transaction File and the Reserve Components Common Personnel Data System contain information about servicemembers who separate from the Army, the Navy, the Air Force, the Marine Corps, and the Coast Guard, and from their reserve components. 
The VA is responsible for providing a broad range of federal benefits and services to veterans and their families, working through the field facilities of its three major organizations located throughout the United States: The Veterans Health Administration manages and operates VA’s medical care system and administers health care benefits. The Veterans Benefits Administration manages and operates VA programs that provide financial and other forms of assistance to veterans, their dependents, and their survivors. This organization administers disability compensation, pension, vocational rehabilitation and employment, education and training, home loan guaranty, and life insurance benefits. The National Cemetery Administration operates 125 national cemeteries in the United States and its territories. It also oversees the operations of 33 soldiers’ lots, confederate cemeteries, and monument sites. The Board of Veterans’ Appeals is a statutory board that makes decisions on appeals under the authority of the Secretary of VA. Members of the board review benefit claims determinations made at the field facilities and issue decisions on appeals. (See fig. 1 for VA’s organizational structure.) In 1993, we reported that between fiscal years 1988 and 1990, DOD processed up to 200 applications annually for conscientious objector status and that about 80 to 85 percent of these applications were approved. During the Persian Gulf War, which was fought in fiscal year 1991, the number of applications rose to 447, and about 61 percent were approved. We noted in that report that, though the number of applications more than doubled in fiscal year 1991, it was small compared to the total number of military personnel, indicating that conscientious objectors had no measurable impact on the readiness of the all-volunteer force. 
Despite possible understatement, the numbers of known applications for conscientious objector status for calendar years 2002 through 2006 were relatively small compared to the size of the force, which is approximately 2.3 million servicemembers. (See app. II for a detailed description of the methods we used to determine data reliability.) Of the 425 applications for conscientious objector status the components reported that they processed during this period, 224, or about 53 percent, were approved; 188, or about 44 percent, were denied; and 13, or about 3 percent, were pending, withdrawn, closed, or no information was provided. Further, these data show that the overall number of reported applications for conscientious objector status increased in 2003 and 2004 and then dropped in 2005 and 2006 (see table 1). DMDC-provided data similarly show a small number of separations, or discharges, for conscientious objectors. See appendix III for more information from the DMDC-provided separations data. The application approval rate was 55 percent for the Army, 84 percent for the Navy, 62 percent for the Air Force, 33 percent for the Marine Corps, and 33 percent for the Coast Guard. The application approval rate was 44 percent for the Army Reserve, 58 percent for the Army National Guard, and 44 percent for the Marine Corps Reserve. Although 188 applications were denied, these applications were submitted by only 186 servicemembers, because two servicemembers applied twice. Of the 186 servicemembers whose applications were denied, 114 (about 61 percent) were discharged for other reasons; 62 (about 33 percent) were still serving in the military; and there is no information about the remaining 10 (about 5 percent). 
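As a quick arithmetic check, the reported application counts and rounded percentages are internally consistent. A minimal sketch in Python, using only the totals stated above:

```python
# Component-reported conscientious objector applications, CY 2002-2006,
# using the totals stated in the text.
total = 425
approved, denied, other = 224, 188, 13  # "other": pending, withdrawn, closed, or no information

# The three categories account for every reported application.
assert approved + denied + other == total

def pct(n):
    """Share of all applications, rounded to the nearest whole percent."""
    return round(100 * n / total)

print(pct(approved), pct(denied), pct(other))  # 53 44 3
```

The rounded shares sum to 100 percent here, though rounding can make such breakdowns sum to slightly more or less in general.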
Of the 114 servicemembers who were discharged for other reasons, 33 (about 29 percent) separated after completion of their service contract; 21 (about 18 percent) were discharged for misconduct; 22 (19 percent) were separated for medical reasons; 22 (about 19 percent) were separated for miscellaneous reasons, including substandard performance and hardship; and 16 (about 14 percent) did not have a code designating the reason for the discharge. All components of the Armed Forces follow the same basic steps to administer their conscientious objector application processes. Figure 2 illustrates the eight steps in the process. As shown in the process flowchart, the components attempt to reassign an applicant to noncombatant duties while an application is pending. Officials responsible for the conscientious objector process for each component said that the commanding officer reassigns the applicant. While temporarily assigned to noncombatant duties, an applicant must continue to meet the standards of acceptable conduct and performance of duties, such as wearing the uniform and following orders. If noncombatant duties are unavailable, an applicant must continue to fulfill the duties within the unit. Officials from the active and reserve components of the Air Force and the Marine Corps stated that, in the event that an applicant’s unit is deployed while the application is pending, the applicant will not be deployed. In contrast, officials for the other components said an applicant may deploy with his or her unit at the discretion of the commanding officer or authorized official. We inquired about the extent to which psychiatrists or clinical psychologists are readily available to interview and evaluate the mental condition of the applicants. The components’ visibility over the availability of psychiatrists and clinical psychologists varied. 
Army, Army National Guard, Army Reserve, Air Force, and Air Force Reserve officials reported that they were not aware of any difficulties in obtaining a psychiatric or psychological evaluation. Navy and Marine Corps officials said that they did not have visibility over this issue for either their active or reserve components, because responsibility for obtaining such evaluations resides at the unit level. An Air National Guard official said that the component has a limited number of personnel who can conduct such an evaluation and that when one of these professionals is not available locally, the process may be delayed. Coast Guard officials said that in remote units in the active and reserve components where a psychiatrist or clinical psychologist is not readily available, processing is delayed. In addition, each component’s process includes provisions to allow the applicant to be (1) represented by legal counsel, (2) given the opportunity to rebut the evidence in the record before the authorized official makes a final decision, and (3) given an explanation if the application is denied. According to their regulations, all components allow an applicant to obtain and pay for outside legal counsel. In addition, officials responsible for the conscientious objector process for the Army, the Navy, the Navy Reserve, the Air Force, the Air Force Reserve, the Marine Corps, and the Marine Corps Reserve said that an applicant has access to free legal advice from these components’ legal offices. Each component provides an applicant with the opportunity to rebut information included in the record. The applicant submits the rebuttal prior to the final processing of the application. The time frame to submit a rebuttal varies among the components and ranges from 5 to 15 days. 
On the basis of data provided by the components for calendar years 2002 through 2006, the military services took an average of about 7 months to process an application—this includes the time allowed for applicants to submit their rebuttals. The Air Force Reserve typically took the longest amount of time to process an application, at an average of nearly a full year (357 days), while the Navy’s processing time averaged about 5 months (160 days). According to component officials, processing may be prolonged when, for example, applications must be returned to the unit or the applicant for additional information. As stated earlier, Air National Guard and Coast Guard officials said that personnel who can conduct psychiatric or psychological evaluations are not always readily available and that this may prolong the processing time. Coast Guard officials also stated that, because they receive so few applications, it is necessary for officials located in the field offices to reeducate themselves about the process each time, which may prolong processing time for the applications. Table 3 shows average application processing times by component. According to the components, the commanding officer typically informs an applicant if he or she has or has not met the burden of proof necessary to establish the claim. In addition, officials for the Army, the Air Force, the Marine Corps, the Coast Guard, and their reserve components stated that when an application has been denied, the applicant is sent a memorandum providing additional detail on the reason for the decision. Generally, applications are denied when the servicemember has not provided clear and convincing evidence supporting his or her claim of conscientious objection. Each of the components—with the exception of the Army and its reserve components—has the discretion to reassign an approved 1-A-O conscientious objector to a noncombatant duty—if one is available—or discharge the servicemember. 
In contrast, according to Army regulation, 1-A-O conscientious objectors in the Army and its reserve components are not eligible for discharge. According to Army officials, these servicemembers continue to serve the remainder of their service obligations, and when necessary are retrained in occupations that do not require them to bear arms. In general, in accordance with component policies, servicemembers separated as conscientious objectors may be granted honorable or under honorable conditions (general) discharges, thereby making them eligible to receive the same benefits as other discharged servicemembers. Army, Navy, and Air Force regulations state that conscientious objectors must be given one of these two types of discharge. The Marine Corps and the Coast Guard do not specify what type of discharge must be assigned to conscientious objectors; rather, their regulations state that the type of discharge should be determined by the member’s overall service record. In accordance with VA guidance, conscientious objector status generally is not considered when determining eligibility for any of the benefits VA offers; the primary determinant for these benefits is the characterization of discharge. All servicemembers separated with an honorable or an under honorable conditions (general) discharge are eligible for the same VA benefits, with the exception of Montgomery GI Bill-Active Duty Education and Training benefits. Whether discharged as a conscientious objector or for other reasons, a servicemember must receive an honorable discharge to be entitled to Montgomery GI Bill-Active Duty Education and Training benefits. In addition to the characterization of discharge, a servicemember may have to meet other eligibility requirements—including years of service, period of service (e.g., during a period of war), or an injury or disease that was incurred or aggravated during military activity— to receive certain VA benefits. 
Table 4 provides an overview of the VA benefits available to veterans and the basic eligibility requirements for each. To apply for VA benefits, a veteran submits an application to a veterans’ claims examiner or other qualified VA employee at a VA field facility, where it is reviewed to ensure that it is complete and that the applicant meets basic eligibility requirements. If it is determined that the veteran does not meet basic eligibility requirements (i.e., the characterization of discharge is not honorable or under honorable conditions (general)), then the examiner or other qualified VA employee will notify the veteran that he or she is not entitled to benefits. The veteran can then (1) seek an upgrade in the characterization of his or her discharge through the military component and, if successful, provide the revised discharge papers to VA or (2) provide the examiner or other qualified VA employee with evidence of mitigating circumstances that could lead VA to revise its determination of the veteran’s eligibility. Even if the veteran does not provide additional information, the examiner or other qualified VA employee will review the veteran’s military personnel and service record to determine if (1) there were mitigating circumstances surrounding the discharge; (2) there is a period of service, other than the one for which the veteran was discharged, upon which the benefits may be based; or (3) despite the characterization of discharge, the veteran’s service was faithful or meritorious. For example, if it is determined after a review of the military personnel and service record that the veteran received an under other than honorable conditions discharge because of an absence without official leave to see a dying parent, the veteran may still receive VA benefits. 
If an examiner or other qualified VA employee determines that a veteran with an under other than honorable conditions or bad conduct discharge is not eligible for most VA benefits, the veteran may still be eligible for health care for any disability incurred or aggravated in the line of duty during active service, unless the veteran is barred from receiving VA benefits. If the veteran’s military personnel or service record indicates that he or she refused to perform military duties, wear the uniform, or comply with lawful orders of a competent military authority while the conscientious objector application was pending, the veteran is barred from receiving VA benefits. The decision of the examiner or other qualified VA employee applies not only to those benefits that the veteran was requesting at the time of the decision but also to any future benefits he or she may seek, except for education and training, for which the discharge must be honorable. A dishonorable discharge automatically disqualifies a veteran from receiving benefits; the examiner or other qualified VA employee does not make decisions on dishonorable discharges. A veteran who disagrees with the decision has 1 year to file an appeal with the VA Board of Veterans’ Appeals. When the case comes before the board, the veteran may be represented by legal counsel. If the board decides in favor of the veteran, the veteran will be awarded the benefit in question. If the board upholds the decision to deny benefits, the veteran can appeal to the U.S. Court of Appeals for Veterans Claims, which is an independent court and not part of the VA. Of the 224 servicemembers who were approved for conscientious objector status during calendar years 2002 through 2006, 207 (92 percent) were granted honorable discharges; 14 (6 percent) were granted under honorable conditions (general) discharges; and no information on the discharges of the remaining 3 (1 percent) was available (see table 5). 
DOD, the Department of Homeland Security, and VA were provided a draft of this report and had no comments on the findings. The Department of Homeland Security and VA provided technical comments, which were incorporated as appropriate. We will send copies of this report to interested Members of Congress, the Secretary of Defense, the Secretary of Homeland Security, and the Secretary of Veterans Affairs. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In calendar years 2002 through 2006, 81 percent of the applicants were enlisted males. In addition, the majority of male applicants were between the ages of 21 and 25. The most common occupational area for the applicants was general infantry (which includes weapons specialists and special forces, among others), and most of the applicants also had between 1 and 4 years of service. On the basis of the information shown in table 7, we determined that 83 percent of male applicants for conscientious objector status were 30 years old or younger. Eighty-four percent of female applicants were 30 years old or younger. Table 8 shows that 43 percent of applicants had 1 to 2 years of service, and 32 percent had 3 to 4 years of service. On the basis of component-provided data, we were able to determine that during calendar years 2002 through 2006, 154 of the 202 applicants for whom these data were provided had participated in Operation Noble Eagle (ONE), Operation Enduring Freedom (OEF), or Operation Iraqi Freedom (OIF) (see table 9). 
Of the 154 who served in these operations, 153 were from Army or Marine Corps components. Our review of component-provided data found that servicemembers who applied for conscientious objector status worked in a variety of occupational areas. The top five occupational areas for the 377 enlisted servicemembers for calendar years 2002 through 2006 were general infantry, which includes weapons specialists, ground reconnaissance specialists, special forces, and military training instructors, with 42 applicants; other functional support, which includes supply accounting and procurement, transportation, flight operations, and related areas, with 16 applicants; medical care and treatment, which includes surgical and therapy specialists, with 16 applicants; security, which includes specialists who guard weapon systems, defend installations, and protect personnel, equipment, and facilities, with 14 applicants; and combat engineering, which includes specialists in hasty and temporary construction of airfields, roads and bridges, and in demolition, field illumination, and chemical warfare, with 14 applicants. Of the 33 officer applicants, the three largest occupational types included 6 applicants whose occupations were designated as unknown (i.e., officer unknown occupation); 5 who were in ground and naval arms, which includes infantry, artillery, armor and close support officers, and naval ship commanders and other warfare-related officers; and 3 who were in the occupational area of student officers, which includes law students, medical students, and other trainees. To meet our first objective—to identify trends in the number of servicemembers applying for conscientious objector status during calendar years 2002 through 2006—we obtained data from each of the components. We did not report data between September 11, 2001, and December 31, 2001, as directed in the mandate, because several of the components were unable to provide data for this time period. 
The Army National Guard and the Air Force did not provide any data to us for this period of time. Navy officials reported receiving five applications in 2001, but they said that they were not confident that this information was accurate. We found that the data provided by the components could underrepresent the total number of applications for conscientious objector status because applications could be withdrawn during the application process before they reach the headquarters level. However, we believe that the data are sufficiently reliable to demonstrate overall trends in the numbers of applications that were approved and denied during calendar years 2002 through 2006. The Defense Manpower Data Center (DMDC) does not maintain separate data on the numbers of applications for conscientious objector status; however, it does maintain data on personnel, including demographics and reasons for separation, dating back to the early 1970s. We therefore used DMDC data for these purposes. To assess the reliability of all data presented in this report, we obtained an understanding of the sources of the data and the file structures. Specifically, we (1) performed electronic testing of the data variables for completeness (that is, duplicative and missing data); (2) assessed the reasonableness of the data by comparing data provided by the components with data provided by DMDC; (3) reviewed existing information about the systems that produced the data; and (4) interviewed component and DMDC officials to identify known problems or limitations in the data, as well as to understand how data are received from each of the components and processed by DMDC. When we found discrepancies (for example, duplicate Social Security numbers), we worked with the appropriate components and DMDC to understand the reasons for the discrepancies. To meet our second objective—to determine how each component of the U.S. 
Armed Forces administers its process for approving or denying conscientious objector applications—we reviewed relevant guidance and regulations, including DOD’s instruction. We interviewed officials responsible for each component’s current practices for (1) reviewing conscientious objector applications, including the roles and availability of key personnel (e.g., chaplains and medical personnel); (2) reassigning servicemembers with pending applications; and (3) approving or denying servicemembers’ applications. Finally, we used component-provided data (e.g., application start dates) to calculate the average processing time for conscientious objector applications. To meet our third objective—to determine whether conscientious objectors are eligible to receive the same benefits that other servicemembers are eligible to receive after they are discharged from the military—we analyzed applicable laws and instructions from VA, DOD, and the components. We also interviewed VA, DOD, and component officials about the benefits available to conscientious objectors and other servicemembers upon discharge. We reviewed component-provided data to determine the characterization of discharge (e.g., honorable) received by the servicemembers separated as conscientious objectors. To obtain demographic information on applicants for conscientious objector status, we provided DMDC with applications data provided by the components; DMDC then matched this information to personnel data it maintains. In conducting this work, we contacted the appropriate officials from the following organizations (see table 10). We performed our work from November 2006 through August 2007 in accordance with generally accepted government auditing standards. 
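The electronic testing for completeness described in the methodology (checking for duplicative and missing data, such as duplicate Social Security numbers) can be sketched as a single pass over the records. The field name and sample values below are illustrative assumptions, not the components' actual file layouts:

```python
def completeness_check(records, key="ssn"):
    """Flag duplicated and missing values of a key field, in the spirit of
    the electronic testing for completeness described in the methodology."""
    seen, duplicates, missing = set(), [], []
    for i, rec in enumerate(records):
        value = rec.get(key)
        if not value:
            missing.append(i)         # record lacks the key field entirely
        elif value in seen:
            duplicates.append(value)  # same key value appears more than once
        else:
            seen.add(value)
    return duplicates, missing

# Illustrative records only; real inputs would be the component-provided files.
sample = [{"ssn": "A1"}, {"ssn": "A2"}, {"ssn": "A1"}, {}]
print(completeness_check(sample))  # (['A1'], [3])
```

Flagged values would then be resolved with the components and DMDC, as the report describes for discrepancies such as duplicate Social Security numbers.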
The Department of Defense (DOD) reported to Congress that 44 of the 197,786 servicemembers separated in fiscal year 2006 were discharged as conscientious objectors, and 39 of the 214,353 servicemembers separated in fiscal year 2005 were discharged for that reason. As reported, the numbers of servicemembers separated as conscientious objectors represent about two-hundredths of 1 percent of the total separations from DOD (see fig. 3). DMDC-provided data showed that 547 servicemembers were discharged as conscientious objectors between calendar years 1994 and 2006. The number of conscientious objectors has decreased from 61 in 1994 (during a period when the services were larger) to 46 and 36 during calendar years 2005 and 2006, respectively. These numbers are very small, given the size of the total force—approximately 2.3 million servicemembers. Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov. In addition to the contact above, Cynthia Jackson, Assistant Director; Minty M. Abraham; Kurt A. Burgeson; Fatema Z. Choudhury; Kenya R. Jones; Mitchell B. Karpman; Ronald La Due Lake; Joanne Landesman; Julia Matta; Lonnie J. McAllister II; Anna Maria Ortiz; Kimberly L. Perteet; Maria-Alaina I. Rambus; Beverly C. Schladt; Derek B. Stewart; and Jennifer M. Thomas made key contributions to this report. | Section 587 of the John Warner National Defense Authorization Act for Fiscal Year 2007 required GAO to address (1) the trends in the number of conscientious objector applications for the active and reserve components during calendar years 2002 through 2006; (2) how each component administers its process for evaluating conscientious objector applications; and (3) whether, upon discharge, conscientious objectors are eligible for the same benefits as other former servicemembers. GAO's review included the Coast Guard components. GAO compiled numbers of applications based on data provided by the Armed Forces. 
However, these numbers do not include the numbers of applications that are not formally reported to the components' headquarters. Also, the Defense Manpower Data Center does not maintain separate data on numbers of applications for conscientious objector status; it does maintain data on reasons for separation. GAO used these data to help assess the reasonableness of the component-provided data and to compile demographic data. During calendar years 2002 through 2006, the active and reserve components reported processing 425 applications for conscientious objector status. This number is small relative to the Armed Forces' total force of approximately 2.3 million servicemembers. Of the 425 applications the components reported processing, 224 (53 percent) were approved; 188 (44 percent) were denied; and 13 (3 percent) were pending, withdrawn, closed, or no information was provided. Each component considers applications from servicemembers who wish to be classified as conscientious objectors. Each component's process is essentially the same, taking an average of about 7 months to process an application. After the servicemember submits an application, arrangements are made for a military chaplain and a psychiatrist to interview the applicant. An investigating officer holds a hearing and prepares a report. An authorized official or board makes the final decision and informs the commanding officer, who informs the applicant that he or she has or has not met the burden of proof necessary to establish the claim. Officials from all the components stated that they attempt to temporarily reassign applicants to noncombatant duties while their applications are pending. Conscientious objector status is not considered when determining eligibility for benefits; the primary determinant is the type of discharge--honorable or under honorable conditions (general). 
Of those 224 servicemembers whose applications were approved for conscientious objector status, 207 received honorable discharges, 14 received general discharges, and information on the remaining 3 was not available. In addition to the characterization of discharge, a servicemember may have to meet other eligibility requirements--including years of service--to receive certain Veterans Affairs benefits. |
CPSC was created in 1972 under the Consumer Product Safety Act to regulate certain consumer products and address those that pose an unreasonable risk of injury; assist consumers in evaluating the comparative safety of consumer products; and promote research and investigation into the causes and prevention of product-related deaths, injuries, and illnesses. CPSC’s jurisdiction is broad, covering thousands of types of manufacturers and consumer products used in and around the home and in sports, recreation, and schools. CPSC does not have jurisdiction over some categories of products, including automobiles and other on-road vehicles, tires, boats, alcohol, tobacco, firearms, food, drugs, cosmetics, medical devices, and pesticides. Other federal agencies—including the National Highway Traffic Safety Administration; Coast Guard; Bureau of Alcohol, Tobacco, Firearms, and Explosives; Department of Agriculture; Food and Drug Administration; and Environmental Protection Agency—have jurisdiction over these products. Consumers and others previously were able to report safety problems or concerns about consumer products through CPSC’s toll-free hotline, the U.S. mail, or a form on CPSC’s website submitted through e-mail, and they can continue to use these methods in lieu of submitting reports through SaferProducts.gov. CPSIA and CPSC define “harm” as injury, illness, or death or risk of injury, illness, or death. 15 U.S.C. § 2055a(g); 16 C.F.R. § 1102.6(b)(4). As required by statute, CPSC disclaims any responsibility to guarantee the accuracy of a report. The submitter of an incident report on SaferProducts.gov must fit into one of five categories: (1) consumers; (2) local, state, and federal government agencies; (3) health care professionals; (4) child service providers; and (5) public safety entities. 
CPSC regulations specify that “consumers” include, but are not limited to, users of consumer products, family members, relatives, parents, guardians, friends, attorneys, investigators, professional engineers, agents of a user of a consumer product, and observers of the consumer products being used. CPSIA requires the following information when submitting a report of harm: (1) description of the consumer product sufficient to distinguish the product as a product or component part regulated by CPSC; (2) identity of the manufacturer or private labeler by name; (3) description of the harm related to use of the consumer product; (4) approximate or actual date of the incident; (5) category of submitter; (6) submitter’s contact information; (7) submitter’s verification that the information contained therein is true and accurate; and (8) consent to publication of the report of harm. 15 U.S.C. § 2055a(b)(2)(B); 16 C.F.R. § 1102.10(d). Subject to §§ 1102.24 and 1102.26, CPSC will publish reports of harm containing all the required information; CPSC is not required to contact the submitters for further information. Before publication, CPSC transmits a copy to manufacturers, importers, and private labelers identified in the reports, to provide them with the opportunity to comment. Qualifying reports and manufacturer comments submitted for publication are then available on SaferProducts.gov (see fig. 1). CPSC’s efforts to promote SaferProducts.gov formed part of a larger effort to increase the public’s awareness of the agency. CPSC has taken a variety of approaches to inform the public about SaferProducts.gov, many of which are consistent with key practices for consumer education planning. However, CPSC has not established metrics for its efforts. As a result, the agency does not know which of its efforts have had the most impact on increasing awareness and use of SaferProducts.gov.
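The eight minimum information elements that CPSIA requires in a report of harm amount to a simple completeness check before a report qualifies for publication. The sketch below is illustrative only: the field names are assumptions for this example, not CPSC's actual submission schema.

```python
# Hypothetical sketch of the CPSIA minimum-information check
# (15 U.S.C. § 2055a(b)(2)(B)). Field names are illustrative.

REQUIRED_FIELDS = [
    "product_description",   # (1) product/component regulated by CPSC
    "manufacturer_name",     # (2) manufacturer or private labeler
    "harm_description",      # (3) harm related to use of the product
    "incident_date",         # (4) approximate or actual date
    "submitter_category",    # (5) e.g., consumer, government agency
    "contact_information",   # (6) submitter's contact information
    "verified_true",         # (7) verification that the report is accurate
    "consent_to_publish",    # (8) consent to publication
]

def missing_fields(report: dict) -> list:
    """Return the required elements that are absent or empty in a draft report."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]
```

A report for which `missing_fields` returns an empty list would, under this sketch, carry all eight statutory elements.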
CPSC’s efforts to inform the public about SaferProducts.gov have been part of a larger effort to increase the public’s awareness of the agency. According to CPSC officials, certain segments of the public may not be aware of the agency or its mission in product safety, much less be aware of SaferProducts.gov. Consistent with this, only roughly one-third of the 37 consumers who participated in our website usability tests were aware of CPSC or its mission. To promote awareness of CPSC, officials have conducted public information campaigns related to various product safety hazards such as fire hazards and those involving children’s products, issued press releases about product recalls, and used social media. Officials said that media stories promoting the use of SaferProducts.gov have had the benefit of promoting CPSC as a resource not only for information about product recalls (for which the agency is most commonly known), but also as a place where consumers can raise concerns about the safety of consumer products. In addition to the outreach efforts noted above, CPSC has planned initiatives to assess the public’s awareness of the agency as a whole. In fiscal year 2011, CPSC’s Office of Communications received funds to award a contract to plan and conduct field surveys to assess consumer awareness of the agency. CPSC and a contractor are developing the survey tool. These surveys are to cover such areas as the public’s knowledge and awareness of the safety issues for which CPSC is responsible, how the agency’s work affects consumers, and how the public responds to product recalls and other safety hazards that CPSC communicates. CPSC officials told us that they plan to administer the survey in 2013, but have been awaiting approval of the survey from the Office of Management and Budget. CPSC also recently redesigned its main website, CPSC.gov, based on feedback from the public.
According to CPSC officials, this redesign allowed the agency to provide a more visible link to SaferProducts.gov. As it has for publicizing the agency, CPSC has used a variety of approaches to inform the public about SaferProducts.gov, including the use of social and other media. Before launching SaferProducts.gov, CPSC hosted a web conference on January 11, 2011, to inform interested stakeholders such as consumer groups and the public about the site’s search function and the information required to submit an incident report. Around the time it launched SaferProducts.gov in March 2011, CPSC promoted the new website through print and other media. According to CPSC officials, the agency’s promotional strategy emphasized both the public’s ability to search SaferProducts.gov for reports and to submit such reports. In addition, near the 1-year anniversary of SaferProducts.gov, CPSC launched three public service announcements (PSAs) about SaferProducts.gov, sending these PSAs to local and national media and making them available on online media channels, such as YouTube (http://www.youtube.com). According to CPSC officials, the agency has a contract with a video production company to produce and distribute the videos. CPSC officials said that the PSAs have been among the 10 most-viewed videos on CPSC’s YouTube channel. They added that it was difficult to attract extensive television coverage or the best airtime slots given CPSC’s PSA budget of about $50,000 for fiscal year 2012. Further, the officials said that PSAs can cost from $700,000 to $1 million to produce, distribute, and air during prime viewing or listening times. The agency also has distributed informational materials to target audiences at conferences and community events; referenced the site in speeches and presentations by the Chairman, Commissioners, and staff; and held press interviews to promote the site, according to CPSC officials.
For example, CPSC developed a series of brochures, including some tailored to specific professional sectors, such as health care, child care, public safety, and government. CPSC officials noted that they have mentioned the site at conferences, particularly those aimed at minority populations and professional groups. The agency also has made a data feed of the incident reports available to third-party software developers to create mobile applications and provided information for developers in a frequently asked questions page on SaferProducts.gov. In conducting its public information efforts, CPSC has employed a number of strategies consistent with key practices for consumer education planning that we identified in a prior report. For example, CPSC has worked with stakeholders such as consumer groups (a key practice) to promote SaferProducts.gov, and used a variety of media (another key practice) to promote the site. CPSC also has identified “messengers” such as consumer groups and state attorneys general to assist with publicity, and identified the resources needed for publicity (other key practices). Most of the consumer product safety experts we interviewed from nine groups representing consumers, researchers, and various industries stated that CPSC has been taking appropriate measures to promote the site. However, some also suggested that CPSC could conduct more targeted outreach to other professional groups, such as those in health care, and other populations, such as parents. While CPSC has employed many of the key practices for consumer education planning as described previously, it has not employed one of the key practices that could further improve the efficacy of its outreach for SaferProducts.gov. 
Specifically, CPSC has not established metrics, such as process and outcome metrics, to measure the success of its outreach efforts. In its 2013 performance budget request, as part of an effort to increase awareness of the agency, CPSC has a goal for the number of visits to CPSC.gov. However, CPSC does not have a similar goal for the number of visits to SaferProducts.gov, although it collects such data (as discussed in the next section of this report). Similarly, CPSC has not determined whether its efforts to publicize SaferProducts.gov at conferences or through PSAs have led to increased use of SaferProducts.gov after the events. CPSC also has not incorporated tools or features on the site (such as a drop-down menu on the homepage that would ask users to select an option such as “conferences,” “PSAs,” “printed materials,” or “media”) to identify how the user learned about and arrived at the site. The information generated by such tools also may provide CPSC with ideas for additional metrics to measure awareness and use of the site. CPSC has not established metrics to evaluate its outreach efforts for SaferProducts.gov because the agency has been focused on increasing awareness of CPSC and improving the functionality of CPSC.gov. CPSC officials said that in comparison with SaferProducts.gov, CPSC.gov received almost 10 times as many visits each month. Officials have said they may focus on evaluating outreach efforts for SaferProducts.gov in the future. However, without current metrics to assess the efficacy of its outreach for SaferProducts.gov, CPSC will not know which of its efforts—for instance, promoting the site at conferences and using PSAs—have had the most impact on increasing awareness and use of SaferProducts.gov, or be able to best target its limited resources to increase use of the site. CPSC collects limited data about the use of SaferProducts.gov.
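A drop-down of the kind described above ("How did you learn about this site?") would yield a straightforward process metric: a tally of responses by outreach channel, ranked so the most effective channels stand out. A minimal sketch, with assumed channel labels:

```python
# Illustrative outcome-metric tally for outreach channels.
# Channel names are assumptions for this example.

from collections import Counter

def referral_breakdown(responses):
    """Count survey responses per outreach channel, most common first."""
    return Counter(responses).most_common()
```

For example, `referral_breakdown(["PSA", "conference", "PSA", "media"])` ranks "PSA" first with a count of 2, giving a direct, comparable measure of which outreach effort is driving visits.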
To track use, CPSC collects data on the number of visitors, most frequently visited pages, and number of reports received, among other metrics. CPSC also collects some data about the category of person who is submitting a report. However, CPSC does not collect any data about who is using the site to search for information. In particular, CPSC has not sought to collect demographic data, such as age, gender, or income. In mandating this report, Congress required us to assess whether a broad range of the public uses the site. However, CPSC’s limited data collection related to use of the site made it difficult to conduct such an assessment. According to CPSC officials, the agency’s primary measure of the extent of use of SaferProducts.gov is the number of visitors each month. CPSC collects these data through web analytics software. According to CPSC’s data, visits to SaferProducts.gov exceeded 100,000 each month since June 2011, a few months after the launch of the site (see fig. 2), peaking at about 238,000 in November 2012. CPSC officials have not been able to identify the reasons for the increase in visits. CPSC also collects data on the most frequently visited pages each month (see fig. 3). These data show that users frequently used the site to search for information—for example, to search for recalled products or incident reports submitted by other users of the site. CPSC also collects data on the number of reports received each month through SaferProducts.gov, as well as by phone, e-mail, postal mail, and fax. These data show that users submitted more than 1,000 reports from all sources each month from March 2011 through December 2012 (see fig. 4). CPSC collects some data about the categories of persons using SaferProducts.gov to submit incident reports but does not collect additional data such as age, gender, or income level of the submitters or others who use the site to search for information. 
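The monthly visit counts described above are a basic aggregation over web-analytics records. A minimal sketch, assuming a simplified record format of (ISO date, visit count) pairs rather than any particular analytics export:

```python
# Sketch: roll daily analytics records up into monthly visit totals.
# The (iso_date, visits) record format is an assumption.

from collections import defaultdict

def visits_per_month(records):
    """records: iterable of (YYYY-MM-DD, visits); returns {YYYY-MM: total}."""
    totals = defaultdict(int)
    for iso_date, visits in records:
        totals[iso_date[:7]] += visits  # YYYY-MM prefix keys the month
    return dict(totals)
```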
When completing a report, CPSC requires submitters to state whether they are consumers, represent a government agency, or are health care or other professionals, among other categories of user. As shown in table 1, our analysis of more than 12,000 reports posted on SaferProducts.gov as of January 2013 found that most report submitters—about 97 percent—identified themselves as consumers, results consistent with our prior reporting. As stated previously, “consumers” include, but are not limited to, users of consumer products, family members, relatives, friends, attorneys, investigators, and others. Representatives of government agencies and public safety entities, as well as health care professionals, child service providers, and medical examiners and coroners also submitted reports. CPSC also asks report submitters to state their relationship to the victim of the incident (such as self, parent, or spouse). As shown in table 2, of those who identified themselves as consumers, most identified themselves as the victims of an incident. However, many submitters did not specify a relationship. Of those who did specify a relationship to the victim, 4,463, or 60 percent, reported that they were the victims, and 1,867, or 25 percent, reported that their child was the victim (see table 3). CPSC asks that submitters specify the location of the reported incident, including the country and state. Most submitters providing this information—about 90 percent—reported that the incident took place in the United States (see table 4). Submitters also reported that incidents took place in other countries or did not specify where the incident took place. In addition, states with the highest population—such as California, Texas, New York, Florida, and Illinois—had the most reported incidents (see fig. 5). Beyond these data, CPSC does not request or obtain additional details about the users of SaferProducts.gov. 
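The category breakdowns above amount to a simple tabulation: counting report submitters by self-identified category and expressing each count as a share of the total. A sketch of that computation, with made-up labels and proportions:

```python
# Sketch of the submitter-category tabulation described above.
# Category labels and proportions here are illustrative, not CPSC data.

from collections import Counter

def category_shares(categories):
    """Map each category label to its percentage of all reports."""
    counts = Counter(categories)
    total = sum(counts.values())
    return {c: round(100 * n / total, 1) for c, n in counts.items()}
```

Run over a list of labels where 97 of 100 reports say "consumer", this returns a 97.0 percent share for that category, the same kind of figure reported in table 1.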
According to CPSC officials, the agency also cannot distinguish new users from returning users because CPSC’s web analytics software has not been configured with “cookies” to capture these data. Officials have cited resource and privacy concerns as reasons for not collecting these data, although they said they have been considering using cookies in the future. In addition, CPSC does not collect more specific demographic information such as age, gender, or income level from the submitters of reports or other site users, citing an interest in minimizing the reporting burden on users. As an example, CPSC has not requested that site users voluntarily provide this information during the report submission process or after submitting a report. Congress required us to assess whether a broad range of the public uses SaferProducts.gov, but CPSC’s limited data collection made it difficult to conduct such an assessment. In addition, standards for internal control in the federal government state that agencies should have timely, relevant information for management decision-making purposes. As a result of its limited data collection about users of the site, CPSC has been limited in its ability to target its marketing and outreach efforts on specific groups, populations, or areas to achieve the goal of increasing use of the site. As discussed earlier in this report, our website usability tests focused on asking consumers in our testing sessions to judge if SaferProducts.gov was easy to use. We had the consumers perform various tasks (such as searching for recalled products and submitting mock incident reports) and asked for opinions about the site’s usefulness. A moderator facilitated the sessions and we elicited feedback from participants. In addition, a GSA official with expertise in website usability assessed SaferProducts.gov, and another GSA official reviewed the site for website accessibility.
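The visitor-cookie approach that CPSC officials said they were considering for distinguishing new from returning users can be sketched generically: if a request carries no visitor cookie, the user is treated as new and issued an identifier to send back on later visits. This is a generic web-analytics pattern, not CPSC's implementation, and the cookie name is an assumption.

```python
# Generic sketch of new-vs-returning visitor classification via a
# first-party cookie. The cookie name is hypothetical.

import uuid

COOKIE_NAME = "visitor_id"  # assumed name, not CPSC's

def classify_visitor(request_cookies: dict):
    """Return (status, visitor_id); issues a fresh ID to first-time visitors."""
    if COOKIE_NAME in request_cookies:
        return "returning", request_cookies[COOKIE_NAME]
    return "new", str(uuid.uuid4())  # server would set this as a cookie
```

Counting the "new" versus "returning" outcomes over time would yield exactly the repeat-use metric the agency currently lacks, though, as the officials noted, storing an identifier raises the privacy considerations that have so far weighed against it.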
Many consumers in our testing sessions generally found SaferProducts.gov easy to use, but they encountered difficulties with certain aspects of the two main functions: searching for information and submitting incident reports. Of the 37 consumers who participated in our testing sessions, 20 found SaferProducts.gov easy to use as indicated by their responses to a questionnaire we administered following each test session. For example, almost all the consumers were easily able to determine what initial steps to take to search for or report a product that may be unsafe. In addition, the expert evaluator reviewing the site at our request described the site as clean and easy to navigate. In conducting the search tasks, consumers generally were able to find recalled products using basic key word searches. But some search functions, including those that required more complicated searches such as use of an advanced search function to narrow results, posed challenges. For example, in one testing session, no consumers were able to complete a task that required them to narrow their search by injury, time period, and location. In another session, the calendar function, which filters the results by time period, posed particular challenges. Five of the eight consumers in that session experienced difficulties in having to enter and, when seeking to make one change, re-enter all the dates to focus their search on products recalled within a particular time period. The expert evaluator from GSA experienced similar challenges in using the calendar function. In addition, when asked to search for and compare safety information for two products—one for which there were search results and one for which there were none—almost all the testers had difficulty interpreting the lack of search results for the latter product. For example, while some testers assumed that a search for a product that produced no results indicated that the product was safe, others did not make this presumption.
In our testing sessions, most consumers were not sure which product to purchase based on their searches and roughly a quarter indicated that they would leave SaferProducts.gov to search other websites if they found no results on SaferProducts.gov. In contrast to SaferProducts.gov, other websites inform users of a possibly incorrect search term, such as a typographical error, which helps users interpret the results of their searches and identify potential errors. During the usability tests, consumers experienced fewer challenges using the reporting function than the search function. To submit an incident report, consumers must enter information on a series of pages that include a combination of required and optional fields. During our testing, consumers found the instructions for submitting a report to be generally clear. For example, almost all the testers thought the instructions for submitting information about the incident, product, and victims were clear. However, 15 of the 37 consumers in our test sessions expressed concern about apparently needing to register before submitting a report and generally did not notice that they could continue without registering (see fig. 6). By registering on SaferProducts.gov, site users can save their reports to complete at a later time and receive updates on the status of reports. When reaching the registration page, over a third of the consumers in our focus group sessions said that they would not be inclined to register. In one session, seven of nine consumers said they would not be inclined to register and thought that having to register was a deterrent to completing a report. In another session, none of the participants noticed the option to skip registration. Some of those who noticed that they could skip registration emphasized that the option should be more prominent—for example, placed alongside the registration box rather than below it where it might not be immediately visible. 
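The "possibly incorrect search term" behavior noted above, common on other websites, is typically a fuzzy match of the query against known terms, offered as a "did you mean" suggestion when a search returns nothing. A sketch under that assumption, with an illustrative vocabulary:

```python
# Sketch of a "did you mean" suggestion for no-result searches.
# The term list stands in for a real product-name index.

import difflib

KNOWN_TERMS = ["stroller", "crib", "space heater", "blender"]  # illustrative

def suggest(query: str):
    """Return the closest known term to the query, or None if nothing is close."""
    matches = difflib.get_close_matches(query.lower(), KNOWN_TERMS, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

A feature of this kind would help the testers described above distinguish "no results because of a typo" from "no results on record for this product."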
Likewise, as an issue of website usability, the expert evaluator from GSA reviewing the site on our behalf noted that the “continue without registering” option was not prominent enough and stated that registration may deter users from continuing the report submission process. In addition, some consumers in our testing sessions said that the reporting pages contained too many questions and described the submission process as cumbersome, particularly for busy individuals such as parents. To address this, one consumer suggested grouping all of the required fields on one page. The expert evaluator from GSA also suggested that all the questions in the reporting process should be reviewed to determine if each was necessary. When CPSC first developed SaferProducts.gov, the agency conducted focus groups—one with consumers and one with professionals—to test the site and assess users’ experience with it. CPSC’s focus group testing only addressed the incident reporting function, not the search function, and focused on (1) awareness of how and where to submit a safety complaint and (2) general reactions to the site. CPSC has not conducted additional usability testing since launching SaferProducts.gov in March 2011. As mentioned previously, CPSC officials have said that issues such as assessing the level of awareness of CPSC and redesigning CPSC.gov were higher priorities than assessing and improving SaferProducts.gov. A number of resources across the federal government are available to help agencies in making their websites more usable. For example, as cited previously, GSA’s First Fridays Usability Testing Program is designed to teach agency officials how to find and fix usability problems at no cost to the agency. The program’s services are (1) formal tests, (2) quick tests, (3) mobile tests, (4) observation, and (5) expert evaluation. GSA also offers DigitalGov University, which includes courses in web design and usability best practices.
In addition, the Department of Health and Human Services (HHS) operates two usability labs, both of which are free of charge to other federal agencies, to evaluate websites to ensure that they are easy to use and useful. Furthermore, GSA and HHS maintain HowTo.gov and Usability.gov, respectively, to provide guidance and resources to help agencies create websites that are usable, useful, and accessible. Because of the usability issues in the areas we identified, consumers may not take advantage of all the features of SaferProducts.gov, and consumers may be dissuaded from completing and submitting incident reports. As a result, CPSC may not be obtaining all possible information from consumers that can help inform its safety assessments and other regulatory efforts. None of the consumers in our test sessions previously had heard of SaferProducts.gov, although a few were familiar with CPSC as an agency involved in recalling certain products. In addition, 5 of the 37 consumers who participated in our tests said that the purpose of SaferProducts.gov was not clear based on its name and the initial information on the home page. In our testing sessions, roughly a third of the consumers commented that the name of the website—SaferProducts.gov—and the home page did not accurately convey what consumers could and could not do on the site. For example, when asked about their impressions of SaferProducts.gov, over a quarter of the testers thought that they would find information about safe products, such as a list of products that meet certain standards or a rating of products. These consumers did not appear to notice information on the home page indicating that they would only find information on unsafe products (see fig. 7). In our testing sessions, several consumers commented that the website would be more aptly named UnsafeProducts.gov.
In addition, during our one-on-one testing sessions in Washington, D.C., two of the testers had difficulty distinguishing between recall notices and incident reports, which serve different purposes. Likewise, although the expert evaluator from GSA was able to obtain a general sense of the purpose of the site, he noted that a tagline (brief text that gives users an immediate idea of what the site does) would help reinforce the site’s purpose. In one testing session, a few consumers also said that it was not apparent from looking at the home page that CPSC did not regulate certain categories of products, such as automobiles and medications, although more than half of the consumers in our testing sessions said at the outset that they routinely searched online for safety information on particular products. Similarly, none of the consumers in our testing sessions noticed that they could be directed to the agency’s main site, CPSC.gov, by clicking on certain links in SaferProducts.gov. Only the expert evaluator noticed that by opening a recall notice, the website user would leave SaferProducts.gov and go to CPSC.gov. However, as consumers completed the various tasks in the testing sessions, they better understood the website’s features and functions. In responding to our closing questions about their overall experiences in using the site, most said that they would use SaferProducts.gov again now that they were aware of it. For example, some consumers found information about product recalls the most useful component of the site and said they would give more weight to this information. In our testing sessions, about one-quarter of the consumers also found value in the incident reports, noting that they helped website users understand whether products that had not yet been recalled had safety issues. Two of the consumers commented that they found the content of the reports to be more credible than other websites that provide a forum for consumer complaints.
In addition, a few consumers pointed to the amount of detail in the reports, such as the incident description, location, and date as particularly helpful. Nevertheless, because of the usability issues in the areas we identified (for example, not having a clear and “up-front” statement of what the site contains and how it can be used), consumers may not use all of the site’s available features and be dissuaded from completing and submitting reports. CPSC officials also acknowledged that awareness of the agency could be heightened if consumers were informed about CPSC while using or searching SaferProducts.gov. CPSC has used many approaches to inform the public about SaferProducts.gov, employing many key practices of consumer education planning in the process. Incorporating the promotion of SaferProducts.gov into its broader effort to increase awareness of the agency has represented a logical approach that has prevented duplication. However, our work confirms CPSC’s perception that public awareness of SaferProducts.gov is likely low. For example, none of the participants in our usability tests had heard of SaferProducts.gov prior to the testing. Although CPSC has employed many of the key practices for consumer education planning, it has not established metrics to measure the success of its efforts. By establishing such metrics, the agency would be better able to determine which of its outreach efforts had the most impact on increasing awareness and use of the site and thus could more effectively target its limited resources to increase use of the site. In addition to establishing and using metrics, more data about the use of SaferProducts.gov could help CPSC target its marketing and outreach. Currently, CPSC collects limited data about the use of SaferProducts.gov. For example, it collects data on the number of visitors, but not whether they are using the site to search for information—one of the main functions of the site. 
It also does not collect demographic data about the users’ age, gender, or income. These types of data could help CPSC identify groups, populations, or areas on which to focus to further increase use of the site. Our usability testing with consumers identified other ways in which CPSC may increase the use of SaferProducts.gov. Although our testing revealed that many consumers found the site generally easy to use, it also revealed that certain search functions, site registration, and lack of a clear statement of purpose posed challenges for some users. By improving the site in these areas, CPSC could help ensure that consumers take advantage of all the features of the site and are able to search for and report information in an easy and convenient manner. Making these improvements also may provide CPSC with additional reports from consumers to inform its safety assessments and other regulatory efforts. To improve the awareness, use, and usefulness of SaferProducts.gov, CPSC should take the following three actions: establish and incorporate metrics to assess efforts to increase awareness and use of SaferProducts.gov, look for cost-effective ways of gathering additional data about the users and their use of SaferProducts.gov, and implement cost-effective usability improvements to SaferProducts.gov, taking into account the results of any existing usability testing or any new testing CPSC may choose to conduct. We provided a draft of this report to CPSC for review and comment. In commenting on the draft report, the Chairman and Commissioners stated that they support the report’s recommendations. Specifically, they stated that CPSC staff will look for cost-effective ways to improve awareness of SaferProducts.gov, improve the usability of the site based on research on best practices in web design, and gather additional metrics about users. The Chairman and Commissioners’ comments are reprinted in appendix III. 
CPSC also provided technical comments that we incorporated in the report as appropriate. We are sending copies of this report to interested congressional committees and to the Chairman and Commissioners of CPSC. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of our report were to examine (1) the Consumer Product Safety Commission’s (CPSC) efforts to inform the public about SaferProducts.gov, (2) who has been using the website and to what extent, and (3) the extent to which consumers have found the website to be useful. For the first objective, we reviewed CPSC marketing, budget, evaluation, and planning documents to determine the status of the agency’s public information efforts related to SaferProducts.gov. We sought to determine whom CPSC had been targeting; how, if at all, the agency was evaluating the outcomes of its efforts; and any future plans to promote awareness and use of the site. We compared CPSC’s efforts with criteria on key practices for consumer education planning. We interviewed consumer product safety experts from nine groups representing consumers, researchers, and various industries to determine what additional steps, if any, CPSC could take to better inform the public about SaferProducts.gov. We identified these experts through our prior work or based on recommendations from those we interviewed. For context, we interviewed an official in CPSC’s Office of Communications and reviewed CPSC’s strategies to increase awareness of the agency as a whole. See GAO-12-30. 
reliability and determined that, for the purposes of this report, the data were sufficiently reliable. We did not review specific incident descriptions that individuals filed and do not attest to the reliability of that information. For the third objective, we conducted website usability tests with 37 consumers—who represented a mix of demographic characteristics in terms of age, gender, and educational level—to obtain their views on how easy it was to use SaferProducts.gov and how useful they found the website. We conducted the tests in Washington, D.C., Dallas, Texas, and San Francisco, California. We chose these locations for geographic dispersion and ease of testing. We followed the protocols and used the Washington, D.C. facilities of the General Services Administration (GSA) for our testing conducted through the First Fridays Usability Testing Program. At this location, GSA recruited three volunteer testers on our behalf. Consistent with the GSA program protocols, a moderator facilitated the testers’ execution of various website tasks, such as searching for recalled products and submitting mock incident reports. We followed similar protocols in San Francisco and Dallas. To identify the participants in San Francisco and Dallas, we worked with a contractor to recruit prospective testers who had a mix of demographic characteristics. We held two focus groups in each location, for a total of four groups. Two groups had eight participants per group and the other two groups had nine participants per group. In all four groups, a moderator facilitated the testers’ execution of various tasks, as was done in Washington, D.C. Although the results of our usability tests are not generalizable to all U.S. consumers, they provided us with in-depth, interactive feedback and detailed perspectives from a range of website users about the usability challenges associated with SaferProducts.gov. 
To supplement our approach, we requested and reviewed an expert evaluation conducted by the First Fridays program manager. The GSA official evaluated SaferProducts.gov based on the following criteria: (1) accessibility—the ability of people with physical or mental disabilities to use the site; (2) identity and purpose—whether the site clearly presents its purpose, including what the site offers and what a user can do on it; (3) clarity—the ability to read and digest content; (4) navigation—how easily users can find information; and (5) design and content—focusing on the layout, headers, and design. Another GSA official provided a more in-depth accessibility review of SaferProducts.gov to identify issues that users with disabilities might encounter when navigating the site. According to GSA, although an expert evaluation can be a useful starting point for determining a website’s usability strengths and weaknesses, the expert evaluation emphasizes the importance of the user experience. In addition, we reviewed various other website usability resources and criteria, including Usability.gov, to understand the key practices for making websites easy to use and helpful. We conducted this performance audit from July 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Debra Johnson, Assistant Director; Meghana Acharya; Mark Bird; William Carrigg; Jeremy Cluchey; Meredith Graves; Ronald Ito; Sarah Kaczmarek; May Lee; Marc Molino; Patricia Moye; Barbara Roesmann; Andrew Stavisky; and Julie Trinder made key contributions to this report.
In the wake of increased product recalls in 2007-2008, Congress passed the Consumer Product Safety Improvement Act of 2008 (CPSIA). Among other things, CPSIA required CPSC to establish a database on the safety of consumer products that is publicly available, searchable, and accessible through the CPSC website. In response, CPSC launched SaferProducts.gov (http://www.saferproducts.gov) in March 2011, which has two main functions: to provide (1) a mechanism for online reporting of product safety issues and (2) the ability to search for these issues or others, such as recalls. CPSIA also required GAO to study the general utility of the website. This report examines (1) CPSC's efforts to inform the public about SaferProducts.gov, (2) who is using the website and to what extent, and (3) the extent to which consumers have found the website to be useful. To do this, GAO analyzed agency documents and data from 2011 to 2012; interviewed CPSC officials, researchers, and consumer and industry groups; reviewed federal standards, guidance, and best practices for website usability; and conducted website usability tests with 37 consumers in three locations. The Consumer Product Safety Commission (CPSC) has used various approaches to inform the public about SaferProducts.gov, including using social media, public service announcements, and printed materials, and promoting the site during speeches and events. CPSC's efforts to inform the public about SaferProducts.gov have been part of a larger effort to raise awareness about the agency as a whole. While CPSC has employed many key practices for consumer education planning, it has not established metrics for measuring the success of its efforts. Without such metrics, the agency cannot determine which efforts have had the most impact on increasing awareness and use of the site.
While CPSC collects some data on the category of persons, such as consumers or health care professionals, who submit reports (one of the main functions of the site), it does not collect data about who is using the site to search for information (the other main function). In addition, to minimize the reporting burden on users, CPSC has not asked for demographic data about the users (such as their age, gender, or income level). Therefore, it was difficult for GAO to assess, as mandated by Congress, whether a broad range of the public has used the site. Moreover, without such data, CPSC has been limited in its ability to target its marketing and outreach efforts to increase use of the site. Many consumers in GAO's usability tests thought the site generally was easy to use and had helpful information, but identified areas for improvement. The consumers generally could perform basic searches and follow instructions to report an unsafe product, and although none were aware of the site before the tests, most said they would use the site again. However, some of the search functions posed challenges. In addition, some consumers expressed concern about registering with the site and said this might prevent them from completing a report. Other consumers were not clear about the site's purpose, thinking it would focus on safe rather than unsafe products. By addressing the usability challenges GAO identified, CPSC could help users take full advantage of all the available features of SaferProducts.gov. Furthermore, cost-effective federal resources exist across the government to help agencies improve the usefulness of their sites. CPSC should (1) establish and incorporate metrics to assess efforts to increase awareness and use of SaferProducts.gov, (2) look for cost-effective ways of gathering additional data about site use, and (3) implement cost-effective usability improvements to the site. CPSC supported these recommendations.
GPRAMA added new requirements for setting and achieving agency goals. Every 2 years, GPRAMA requires the heads of certain agencies to designate a subset of agency goals as APGs. These goals are to reflect the highest priorities of the agency and are to have clearly defined milestones and ambitious targets that can be achieved within 2 years. GPRAMA also directs agencies to describe how goals are to be achieved, including the processes, training, skills and technology, and the human, capital, information, and other resources and strategies required to meet them. In addition, they are to identify performance indicators to be used in measuring progress, among other things. The first round of APGs was publicly reported in February 2012, and they were to be completed by the end of fiscal year 2013. New APGs, for 2014 and 2015, were published on Performance.gov in March 2014. According to OMB staff, approximately 40 percent of the APGs for 2014 and 2015 were carried over from and focus on the same areas as the APGs from 2012 and 2013, although they typically have updated targets for achievement. In conjunction with these goal-setting requirements, GPRAMA also established a requirement that, for each APG, agencies identify an agency official–the goal leader–responsible for goal achievement. GPRAMA and OMB guidance describe responsibilities and specific tasks for goal leaders, including that they participate in quarterly performance reviews (QPR) (see table 1). OMB also directs agencies to identify a deputy goal leader to support the goal leader, though it does not provide details of the responsibilities or specific tasks of the deputy position. In addition, the Office of Personnel Management (OPM) has identified core competencies for key roles under GPRAMA, including the goal leader. These competencies are illustrated in figure 1. OPM has also identified executive core qualifications for Senior Executive Service officials.
Some of the competencies for the executive core qualification for building coalitions, such as partnering, are the same as those OPM identified as core competencies for agency priority goal leaders. Several of these competencies are also consistent with ones we identified in our prior work as important attributes for collaborative leadership. For example, these leadership competencies included communication, building and maintaining relationships, and setting a vision. Under GPRAMA, agencies are to make information on APGs available to OMB for online publication. This information, which includes APG strategies, performance indicators, progress updates, and identification of the relevant goal leader, is published on Performance.gov. Agencies are to update information on each APG at least quarterly. Agencies are also to identify programs and activities that contribute to each APG. These include organizations, program activities, regulations, policies, and other activities, both internal and external to the agency. OMB also directs agencies to include, as appropriate, tax expenditures in their identification of programs that contribute to their APGs. GPRAMA also established in law the Performance Improvement Council (PIC), chaired by OMB’s Deputy Director for Management and composed of performance improvement officers from various federal agencies. According to PIC staff, officials from a broad range of agencies, both large and small, participate in the PIC. The PIC is charged with facilitating the exchange of successful performance management practices among agencies and assisting OMB in implementing certain GPRAMA requirements (among other responsibilities). Although this report focuses on the goal leaders for APGs, similar positions exist for other types of goals as well. At the agency level, OMB directs agencies to designate leaders for each agency strategic objective, which reflects the outcomes or impacts the agency is intending to achieve.
OMB’s 2013 guidance directs agencies to conduct annual strategic reviews of progress toward strategic objectives to inform their decision making, starting in 2014. At the government-wide level, GPRAMA requires a lead official to be identified for each cross-agency priority (CAP) goal. CAP goals either focus on issues that cut across agency boundaries or on management improvement across the federal government. Each of the 15 CAP goals announced in the President’s 2015 budget has a leader within the Executive Office of the President and within a related agency. Although these other types of goal leaders exist, throughout this report, when we refer to goal leaders we are only referring to goal leaders for the APGs. The goal leaders for the goals in our sample were, in general, placed at high levels within their agencies and so were in senior leadership positions that enabled them to drive progress on their APGs. For example, of the 46 goal leaders we interviewed, 7 were heads or acting heads of agencies and 9 were agency assistant secretaries or acting assistant secretaries. Most (28) of the 46 goal leaders we interviewed were in career positions and the remainder (18) were political appointees. This high placement of goal leaders is consistent with OMB guidance, which states that goal leaders should be officials with the authority to coordinate across an agency or program. It is also a practice supported by our prior work, in which we noted that personal involvement of top agency leadership is important to successful management. The Administrator of the Federal Aviation Administration (FAA), who is the goal leader for the Department of Transportation’s (DOT) Reduce Risk of Aviation Accidents APG, told us that, in his opinion, having a goal leader who is also the head of an agency is important because he or she has the authority to engage agency employees to achieve the goal. 
As an example, FAA officials provided us with a list of the Administrator’s site visits to FAA facilities around the country. According to DOT, aviation fatality rates—this APG’s performance measure—are at historic lows and have continued to drop over time. While officials we interviewed noted the importance of having goal leaders at a high enough level that they have authority within their agencies, some noted the importance of balancing that with the need for a goal leader who has the time for close, regular involvement with the goal. Goal leaders we interviewed provided many examples of ways in which they fulfill goal leader responsibilities required by OMB guidance, including laying out strategies to achieve the goal, managing execution, regularly reviewing performance, engaging others as needed, and making course corrections as appropriate. They also engage in other activities. For example, the Associate Administrator of the Office of Disaster Assistance at the Small Business Administration (SBA), who is also the goal leader for the Process Disaster Assistance Applications Efficiently APG, told us that he and others at SBA started developing strategies and targets in 2011, prior to the beginning of the fiscal year 2012-2013 agency priority goal cycle. He said that it was important to consider reasonable targets as part of setting and implementing the APG. This meant determining a feasible target for the percentage of disaster assistance applications the agency would aim to receive electronically as part of goal setting. SBA reported on Performance.gov that the rate of applications filed electronically had more than doubled from fiscal year 2011 to 2013, when 55 percent of applications were filed electronically.
In addition, the Department of the Treasury’s (Treasury) Fiscal Assistant Secretary, who was also the goal leader for the agency’s Increase Electronic Transactions with the Public to Improve Service, Prevent Fraud, and Reduce Costs APG, said that a significant part of his work involved providing information to the public through press releases and working with consumer and community groups to make them aware of the changes coming with regard to electronic transactions. Treasury officials provided us with examples of press releases and other public communications, as well as a list of the Fiscal Assistant Secretary’s meetings with advocacy groups and other stakeholders related to the goal. The fiscal year 2015 President’s budget highlighted progress made under this goal, noting that increased electronic transactions had helped the government get money to beneficiaries and into the economy faster, and reduced costs associated with collections. (In GAO-13-228, we identified several leading practices for conducting QPRs through a review of relevant academic and policy literature and with input from practitioners at the local, state, and federal levels.) In addition, the Bureau of Indian Affairs (BIA) within the Department of the Interior (Interior) had been collecting data on crime before the Reduce Violent Crime in Indian Communities goal was designated an APG. But officials told us that the APG designation led to an increased focus on data use. For example, officials developed crime rate profiles identifying locations where crimes were historically committed, including information on times of day and days of week, and used that information to shift police assignments and target proactive measures. According to information reported by Interior on Performance.gov, BIA, working with tribes, met most of its violent crime rate targets in fiscal years 2012 and 2013 for five of the six communities originally targeted under the APG.
A majority of the goal leaders we interviewed said the goal leader designation had positive effects on goal progress and achievement. Goal leaders and other officials identified one or more benefits that derived from the designation, frequently saying that it provided greater visibility for the goal, facilitated coordination, heightened focus on the goal, or improved access to resources. For example, a deputy goal leader at the Federal Railroad Administration, part of DOT, said that goal leader and deputy goal leader designations are valuable in the context of competing priorities at the department. He said that the designation and related requirements elevate the goal, provide additional structure, and communicate the department’s commitment to it. DOT reported on Performance.gov that the agency initiated construction on five passenger rail corridors and 37 individual projects during fiscal years 2012 and 2013 for the deputy goal leader’s Advance the Development of Passenger Rail in the United States APG. Additionally, the Assistant Secretary for Elementary and Secondary Education, who is the goal leader for two of the Department of Education’s (Education) APGs–Demonstrate Progress in Turning Around the Nation’s Lowest-Performing Schools and Improve Outcomes for all Children from Birth through Third Grade–said that her designation as goal leader has caused her to think more about the goals and how they relate to other aspects of the department’s work and to think more strategically across the whole Office of Elementary and Secondary Education. According to data reported by Education on Performance.gov, under the APG targeting the lowest-performing schools (489 schools receiving Education’s School Improvement Grants), those schools demonstrated at least 10 percent increases in reading or math scores during fiscal years 2012 and 2013. 
Education also reported that under the APG focusing on improved outcomes, 24 states implemented plans to collect and report disaggregated data on children entering public school kindergarten as part of a comprehensive assessment system. About a third of the goal leaders we interviewed told us that the goal leader designation did not affect goal achievement. In several cases, goal leaders noted the difficulty they had differentiating their role as goal leader from the work they would otherwise be doing as part of their other roles. For example, the Department of Housing and Urban Development’s (HUD) Acting Assistant Secretary of the Office of Community Planning and Development, who was one of the goal leaders for the agency’s Reducing Homelessness APG, told us that the goal leader designation was more a description of his continuing role leading agency efforts on homelessness rather than a significant change in responsibility or position. In addition, some APGs continued work begun under other, prior agency goals, and in some cases represent issues that have been agency priorities for years. For example, several of the agency priorities represented by the APGs in our sample had also been reflected in agency high priority performance goals, which were in place prior to GPRAMA’s enactment. Several goal leaders cited the close correspondence between the APG focus and agency mission as a reason that the designation of goal leaders made little difference in how the agency carried out its work in support of the goal. For example, the Acting Director of the U.S. Patent and Trademark Office, who was also the goal leader for the Department of Commerce’s Advance Commercialization of New Technologies by Reducing Patent Application Pendency and Backlog APG, told us that the goal leader designation did not have a significant effect because the APG is a firmly-established part of her agency’s mission. 
According to information reported by the Department of Commerce on Performance.gov, the agency reduced pendency for first and final actions and reduced its patent backlog during fiscal years 2012 and 2013. Similarly, Interior reported that, under its Increase the Available Water Supply in the Western States APG, funded water conservation projects enabled nearly 250,000 acre feet of water savings during fiscal years 2012 and 2013. OMB guidance directs agencies to identify a deputy goal leader to support each goal leader, and agencies naming a political appointee as a goal leader are encouraged to name a career senior executive as the deputy. Twenty-eight of the 46 goal leaders we interviewed were in career positions and 18 were political appointees. OMB staff told us that this guidance is in place for several reasons. Primarily, it is due to the importance of having a person in place who is close to the work being done on APGs and who can spend the time needed to implement and follow up on related tasks. OMB staff said that the agency sees the deputy goal leader as the person who can perform the important function of connecting APG leadership and strategy with actual implementation. OMB also directed agencies to appoint deputy goal leaders because the position may help provide continuity in the event that the goal leader leaves the agency, and also provides a point of contact for OMB, particularly in situations in which the designated goal leader is very highly placed in the agency. OMB staff told us that OMB does not monitor whether agencies formally designate deputies or systematically collect information, including contact information, on them. Instead, staff said that they identify and use deputy goal leaders as points of contact when they find it necessary. Deputy goal leaders for the APGs in our sample supported day-to-day goal management and provided continuity during times of transition.
Most (35 of 46) of the goal leaders we interviewed had deputy goal leaders. These goal leaders and their deputies most commonly characterized deputies’ roles as involving day-to-day management of the goal or managing data collection, analysis, and presentation for QPRs and other purposes. In some cases, deputies also participated in budget management and in developing goals or strategies. Deputies also played a role during goal leader transitions. For example, the Department of Justice’s (DOJ) National Coordinator for Child Exploitation, Prevention and Interdiction, who also leads the agency’s Protect Those Most in Need of Help—With Special Emphasis on Child Exploitation and Civil Rights APG, explained that her deputy goal leader had provided useful support when she started in the goal leader position, especially in terms of helping her to understand APG performance measures and how they fit into her work. Additionally, two of the goal leaders we interviewed had originally been deputy goal leaders, and had taken on the goal leader role when the previous goal leader left the position. Our analysis of APGs found that there had been somewhat higher rates of turnover among goal leaders than deputies between the time the APGs were published in February 2012 and completed at the end of September 2013. Of the 47 APGs in our sample, 20 (slightly more than 40 percent) had a change in goal leader during this time period. Of the 37 APGs that had deputy goal leaders assigned, 11 (31 percent) had a change in deputy goal leader over this time period. Although most of the goal leaders we interviewed had formal deputy goal leaders in place, 11 of the 46 (24 percent) did not. Those goal leaders who did not have deputy goal leaders generally reported that they had other staff who fulfilled similar roles.
For example, none of the goal leaders at HUD have deputy goal leaders, but the Director of HUD’s Performance Management Division told us that an analyst from the agency’s Office of Strategic Planning and Management is assigned to each APG. The analyst ensures that APGs comply with GPRAMA requirements. Although other agency staff may fulfill many of the roles that a deputy goal leader would, officially designating a deputy goal leader would be consistent with OMB’s view that deputies serve a key role in implementing APGs. In addition, this designation is especially important if there is additional turnover in the goal leader position in the future. We excluded performance plans for goal leaders who were new to their positions or serving in them temporarily. In all, we obtained and analyzed 32 goal leader performance plans. We obtained performance plans from all 38 deputy goal leaders for the APGs in our sample, but excluded three from our analysis because the officials were new to the position and their performance plans had not been updated to reflect it. In all, we analyzed 35 deputy goal leader performance plans. According to our assessment of goal leaders’ performance plans, agencies are not fully using performance plans as a tool to align APGs and goal leader expectations, or make goal leaders accountable for progress on and achievement of APGs. The 32 goal leader performance plans we analyzed reflected responsibility for goal outcomes to a varying extent. They covered a range of responsibilities, but many did not reference the APG at all, and few of them explicitly held goal leaders responsible for goal outcomes. Specifically, less than half (12 of 32) of the plans specified that the official was responsible for the APG, and 1 of the 32 linked goal leaders’ performance standards to goal outcomes. Some goal leaders said that although their plans did not include specific references to their APGs, responsibility was implied because the APG was covered under broader responsibilities.
We found that this was the case in almost half (14 of 32) of the plans we analyzed. Figures 2 and 3 show excerpts from two goal leaders’ performance plans. The performance plan in figure 2 clearly links performance standards to specific APG outcomes, while the performance plan in figure 3 has a weaker connection to the two APGs under the goal leader’s responsibility. For example, the performance plan does not specifically mention the APG or any of its performance measures. Our analysis of deputies’ performance plans showed that they also reflected responsibility for APGs to a varying extent. In general, the 35 deputy goal leader performance plans we reviewed did not establish a strong connection to APGs and did not make deputies accountable for goal outcomes. Our analysis found that 15 of 35 plans named the officials as the deputy goal leader or explicitly made them responsible for the APG and 1 of 35 linked deputies’ performance standards to goal outcomes. Some officials noted that although deputies’ performance plans did not specifically mention APGs, they did include responsibility for activities that supported the goal. We found this to be the case for nearly half of the performance plans we reviewed. Specifically, 14 deputies’ plans identified them as responsible for activities that could contribute to the goal, but did not reference the goal, goal outcomes, or broader areas of activity that could subsume the goal. For example, the performance plan for one of the deputy goal leaders on OPM’s Ensure High Quality Federal Employees APG specifies that she is responsible for improving and implementing guidance related to the Pathways Program, a streamlined process for hiring new federal employees. Agencies that use performance plans that lack a strong linkage to APG outcomes may be missing opportunities to promote accountability. 
Although APGs may fall under goal leaders’ broader responsibilities, specifically including the APG and its outcomes in goal leaders’ and deputies’ performance plans would help ensure that they are evaluated on and held accountable for goal progress. Our prior work makes clear the importance of tying performance standards to agency goals and making leadership accountable for goal outcomes. This is especially important, since APGs are to reflect the highest priorities of the agency. While not all deputy and goal leader performance plans had clearly stated relationships to APGs, goal leaders we interviewed identified other mechanisms that they felt held them accountable for progress on APGs. These include personal reputation, accountability to agency and other leadership, and QPR meetings. For example, the Assistant Secretary of Labor for Occupational Safety and Health, who is goal leader for two APGs (Reduce Worker Fatalities and Develop a Model Safety and Return-to-Work Program), told us that the interest of Congress and DOL’s Inspector General, in their respective oversight roles, both operate to hold him accountable. In addition, the Federal Railroad Administrator, who is the goal leader for the Department of Transportation’s (DOT) Advance the Development of Passenger Rail in the United States APG, told us that the agency’s regulatory review meetings, which the agency also uses to meet the GPRAMA requirement that agencies review priority goal progress at least quarterly, act as an additional accountability mechanism. He said that the Deputy Secretary and other agency officials place considerable importance on these meetings and that it was personally important to him that he perform well at them, including being able to respond to all questions. While these other mechanisms offer additional accountability, our previous work has shown that performance plans offer particular benefits. 
For example, performance plans can allow performance managers to (1) document that performance standards align with agency goals; (2) hold employees accountable for achieving specific and measurable results; and (3) make clear distinctions in performance. Many of the meaningful results that the federal government seeks to achieve require the coordinated efforts of more than one federal agency, level of government, or sector. OMB’s guidance states that goal leaders should be authorized to coordinate across an agency or program to achieve progress on a goal. Our recent work on interagency collaboration identified leadership competencies related to collaborating effectively, including the ability to work well with others, build and maintain relationships, and communicate openly with a range of stakeholders. The goal leaders we interviewed reported many examples of collaborating with other entities to drive progress on their APGs. Table 2 provides some examples of how goal leaders reported collaborating with a diverse array of entities. OMB guidance and leading practices for conducting quarterly performance reviews (QPR) note the importance of agencies including all relevant contributors to APGs in these reviews. As shown below, we have made a prior related recommendation. GAO Has Previously Recommended That OMB and the PIC Help Agencies Include External Contributors in Quarterly Performance Reviews. In February 2013, we made the following recommendation. OMB staff agreed with our recommendation.
As of June 2014, OMB and Performance Improvement Council (PIC) staff reported that agencies continue to work to implement this recommendation through a PIC working group that is intended to help agencies share best practices for conducting QPRs, but did not have a specific timeframe in place for full implementation: To better leverage agency quarterly performance reviews as a mechanism to manage performance toward agency priority and other agency-level performance goals, the Director of OMB—working with the PIC and other relevant groups—should identify and share promising practices to help agencies extend their QPRs to include, as relevant, representatives from outside organizations that contribute to achieving their agency performance goals. However, some goal leaders we interviewed reported that APG contributors from other federal agencies, and even different components within the same federal agency, were not included in these reviews. The goal leader for the Department of the Interior’s (Interior) Reduce Violent Crime in Indian Communities APG–the Assistant Secretary for Indian Affairs–reported that he and his staff work with the Department of Justice (DOJ) on the goal, but that DOJ officials are not part of Interior’s QPRs. Interior officials explained that they meet with DOJ officials in other settings. In another example, the Department of Labor (DOL) has designated two officials–the Assistant Secretary for Occupational Safety and Health and the Assistant Secretary for Mine Safety and Health–as co-goal leaders for the Reduce Worker Fatalities APG. The Assistant Secretary for Mine Safety and Health told us that he and the Assistant Secretary for Occupational Safety and Health have separate QPRs with the Deputy Secretary to discuss progress on their joint APG.
In commenting on a draft of this report, DOL officials noted that this is because the Mine Safety and Health Administration covers only mining and the Occupational Safety and Health Administration covers other industries, and that the two programs have separate outcome metrics for the APG reported on Performance.gov. However, the Assistant Secretary for Mine Safety and Health told us that Occupational Safety and Health Administration officials do not attend the reviews of the Mine Safety and Health Administration. OMB guidance and our prior work state that agencies should, as appropriate, include relevant personnel within and outside the agency who contribute to the accomplishment of each APG. Our interviews with goal leaders did, however, identify one case in which a goal leader reported that external contributors to the goal attended the agency’s QPR meeting. Two Department of Housing and Urban Development (HUD) goal leaders who share responsibility for the Reducing Homelessness APG, which includes reducing homelessness among veterans, reported collaborating with officials from the U.S. Interagency Council on Homelessness and the Department of Veterans Affairs, which also has an APG focused on reducing veterans’ homelessness. HUD officials told us that every 6 months officials from the U.S. Interagency Council on Homelessness and the Department of Veterans Affairs attend HUD’s QPRs. In our previous work on data-driven performance reviews, OMB officials cited this example of two agencies that had been using the QPRs to collaborate on their APGs. In cases where relevant personnel were not included in QPRs, the goal leaders we interviewed emphasized that their agencies use other means to collaborate with external contributors to their APGs. However, leading practices for conducting QPRs underscore the value of agencies extending their QPRs to include, as relevant, external contributors.
We observed that these reviews have the benefit of bringing together the leadership and all the key players to solve problems. Moreover, when key players are excluded from QPRs, agencies will need to rely on potentially duplicative parallel coordination mechanisms. Thus, we continue to believe our prior recommendation has merit, and that OMB and PIC efforts to work with agencies to fully address it would strengthen these QPRs. (HUD refers to its QPRs as “HUDStat.”) GPRAMA requires the agency head and chief operating officer, with the support of the performance improvement officer, to assess whether relevant organizations, program activities, regulations, policies, and other activities are contributing as planned to the agency’s APGs, and to identify these contributing programs for publication on Performance.gov. Our previous work has identified more specific examples of different program types the federal government uses to achieve many of its goals: direct services, government contracts, grants, regulations, research and development, and tax expenditures. Table 3 provides more specific definitions of these program types and also provides examples from our interviews of how goal leaders reported different program types contributed to their APGs. GPRAMA requires that agencies do more than simply identify the different program types that contribute to an APG. An agency must also assess whether each relevant program type is contributing as planned to the APG. This requirement is important because our previous work has identified long-standing difficulties agencies have faced in measuring performance across various program types. For example, in prior work, we have found that some grant-making agencies have faced difficulties in validating and verifying the data grant recipients report and establishing performance measures. 
GAO Has Previously Recommended That OMB and the PIC Help Agencies Share Information Related to Measuring Program Types In June 2013, we made the following recommendation: “Given the common, long-standing difficulties agencies continue to face in measuring the performance of various types of federal programs and activities—contracts, direct services, grants, regulations, research and development, and tax expenditures—we… recommend the Director of OMB work with the PIC to develop a detailed approach to examine these difficulties across agencies, including identifying and sharing any promising practices from agencies that have overcome difficulties in measuring the performance of these program types. This approach should include goals, planned actions, and deliverables along with specific time frames for their completion, as well as the identification of the parties responsible for each action and deliverable.” As of June 2014, OMB and PIC staff reported taking initial steps to implement this recommendation through action plans they are developing for certain cross-agency priority goals, including those focused on customer service and strategic sourcing. OMB staff did not have a specific time frame in place for fully implementing the recommendation. Our discussions with goal leaders identified some examples of specific analyses agencies had done to assess the contributions a particular program type made to APGs. For example, Interior’s Deputy Secretary reported that the Bureau of Reclamation (Reclamation) is requiring specific information from grant applicants on how to quantify the benefits, such as water savings, if the federal government selects a particular project to receive funding under Interior’s Increase the Available Water Supply in the Western States APG. Interior uses the water savings estimates grant applicants provide to measure progress towards its APG, as described in the text box below. 
How Interior is Analyzing the Contributions a Grant Made to its APG An irrigation district in Montana requested $103,000 in federal grant money for a project the district estimated would save 4,158 acre feet of water each year. Interior says a team of Reclamation employees assessed the district’s water savings estimate and decided the project would receive federal funding. After a financial assistance agreement was executed, Interior officials then reported the 4,158 acre feet of estimated water savings towards the department’s overall goal of saving a cumulative total of 350,000 acre feet of water by the end of fiscal year 2011. The Interior report adds that Reclamation employees are seeking to validate the estimates for certain projects by taking measurements before construction begins and then again after construction to measure actual water savings. For the APG for fiscal years 2012 and 2013, Interior measures cumulative water savings since 2009, so this project contributed to the APG. However, other goal leaders we interviewed reported that their agencies are either not analyzing the contributions different program types are making or, in other cases, continue to face challenges in measuring the performance of different program types. For example, HUD officials we interviewed told us that grants contribute to the Preserve Affordable Rental Housing APG, but that grants have not been a significant part of HUD’s conversations about goal progress. The information HUD provided on Performance.gov indicated that grants are used in combination with other programs, such as vouchers and tax expenditures, to provide rental assistance. However, the performance indicators HUD identified on Performance.gov measured the total number of families served by different HUD programs. These indicators did not break out the number of families served through grants, thereby making it more difficult to analyze the contributions grants made to the APG. 
In other cases, goal leaders described challenges they have faced in measuring the performance of certain types of programs. For example, the Federal Aviation Administration’s (FAA) Associate Administrator for Aviation Safety observed that, with regard to DOT’s Reduce Risk of Aviation Accidents APG, it is difficult to know for certain whether a particular regulation prevented an aviation accident from occurring. She emphasized that considerable progress has been made in improving aviation safety and pointed to data that show a very low risk of passenger fatalities on commercial air carriers. However, she considered it hard to measure the precise effect a particular regulation or safety initiative had on the outcome that has been identified for this APG. We discussed with several goal leaders the extent to which they have shared information with officials in other agencies working with similar types of programs, such as grants and regulations, on common challenges in measuring the performance of these types of programs. Overall, our discussions did not identify any government-wide working groups that would allow officials from different agencies to share this sort of information, but a small number of goal leaders interviewed identified some more limited mechanisms for sharing this information: FAA’s deputy goal leader for DOT’s Reduce Risk of Aviation Accidents APG told us that he also sits on the department’s Safety Council, which provides a forum for FAA officials to share their regulatory experience with other DOT agencies, including the Federal Transit Administration–which has less experience with regulations. The DOT deputy goal leader reported that attending the Safety Council meetings allows departmental agencies to learn from and share experiences with one another, and to apply this information to the agency priority goal. 
DOT officials noted that the Safety Council is limited to DOT agencies and does not include other federal agencies with regulatory responsibilities. Social Security Administration (SSA) officials who were responsible for the Reduce Supplemental Security Income Overpayments APG told us they had participated in a now-inactive PIC working group on benefits processing during the tenure of an earlier goal leader. The Benefits Processing working group focused on promoting consistency in agencies’ benefits processing, but OMB staff had previously told us that this group no longer regularly meets because it had completed its tasks. The SSA officials we interviewed indicated that this group had been helpful. We continue to believe our prior recommendation has merit, and that OMB and PIC actions to fully address it would provide a more comprehensive assessment of how various types of programs contribute to agency goals. Since 2012, OMB guidance has directed agencies to identify as appropriate the tax expenditures that contribute to their APGs and report this information for publication on Performance.gov. This is important because tax expenditures represent a significant federal investment. Based on Department of the Treasury (Treasury) estimates for fiscal year 2013, the federal government had forgone approximately $1.1 trillion in tax revenue through 169 tax expenditures, an amount which approaches the size of federal discretionary spending. The tax revenue that the government forgoes is viewed by many analysts as spending channeled through the tax system. Since 1994, we have recommended greater scrutiny of tax expenditures, as periodic reviews could help determine how well specific tax expenditures work to achieve their goals and how their benefits and costs compare to those of spending programs with similar goals. 
In 2005, we recommended that OMB, in consultation with Treasury, more fully incorporate tax expenditures into federal performance management. Since then, OMB guidance has shown some progress in addressing how agencies should incorporate tax expenditures in strategic plans and annual performance plans and reports. OMB addressed this recommendation by updating its guidance in 2012 to require agencies to identify appropriate tax expenditures, as described above. OMB also reported in its 2013 update to this guidance that it will work with Treasury to align tax expenditure information with the APGs. However, our 2013 review of the extent to which agencies implemented certain requirements related to the APGs indicated that OMB and agencies may be missing opportunities to identify tax expenditures that contribute to APGs. In our 2013 report on APGs, we found that only one agency had identified two relevant tax expenditures for one of its APGs. As shown below, we made a recommendation to address this issue. GAO Has Previously Recommended That OMB Ensure Agencies Identify Tax Expenditures That Contribute to Agency Priority Goals In April 2013, we made the following recommendation. OMB staff agreed with our recommendation and said in June 2014 that they are working to implement the recommendation, for example, by engaging with staff at the Department of the Treasury. As of June 2014, OMB staff did not have a specific time frame for fully addressing the recommendation. 
“As OMB works with agencies to enhance Performance.gov to include additional information about APGs, we recommend that the Director of OMB ensure that agencies adhere to OMB’s guidance for website updates by providing complete information about the organizations, program activities, regulations, tax expenditures, policies, and other activities—both within and external to the agency—that contribute to each APG.” Our review of the information agencies provided on Performance.gov and our discussions with goal leaders indicated that agencies and goal leaders continue to provide Congress and the public with limited information about the contributions of tax expenditures on Performance.gov. Specifically, we found that five APGs in our sample had related tax expenditures based on our prior work. However, for a variety of reasons, only the two discussed below identified relevant tax expenditures on Performance.gov: HUD identified tax credits that subsidize the building and rehabilitation of rental housing as contributing programs to the Preserve Affordable Rental Housing APG. Although the Department of Energy did not identify them as contributing programs, the department mentioned two tax expenditures in its discussion of how it measured performance of the Make Solar Energy as Cheap as Traditional Sources of Electricity APG. For a third APG, HUD did not identify relevant tax expenditures as contributing to its Prevent Foreclosures APG on Performance.gov. But, one of the APG’s goal leaders acknowledged the relevance of a tax expenditure and told us that the agency had worked to support its extension. Specifically, HUD’s Deputy Assistant Secretary for the Office of Single Family Housing noted that borrowers could potentially owe taxes if HUD’s efforts resulted in their receiving a reduction in the principal on their mortgage. Our previous work noted that Congress 
enacted a tax expenditure in 2007 that allowed taxpayers to generally exclude from taxable income forgiven mortgage debt to assist taxpayers facing foreclosure. HUD officials reported that their agency had consulted with Treasury to suggest that this tax expenditure be extended by Congress. We have previously found that tax expenditures relate to other APGs in our sample. However, for these APGs, agencies chose measures to assess goal progress and achievement that did not involve tax expenditures. As a result, agencies did not include information on tax expenditures related to the APGs on Performance.gov. The Department of Education Improve Students’ Ability to Afford and Complete College APG: We have previously described the importance of tax expenditures for helping students and families pay for college. However, the deputy goal leader, who is the Chief of Staff in the Office of the Undersecretary of Education, told us that tax expenditures were not a factor in his agency’s strategy to implement this goal, which was primarily to develop a web-based college scorecard that is intended to help users learn about a specific college’s affordability and value. The deputy goal leader explained that Congress specified in statute a specific formula for his agency to use to calculate the average net price of attending a particular college, which does not include tax expenditures that may reduce the net price many students pay. The Environmental Protection Agency (EPA) Reduce Greenhouse Gas Emissions from Cars and Trucks APG: EPA officials told us that some tax incentives relate to the APG’s broader goal of reducing greenhouse gas emissions. We have previously reported on tax incentives that may affect greenhouse gas emissions from cars and trucks, including those for plug-in electric-drive motor vehicles and for biodiesel fuel. However, the strategy that EPA developed for this goal focused on implementing greenhouse gas emissions standards for cars and trucks. 
For example, the APG’s performance measures included the number of tests EPA conducted to confirm the validity of manufacturers’ greenhouse gas emission test results. EPA officials explained that tax incentives were therefore not central to this APG’s progress and achievement. We continue to believe our prior recommendation that OMB ensure agencies identify relevant tax expenditures on Performance.gov has merit, and that OMB actions to fully address it would provide a more comprehensive assessment of how tax expenditures contribute to agency goals. Goal leaders we interviewed identified several common challenges in managing APGs. The most commonly cited challenge was constrained resources, including those resulting from spending reductions under sequestration. A little more than one-third (16 of 46) of the goal leaders we interviewed cited resource constraints as a challenge in managing their goals. For example, the Department of the Interior’s (Interior) Assistant Secretary for Policy, Management and Budget, who was the goal leader for the Support Youth Employment APG, told us that sequestration had affected progress on her APG. The sequester cuts took effect in March of 2013, the same time that Interior bureaus were planning youth hiring for the summer. The goal leader told us that as a result, Interior bureaus’ youth hiring was lower than expected. This is consistent with our prior work, in which we found that Interior officials said that a hiring freeze instituted in response to sequestration had adversely affected the department’s ability to achieve this APG. According to information Interior reported on Performance.gov, the department’s youth hiring in fiscal year 2013 was nearly 20 percent lower than prior year levels, and it did not achieve this APG. 
Additionally, the Deputy Administrator for Defense Nuclear Nonproliferation, who was the goal leader for the Department of Energy’s Make Significant Progress Toward Securing the Most Vulnerable Nuclear Materials Worldwide within Four Years APG, told us that budget uncertainties complicate goal-setting. Although GPRAMA states that APGs should have ambitious targets, the goal leader told us that these uncertainties may provide an incentive for agencies to set goal targets lower than they otherwise would. Other commonly cited challenges included difficulty identifying meaningful measures of goal progress and issues with data, such as problems with consistency and availability. Goal leaders we interviewed also identified common practices that they said were helpful in managing APGs. Some of these were related to GPRAMA requirements. For example, more than a quarter (14 of 46) of goal leaders identified practices related to measuring goal progress, such as assigning responsibility for meeting milestones, as helpful in managing APGs. GPRAMA states that APGs are to have clearly defined quarterly milestones and balanced performance measures for assessing goal progress. Goal leaders emphasized the importance of not only developing measures but developing ones that are meaningful and reliable indicators of goal progress. For example, the Chief Operating Officer at the Office of Personnel Management (OPM), who led the agency’s Ensure High Quality Federal Employees APG, said that she had worked to make sure the agency used appropriate and consistent measures to track goal progress. Other commonly cited practices include several related to coordinating across an agency or program, which OMB guidance specifies goal leaders should have the authority to do. Goal leaders noted effective methods of coordination such as reaching out to agency field staff, employee unions, and non-federal entities. 
Goal leaders also identified practices that stem from the GPRAMA requirement that agencies review APG progress quarterly. As noted earlier, goal leaders cited benefits from quarterly performance review (QPR) meetings, which they said have promoted coordination, accountability, and attention to goal progress, and have provided opportunities to get feedback directly from agency leaders. Goal leaders reported that they share information on APG challenges and practices for managing their goals, although information sharing outside of their agencies is generally limited to officials with whom they are already working. They reported several examples of sharing information within their agencies, most frequently through agency meetings, such as QPRs. For example, the Director of the National Aeronautics and Space Administration’s (NASA) International Space Station Division, who is the goal leader for the agency’s Sustain Operations and Full Utilization of the International Space Station APG, told us that the agency’s baseline performance review meetings, through which the agency holds QPRs, have facilitated his sharing of lessons learned with another NASA goal leader whose APG also involves human space flight. Goal leaders also provided some examples of sharing information with officials outside of their agencies. In these cases, most examples involved sharing information with officials with whom they work on issues related to their APGs. These include members of interagency councils and committees focused on issues related to their APGs, and officials from agencies doing work related to the APG. 
For example, the Associate Administrator of the Small Business Administration’s Office of Government Contracting and Business Development, who is the goal leader for the agency’s Increase Small Business Participation in Government Contracting APG, told us that he shares information with officials from other agencies involved in small business contracting through interagency groups such as the White House Small Business Procurement Group. The Performance Improvement Council’s (PIC) duties, which are detailed in GPRAMA and OMB guidance, include facilitating among agencies the exchange of practices that have led to performance improvements, and developing tips, tools, training, and other capacity-building mechanisms to strengthen agency performance management and facilitate cross-agency learning and cooperation. As specified by GPRAMA, the PIC is chaired by OMB’s Deputy Director for Management and composed of the performance improvement officers (PIO) from the 24 Chief Financial Officers Act agencies, as well as any other PIOs and individuals identified by OMB. Our prior work found that the PIC holds two types of meetings—a “principals only” meeting open only to PIOs and a broader meeting open to PIOs and other agency staff (GAO-13-356). In addition, the PIC sponsors working groups focused on issues related to implementation of GPRAMA, such as internal agency reviews. The PIC also conducts government-wide training on specific topics, such as strategic planning. One goal leader reported participating in the PIC as a member of several of its working groups. OMB and PIC staff explained that the PIC interacts with agency PIOs and deputy PIOs as their primary points of contact, so staff generally do not reach out directly to agency priority goal leaders. 
OMB and PIC staff said that they see the PIO as the key official in managing agency performance, and focus on PIOs and their staff because of the importance of equipping them with the capability to provide support within their agencies on a variety of issues. Additionally, they focus on the PIO rather than other officials to avoid undercutting the PIO’s relevance. In line with this, several goal leaders we interviewed noted that interactions with the PIC are handled by other offices within their agencies, such as performance offices. They rely on these staff to pass along relevant information. For example, the Chief of NASA’s Strategic Planning and Performance Management Branch explained that either she or the agency’s PIO attends PIC meetings, and then shares relevant information with goal leaders and others within the agency. NASA officials provided us with copies of a monthly newsletter that the agency uses to distribute information internally on performance management. The August 2013 newsletter included information on the latest version of OMB’s Circular A-11, and on upcoming deadlines for activities related to the agency’s APGs, performance plans, and other products. Although the PIC has focused to date on working with PIOs to share information with agencies, OMB and PIC staff identified several examples of information sharing with goal leaders. OMB and PIC staff provided us with examples of products the PIC developed for goal leaders, including a guide to best practices for setting milestones, a priority goal setting guide, and a priority goal evaluation tool, designed to help agencies set APGs and drive discussion around them. Additionally, they said that they held meetings with agency PIOs shortly after the APGs for 2012 and 2013 were set, and that these meetings included goal leaders. Agenda items included a discussion of agencies’ performance management approaches and selected APGs. 
Additionally, the PIC provided us with copies of agendas from PIC meetings, including a January 2012 meeting at which there was a goal leader panel. A former goal leader of one of the APGs in our sample told us that she participated in the panel and provided a copy of her talking points, which focused on her experience working on previous agency goals and factors she identified for success. PIC staff also told us that they invited goal leaders to a recent meeting they held in February 2014 focused on implementing successful strategic reviews. Although the PIC has developed products and information aimed at goal leaders, it may be missing opportunities to facilitate greater information sharing among them. As described earlier in this report, goal leaders we interviewed have encountered common challenges in managing APGs, and also identified practices that may be useful to other goal leaders and deputy goal leaders. Additionally, goal leaders of APGs of similar program types may be interested in sharing information. For example, several of the APGs in our sample relied on program types such as grants, contracts, regulations, and research and development. Although goal leaders reported sharing information within their agencies and with outside officials working on issues related to their goals, they and their deputy goal leaders lacked the means to identify and share information with other goal leaders who are facing similar challenges or interested in similar topics. The deputy goal leader for DOT’s Advance the Development of Passenger Rail in the United States APG told us that he would find it useful to discuss common issues with others working on APGs, in particular related to performance measures. 
As highlighted earlier, our prior work has found that agencies have experienced common issues in measuring various types of programs, and recommended that the Director of OMB work with the PIC to develop a detailed approach to examine these difficulties, including identifying and sharing any promising practices. Such an approach could also include direct outreach by OMB and the PIC to goal leaders and deputy goal leaders. Senior agency officials’ commitment to and accountability for improving performance are important factors in determining the success of performance and management improvement initiatives. GPRAMA’s provision that agencies assign responsibility for achieving APGs—which reflect agencies’ highest priorities—to goal leaders is a powerful mechanism for promoting greater involvement and accountability in performance management. Although goal leaders we interviewed cited several positive effects of the goal leader designation and related GPRAMA requirements, there are areas where goal leader effectiveness could be improved. These lessons learned may also be relevant for leaders of other high-level goals, such as agency strategic objectives and government-wide cross-agency priority goals. First, a number of the goal leaders we interviewed did not have deputy goal leaders, although OMB guidance states that they should. OMB staff also stated that the deputy performs the important function of connecting goal strategy with goal implementation. Additionally, a little more than 40 percent of the APGs in our sample had changes in goal leaders between February 2012 and September 2013. This level of turnover may be higher as the current presidential administration nears an end and goal leaders who are political appointees leave their positions. Officially designating a deputy goal leader provides clear responsibility and accountability for goal achievement, and as discussed earlier, deputies can help provide continuity during times of goal leader transition. 
Another missed opportunity in implementing the goal leader role is to fully utilize performance plans as an accountability mechanism for both goal leaders and their deputies. Performance plans are a tool for ensuring that officials are evaluated on and held accountable for defined outcomes, but the majority of the performance plans we reviewed did not fully reflect responsibility for APGs. Although other mechanisms, such as QPRs, also promote accountability, agencies that do not clearly link goal leader and deputy performance plans with APGs may be missing opportunities to ensure that goal leaders and deputies are held accountable for goal progress. Because APGs by definition reflect the highest priorities of each agency, accountability is especially important. Goal leaders also identified several common challenges and practices related to managing APGs, but may be missing opportunities to share this information with their peers across government. We found that they have shared this information to some extent within their agencies and with officials from outside agencies who are working on similar issues. However, they lacked a means through which to identify others facing similar challenges or interested in similar topics. One such missed opportunity concerns sharing information among goal leaders about how to measure the performance of similar types of programs, such as grants, that multiple agencies use to drive progress on their APGs. Similarly, our review indicates that there are different views among goal leaders and agencies on how to implement OMB’s requirement that they identify and make public information about tax expenditures that contribute to APGs. The PIC, which is charged with facilitating the exchange of information among agencies, could play a greater role in fostering information sharing on these issues and others among goal leaders and deputy goal leaders to help improve agency performance. 
To ensure goal leader and deputy goal leader accountability, we recommend that the Director of OMB work with agencies to take the following two actions: Appoint a deputy goal leader to support each agency priority goal leader. Ensure that agency priority goal leader and deputy goal leader performance plans demonstrate a clear connection with agency priority goals. To better promote the sharing of information among goal leaders and their deputies, we recommend that the Director of OMB work with the PIC to further involve agency priority goal leaders and their deputies in sharing information on common challenges and practices related to agency priority goal management. We provided a draft of this report to the Director of the Office of Management and Budget (OMB) and to the 24 agencies that developed agency priority goals (APG) for 2012 and 2013. A full list of these agencies is shown in appendix I. OMB staff provided us with oral comments and generally agreed with our findings, conclusions, and recommendations. OMB staff, as well as officials from the Department of Health and Human Services, National Aeronautics and Space Administration, and Office of Personnel Management (OPM), also provided technical comments, which we incorporated, as appropriate. On July 1, 2014, our liaison from the Department of Labor’s (DOL) Office of the Assistant Secretary for Policy e-mailed us a summary of DOL officials’ comments on the draft report. DOL officials disagreed with some of the findings, conclusions, and recommendations in our draft report. We discuss their specific comments and our evaluation of them below: DOL officials raised concerns about our characterization of the department’s quarterly performance reviews (QPR) for the Reduce Worker Fatalities APG, which is jointly led by two goal leaders, each from a different DOL component agency. 
DOL officials stated that our report implies that DOL’s practice of conducting separate reviews of safety and health agencies is a barrier to coordination of efforts to achieve the shared APG. In response, we made changes to the draft report to include the DOL officials’ views on the purposes of their separate reviews. Regardless of the formats of agencies’ performance reviews, however, both OMB guidance and our prior work emphasize the importance of including relevant contributors to APGs in these reviews. DOL officials stated that they believe our report gives the incorrect impressions that (1) organizing QPRs by components is a shortcoming, and (2) the sole purpose of QPRs is to manage APGs. As we previously explained, we do not take issue with DOL holding QPRs by component, and agree with DOL officials that this format is not a shortcoming. We also agree that QPRs may focus on broader issues than APGs. For both of these issues, what is important is that QPRs include relevant APG contributors. DOL officials commented on our discussion of including external parties in QPRs. They stated that they have other sufficient ways of collaborating with outside contributors. They do not believe it would be beneficial to include stakeholders with very specific concerns in a detailed policy and operations review of all agency component performance management issues. In response, we clarified in the report that both OMB guidance and our prior work emphasized including relevant external goal contributors in these performance reviews. DOL officials stated that they consider figure 1, which lists goal leader competencies identified by OPM, to be misleading because it implies that the goal leader position is new. They suggested that we remove the figure. They referenced our finding that some officials were already serving in a similar role before becoming goal leaders. 
While this is true, the GPRA Modernization Act of 2010 established the goal leader position in law and assigned specific responsibilities to goal leaders. Figure 1 depicts the goal leader competencies described in OPM’s January 2012 memorandum for chief human capital officers. So, we retained this figure in the report. DOL officials raised concerns about whether the report includes sufficient evidence to support our conclusion that additional OMB and Performance Improvement Council (PIC) outreach to goal leaders and deputies would improve goal management. As we stated in the report, while we acknowledged that the PIC has already conducted some limited outreach to agency goal leaders, we found that goal leaders and their deputies could benefit from additional information sharing facilitated by the PIC. OMB staff agreed with our recommendation related to this finding, so we did not make any changes to the report to address this concern. Finally, DOL officials recommended removing or revising our recommendation that OMB ensure that agencies have deputy goal leaders in place because they felt it implies that the designation of a deputy drives goal achievement. Our recommendation is based on several factors, including OMB guidance requiring agencies to assign deputy goal leaders, and OMB staff’s view that deputies perform the important function of connecting APG leadership and strategy with implementation, as discussed in our report. Furthermore, OMB staff agreed with this recommendation, so we did not make any changes to the report to address this concern. 
The following agencies had no comments on the draft report: Department of Agriculture, Army Corps of Engineers – Civil Works, Department of Commerce, Department of Defense, Department of Education, Department of Energy, Environmental Protection Agency, General Services Administration, Department of Homeland Security, Department of Housing and Urban Development, Department of the Interior, Department of Justice, National Science Foundation, Small Business Administration, Social Security Administration, Department of State, Department of Transportation, Department of the Treasury, U.S. Agency for International Development, and the Department of Veterans Affairs. The written response from the Social Security Administration is reproduced in appendix III. We are sending copies of this report to the Director of OMB as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Agency/Goal
Department of Agriculture: Assist rural communities build and maintain prosperity through increased agricultural exports.
Army Corps of Engineers – Civil Works: Help facilitate commercial navigation by providing safe, reliable, highly cost-effective, and environmentally-sustainable waterborne transportation systems.
Army Corps of Engineers – Civil Works: Improve the current operating performance and asset reliability of hydropower plants in support of Executive Order 13514.
Department of Commerce: Advance commercialization of new technologies by reducing patent application pendency and backlog.
James R. Hannon, Chief, Operations and Regulatory Community of Practice
Teresa Rea, Acting Under Secretary of Commerce for Intellectual Property and Acting Director of the United States Patent and Trademark Office
Department of Commerce: Expand broadband service to communities.
Lawrence Strickling, Assistant Secretary for
Department of Defense (DOD): Reform DOD’s acquisition process.
Department of Defense: Improve energy performance.
Department of Education: Improve outcomes for all children from birth through third grade.
Department of Education: Demonstrate progress in turning around the nation’s lowest-performing schools.
Department of Education: Improve students’ ability to afford and complete college.
Department of Energy: Make significant progress toward securing the most vulnerable nuclear materials worldwide within 4 years.
Department of Energy: Reduce the cost of batteries for electric drive vehicles to help increase the market for plug-in hybrids and all-electric vehicles and thereby reduce petroleum use and greenhouse gas emissions.
Department of Energy: Make solar energy as cheap as traditional sources of electricity.
Department of Energy: Prioritization of scientific facilities to ensure optimal benefit from federal investments.
Environmental Protection Agency: Clean up contaminated sites and make them ready for use.
Environmental Protection Agency: Reduce greenhouse gas emissions from cars and trucks.
General Services Administration: Manage customer agency real estate portfolio needs in a cost-effective and environmentally sustainable manner.
Department of Health and Human Services: Reduce foodborne illness in the population.
Department of Health and Human Services: Increase the number of health centers certified as patient centered medical homes.
Department of Health and Human Services: Improve health care through meaningful use of health information technology.
Department of Homeland Security: Improve the efficiency of the process to detain and remove criminal aliens from the United States.
Department of Housing and Urban Development: Reducing homelessness.
Position vacant at the time of our interviews (we met instead with the performance improvement officer and deputy performance improvement officer)
Department of the Interior: Build the next generation of conservation and community leaders by supporting youth employment at the Department of the Interior.
Department of the Interior: Reduce violent crime in Indian communities.
Department of the Interior: Enable capability to increase the available water supply in the western states through conservation related programs to ensure adequate and safe water supplies.
Department of Justice: Protect those most in need of help - with special emphasis on child exploitation and civil rights.
Department of Labor: Create a model safety and return-to-work program.
Department of Labor: Reduce worker fatalities.
National Aeronautics and Space Administration: Sustain operations and full utilization of the International Space Station.
National Aeronautics and Space Administration: Develop the nation’s next generation human space flight system to allow for travel beyond low earth orbit.
National Science Foundation (NSF): Increase opportunities for research and education through public access to high-value digital products of NSF-funded research.
Office of Personnel Management: Reduce federal retirement processing time.
Office of Personnel Management: Ensure high quality federal employees.
Kenneth Zawodny, Jr., Associate Director, Retirement Services
Angela Bailey, Chief Operating Officer (note: at the time of our interview, Ms. Bailey’s title was Associate Director of Employee Services)
Small Business Administration: Increase small business participation in government contracting.
Small Business Administration: Process disaster assistance applications efficiently.
Social Security Administration: Reduce Supplemental Security Income overpayments.
Department of State/U.S. Agency for International Development: Democracy, human rights, and good governance.
Department of Transportation: Advance the development of passenger rail in the United States.
Department of Veterans Affairs: Assist in housing 24,400 (12,200 per year) additional homeless veterans and reduce the number of homeless veterans to 35,000 in 2013, to be measured in the January 2014 point-in-time homelessness count.
Department of the Treasury: Increase electronic transactions with the public to improve service, prevent fraud, and reduce costs.

The GPRA Modernization Act of 2010 (GPRAMA) requires GAO to review the act’s implementation. This report is part of a series of reviews planned around the requirement. The objectives of this report are to: (1) evaluate the roles and responsibilities of agency priority goal leaders in managing goal progress and the extent to which they are held accountable for achievement of priority goals; (2) review the extent to which priority goal leaders collaborate with other programs and agencies that contribute to the achievement of the priority goals; and (3) describe any challenges and practices identified by priority goal leaders in managing goals, and evaluate the extent to which they exchange this information with other priority goal leaders. To achieve our objectives, we focused our review on the goal leaders for a random sample of agency priority goals (APG) for 2012 and 2013. There were 103 APGs for 2012 and 2013, across 24 agencies. The number of APGs per agency ranged from two to eight. The sample we selected included nearly half (47) of these APGs. We chose our sample to ensure that it included at least one goal from each of the 24 agencies and approximately half of the total number of APGs per agency.
Although our sample represented a significant portion of APGs and goal leaders, we did not generalize information to the population of APGs or goal leaders. Appendix I includes a list of the APGs in our sample and the associated goal leaders we interviewed. To inform our work on all three objectives, we reviewed GPRAMA and related Office of Management and Budget (OMB) guidance, along with our prior work on performance management roles, APGs, and interagency collaboration. We also reviewed information on APGs and goal leaders published on Performance.gov, a government-wide performance website. We used information from Performance.gov throughout the engagement, but all references to data from Performance.gov in this report are as of May 23, 2014, the date we most recently downloaded information from the website. To assess the reliability of APG information available through Performance.gov, we collected information from agencies and reviewed relevant documentation and our prior work. We concluded that information from the website was sufficiently reliable for the purpose of drawing our sample of APGs and providing contextual information on APGs. We did not evaluate agency data on goal progress to determine if APG progress they described to us and on Performance.gov was accurate. But, we did ask agency officials to verify information on goal progress we report from Performance.gov. We conducted semistructured interviews with the goal leaders for 43 of the 47 APGs in our sample, for a total of 46 goal leaders. The number of goal leaders does not equal the number of goals because some goals had more than one leader, while some goal leaders were responsible for more than one goal in our sample. In most cases where there was more than one goal leader, we interviewed all goal leaders. The one exception was for a goal for which the agency had identified two goal leaders, but noted that one was the primary goal leader. In that case, we only interviewed that official. 
The goal leaders for the other four APGs in our sample had either left their agencies or were about to leave at the time of our interviews. For these goals, we interviewed other agency officials who were knowledgeable about the goals, such as deputy goal leaders and performance management staff. Two of the departing goal leaders provided us with written responses to our questions. We also interviewed Performance Improvement Council (PIC) and OMB staff. To address our first objective, we obtained and analyzed documentation from goal leaders related to their roles and responsibilities, such as records showing how they track and communicate goal progress. We also obtained individual performance plans from all goal leaders and deputy goal leaders who had relevant plans—32 goal leaders and 35 deputy goal leaders—and analyzed them to understand how they are used to hold officials accountable for goal progress. We focused our analysis on how closely expectations in the plans were aligned to the APG for which the officials were responsible. Specifically, we evaluated (1) whether the plans specify that officials are responsible for the APG; (2) whether performance standards are linked to APG outcomes; (3) whether the plans include broad responsibilities for an office or mission area under which the APG is likely to fall; and (4) whether they hold officials responsible for one or more activities that could contribute to progress on the APG. To assess goal leader and deputy goal leader performance plans, we reviewed our prior work on individual and organizational performance. We also reviewed our prior work on data-driven performance reviews (also referred to as quarterly performance reviews) as part of our examination of accountability mechanisms for goal achievement and goal leaders’ collaboration.
We also reviewed Office of Personnel Management (OPM) guidance and regulations on performance management, although we recognize that not all of the performance plans we reviewed were within the coverage of OPM guidance and regulations. To further address this objective, we also included questions in our interviews with goal leaders about their roles and responsibilities, deputy goal leader roles and responsibilities, accountability for goal achievement, and their assessments of the effects of the goal leader designation. We also included related questions in our interview with OMB and PIC staff. To address our second objective, we obtained and analyzed documentation from goal leaders related to collaboration, such as minutes and agendas of meetings during 2012 and 2013 at which APGs were discussed, and records of agency analysis of different program types that contribute to APGs, such as grants. To determine how agencies are identifying and analyzing the contribution of tax expenditures to their APGs, we identified five APGs in our sample that have a close connection to tax expenditures. For this subset of APGs, we included questions during our interviews with goal leaders and their staff about agencies’ consideration of tax expenditures. We also asked goal leaders about how they coordinate within and outside of their agencies, and how they identify and analyze the contributions of different program types. We asked related questions in our interview with OMB and PIC staff. To address our third objective, we obtained and analyzed examples of ways in which goal leaders shared information, such as talking points used at presentations and copies of e-mail between agencies and OMB and the PIC. We included questions in our interviews with goal leaders about what they consider to be promising practices and lessons learned in managing APGs, and the extent to which they and deputy goal leaders share this type of information within and outside of their agencies.
We also asked OMB and PIC staff about actions the PIC has taken to reach out to and facilitate information exchange among goal leaders. We conducted our work from June 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Sarah Veale, Assistant Director, and Kathleen Padulchick, Analyst-in-Charge, supervised the development of this report. Jenny Chanley, Karin Fangman, Erik Kjeldgaard, Michael O’Neill, and Cynthia Saunders made significant contributions to all aspects of this report.

Leadership involvement and accountability are important factors driving successful performance improvement in government. GPRAMA established the role of the agency priority goal leader and assigned accountability for achieving APGs to these officials. This report is one of a series in which GAO, as required by GPRAMA, reviewed the act's implementation. It assesses (1) the roles and responsibilities of agency priority goal leaders in managing goal progress and the extent to which they are held accountable for goal achievement; (2) the extent to which goal leaders collaborate with other programs and agencies that contribute to APG achievement; and (3) any challenges and practices identified by goal leaders, and the extent to which they exchange this information with their peers. To address all three objectives, GAO examined nearly half (47 of 103) of the APGs for 2012 and 2013, and analyzed relevant documentation. GAO also interviewed the goal leaders and other relevant officials for each of the 47 selected goals.
Agency priority goal leaders GAO interviewed were generally highly-placed within their agencies—for example, several were heads of agencies—and reported a range of responsibilities related to managing agency priority goals (APG), such as laying out goal strategies. A majority of the goal leaders said the goal leader designation had benefits for their APGs, such as greater visibility for the goal. Several also believed that there were benefits to designating the goal leader position in conjunction with other requirements from the GPRA Modernization Act of 2010 (GPRAMA), such as reviewing priority goal progress at least quarterly. The Office of Management and Budget (OMB) directs agencies to appoint deputy goal leaders. Deputy goal leaders manage day-to-day implementation of APGs and provide continuity in the event of goal leader turnover. From the time the APGs were published in February 2012 to the end of fiscal year 2013 (when they were to have been achieved), about 40 percent of the APGs GAO examined had a change in goal leader, while about 30 percent had a change in the deputy position. In addition, although most of the 46 goal leaders GAO interviewed had formal deputy goal leaders in place, 11 (24 percent) did not. Without a designated deputy goal leader, agencies lack a formally designated official to fill a key role in goal implementation. Individual performance plans are one of several mechanisms to provide goal leader and deputy goal leader accountability for APGs. Most goal leaders and all deputy goal leaders had performance plans. These plans covered a range of responsibilities, but generally did not fully reflect their APGs. In fact, many did not refer to the APG. Performance plans that link more directly to APGs could help ensure that officials are evaluated on and held responsible for APG progress and outcomes. Goal leaders collaborated with officials from outside their agencies to drive progress on APGs. 
However, some goal leaders reported that these outside contributors were not included in the quarterly performance reviews. Goal leaders also reported that a variety of different types of programs, such as grant and regulatory programs, contribute to their APGs. However, they reported few mechanisms for sharing information with other agencies related to assessing these programs. Further, for a variety of reasons, agencies have focused less attention on identifying the tax expenditures that contribute to their APGs. These findings are consistent with prior recommendations GAO made to OMB regarding GPRAMA implementation. OMB has taken some steps to address the recommendations. Goal leaders identified some common challenges and practices in managing APGs, but shared this information to a limited extent. For example, goal leaders commonly cited resource constraints as a challenge, and practices related to measuring goal progress as helpful. One of the roles of the Performance Improvement Council (PIC), a council made up of agency performance improvement officers and chaired by OMB, is to facilitate information exchange. The PIC has shared tools and information with goal leaders; however, PIC staff's primary points of contact are agencies' performance improvement officers and their deputies. Overall, goal leaders and their deputies have had little direct interaction with the PIC. More direct outreach from PIC staff could facilitate information sharing among goal leaders and their deputies, and help ensure that they do not miss opportunities to better manage their APGs. GAO recommends that OMB work with agencies to (1) ensure that they appoint deputy goal leaders; and (2) more clearly link goal leaders' and deputies' performance plans to APGs, and work with the PIC to further involve goal leaders and deputies in information-sharing related to APGs. OMB staff generally agreed with GAO's recommendations.
The Hazardous and Solid Waste Amendments of 1984 revised RCRA to include new provisions requiring certain facilities to take corrective action to clean up their sites. EPA data show that as of the end of fiscal year 2010, about 6,000 facilities were subject to corrective action; that is, they were required to undertake corrective action in response to a release of hazardous waste or constituents. Facilities that may be required to undertake corrective action include, among others, operating or closed treatment, storage, or disposal facilities that are permitted or have interim status—during which the owner or operator of a treatment, storage, or disposal facility is considered to have been issued a RCRA permit even though a final determination on the permit has not yet been made by the regulator. Permitted and interim-status facilities generally incur an obligation for continued corrective action even after closure. Facilities generally come into the corrective action program when (1) EPA or an authorized state is considering a facility’s RCRA permit application, (2) a release of hazardous waste or constituent has been identified, or (3) a facility volunteers to perform corrective action by entering into an agreement with EPA or an authorized state. First, when a facility is seeking a permit or when a permit is already in place, EPA or an authorized state can incorporate corrective action into the permit’s requirements. EPA or the state may use this process to address both on- site releases and releases that have migrated beyond a facility’s boundary. Second, EPA or the state may issue a corrective action order that is not contingent on a facility’s permit status, for example, when immediate action is necessary to address a release or threat of release of a solid or hazardous waste that may present an imminent and substantial endangerment to human health or the environment, including at an interim-status facility. 
Third, facilities may volunteer to take corrective action before they are required to do so by the terms of the permit or corrective action order. There are no comprehensive cleanup regulations under RCRA. Instead, EPA and authorized states primarily use guidance to implement corrective action and impose requirements at individual facilities through their permits or orders. The agency emphasizes the flexible nature of the program, but several elements are common to most, although not all, corrective action cleanups:
Initial facility assessment. EPA or an authorized state first assesses a facility to characterize the risk posed and determine the need for immediate action.
Facility investigation. If it is determined that information beyond the initial facility assessment is needed, EPA or the authorized state requires the company owning or operating the facility to conduct a more detailed investigation to establish the nature and extent of contamination released to groundwater, surface water, air, and soil. Depending on a facility’s particular circumstances, this phase may be complex and take years to complete. The process is monitored by the agency overseeing the corrective action, and the outcome is subject to that authority’s approval. While facility investigation is under way, interim measures may be needed to control or abate ongoing risks to human health and the environment. According to EPA, interim measures may take place any time during the corrective action process. In some cases, such actions may be enough to complete the corrective action process.
Remedy study and selection. If further corrective action is deemed necessary, facility owners and operators analyze a range of cleanup options. A company may complete a study of corrective measures describing the advantages, disadvantages, and costs of various options.
The scope of the effort required for such a study depends on the risks posed at the facility: a study can be relatively restricted in scope if the risks and cleanup option are readily identifiable. EPA or the authorized state solicits public comments on the selected option and approves a final cleanup method.
Remedy construction and implementation. Facility owners and operators design and construct and, as necessary, operate, maintain, and monitor the selected remedy.
EPA has undertaken a number of initiatives over the years to manage the corrective action program by making decisions on the basis of the level of the risk to public health and the environment and to improve the cleanup process. In 1991, EPA decided to focus its resources on facilities it ranked as high priority for corrective action because of the relatively high risk they posed. It also decided to first control or abate immediate threats to human health and the environment at these facilities, instead of diverting resources to push for final cleanup actions. In 1994, EPA established two environmental indicators: controlling exposures to humans and controlling the migration of contaminated groundwater. In 1998, EPA took steps to remove some barriers to cleanups, such as providing for more flexible treatment of contaminated soil that may temporarily accumulate during cleanups. In 1999 and 2001, EPA implemented a set of administrative reforms to promote faster and more flexible cleanup. These reforms called for new results-oriented cleanup guidance, promoting program flexibility through training and outreach, and enhancing community involvement. As part of its effort to focus and streamline the RCRA corrective action program, EPA has since 1997 set a series of progressively more ambitious performance goals and identified which facilities must meet them. The agency also issued guidance to expedite cleanup.
Goals set by EPA for the corrective action program have encompassed progressively more facilities and longer time frames. In response to GPRA, the agency first set performance goals to be achieved by fiscal year 2005, which focused on high-risk facilities deemed to have potentially unacceptable levels of contaminants. EPA then began focusing on longer-term concerns by setting goals to be achieved by fiscal year 2008. EPA also began to establish a long-range vision for the program, which included a larger universe of facilities and goals for fiscal year 2020. In addition, the agency has issued guidance to expedite cleanup. In 1997, in response to GPRA, EPA established its first set of performance goals for the corrective action program. The goals were to be achieved by the end of fiscal year 2005 and targeted 1,714 facilities at high risk of causing potentially unacceptable public exposure to pollutants, having high levels of groundwater contamination, or both. The performance goals to be met by fiscal year 2005 were as follows:
controlling human exposures to contaminants at 95 percent of these high-priority facilities and
controlling the migration of contaminated groundwater at 70 percent of the 1,714 facilities.
Importantly, these goals did not explicitly address final cleanup of sites but rather sought to control contamination at high-risk sites first. Previously, we and others reported that EPA had not established long-term goals for final cleanup. In our August 2000 report, we noted that focusing only on controlling contamination and not on implementing final cleanup actions could postpone cleanups well into the future, and we recommended that EPA establish long-term and annual goals delineating the number or portion of facilities that are to implement final cleanup actions.
In response to our recommendation, EPA agreed that implementing final remedies was important but decided at the time to use its limited resources to focus on controlling contamination at the worst sites. EPA also stated that the corrective action program did not have the resources to focus concurrently on containing contamination and implementing final cleanup actions. EPA’s next set of performance goals, for 2008, began to address longer-term concerns. EPA continued the goals to control human exposures to contaminants and contain the spread of groundwater contamination, but it added two new goals. These two longer-term goals directed that a portion of high-priority facilities were to decide upon and construct a final cleanup remedy. EPA defined the completion of final remedy construction as the time when the physical components of a final corrective action remedy for a facility were in place and functioning correctly. EPA also increased the total number of high-priority facilities that must address the goals from 1,714 to 1,968. The performance goals to be met by fiscal year 2008 were as follows:
controlling human exposures to contaminants at 95 percent of 1,968 high-priority facilities,
controlling the migration of contaminated groundwater at 80 percent of these high-priority facilities,
selecting final remedies at 30 percent of these facilities, and
completing final remedy construction at 20 percent of these facilities.
While directing attention to high-priority facilities, EPA was also working to establish what it considered a long-range vision for the corrective action program—that by the year 2020, cleanup of contamination at an expanded universe of RCRA facilities would be largely complete.
In developing this vision, EPA issued a memorandum asking the regions and authorized states to include in this universe facilities that, as of October 1997, had RCRA permits for actively managing waste, as well as treatment and storage facilities that had been closed and had postclosure obligations. The regions and states had discretion to add facilities they agreed were important to address through the program. According to an EPA official, this group of facilities includes the majority of facilities ultimately expected to need corrective action, including those that had previously been considered as medium or low priority. Beginning in fiscal year 2009, EPA shifted its focus from the 1,968 high-priority facilities to tracking and reporting progress among the expanded universe of 3,747 facilities targeted in the 2020 goals. In September 2010, EPA issued new fiscal year 2015 performance goals for this expanded universe. These goals were as follows:
controlling human exposures to contaminants at 84 percent of the 3,747 facilities,
containing migration of contaminated groundwater at 78 percent of these facilities, and
completing final remedy construction at 56 percent of these facilities.
The agency also set long-range goals for 2020:
controlling human exposures to contaminants at 95 percent of 3,747 facilities,
controlling the migration of contaminated groundwater at 95 percent of these facilities, and
completing final remedy construction at 95 percent of these facilities.
Figure 1 depicts the goals set for the corrective action program for fiscal years 2005, 2008, 2015, and 2020. (App. III contains a map illustrating the number of facilities covered by the fiscal year 2020 goals in each EPA region and each state.) Notably, EPA’s fiscal year 2020 goals are for the final construction of remedies, which is something short of the ultimate goal of final completion of corrective action.
According to EPA guidance, for corrective action to be complete, a facility must have constructed all required remedies and met the relevant specific cleanup objectives. For some facilities, such as those working to clean up contaminated groundwater, it can take years—perhaps decades—of operation before a site meets final cleanup standards. To date, EPA has not explicitly articulated such an ultimate cleanup goal. EPA headquarters officials told us that they may consider adding an explicit completion goal as the program progresses. EPA has established a formal process for its regions and authorized states to follow to determine whether facilities undergoing cleanup have controlled human exposures to contaminants and the migration of contaminated groundwater. An EPA document outlining the process calls for a facility’s lead regulator (an EPA or state official) to evaluate the site, using a standard assessment tool, to determine if these goals have been met. Both the individual completing the evaluation and that person’s supervisor must sign off on the evaluation and provide supporting documentation for their determination. The resulting determination represents the status of the facility. If conditions change for a facility deemed to have achieved its performance goals (for example, if contamination is no longer under control), the decision can be reversed in EPA’s records. The major aim of this process is to measure the progress facilities have made and determine whether a facility poses an unacceptable risk of human exposures to contaminants or migration of contaminated groundwater. Regarding the risk of human exposures, EPA documents direct regulators to evaluate various pathways, such as air or migrating groundwater, by which humans could be exposed to contamination and determine whether controls are in place to prevent unacceptable exposures given present uses of the land and groundwater.
To meet the goal of controlling unacceptable human exposures, a facility may have to institute controls such as posting signs, constructing fences, or providing residents with alternative drinking water sources. In addition, according to EPA documents, to meet the goal of controlling the migration of contaminated groundwater, contaminants within groundwater must be contained, and monitoring must be done to confirm that contaminated groundwater remains in place. Moreover, the groundwater contamination must not significantly affect the quality of streams, rivers, and other surface waters. To accomplish this goal, typical actions a facility might have to take include installing groundwater systems to treat or hydraulically contain contaminated groundwater, removing contaminated soil, or capping contaminated areas. Cleanup is not necessarily complete after meeting the goal of controlling groundwater migration, however. More-permanent remedies (or more detailed site investigation) are often needed to ensure the site is safe for reasonably anticipated future uses. As part of longer-term site cleanup, EPA would put these remedies in place. EPA’s documented process for determining whether a facility has achieved the performance goal of remedy construction is less formal. According to this process, EPA and the states must affirm this achievement in a letter to the facility or in a memorandum to the file, acknowledging that all physical construction of the last corrective remedy has been completed and that all the remedies are fully functional. In some instances, EPA considers facilities to have completed remedy construction even if no remedy has been constructed. According to its documented process, EPA may make such a determination in cases where (1) an investigation of the facility was conducted and no remedy was needed or (2) no additional construction was needed beyond the interim measures the facility implemented to control contamination.
EPA has also established criteria for determining when cleanup is to be documented as complete. For example, if a facility has completed construction and the facility cleanup objectives have been met, the lead regulator can make a determination that the facility has achieved protection of human health and the environment. EPA guidance recommends that such a determination be reflected in a permit modification and include procedures for public involvement. In addition to clarifying when facilities may be deemed to have met performance goals, EPA has also issued a number of guidance documents to help streamline the corrective action process, maximize program flexibility, and expedite cleanup. Key documents include guidance on results-based cleanup approaches and tailored oversight, groundwater remediation, and enforcement strategies and financial responsibilities. Specifically:

January 2001 guidance on enforcement strategies to encourage timely cleanup. This enforcement guidance describes several actions regulators could consider during corrective action permitting and negotiation, including using flexible rather than fixed compliance schedules to determine when or if facilities should be penalized for missing deadlines. The guidance also describes more collaborative approaches, with reduced agency oversight, for facilities with good compliance histories and the capacity to complete necessary corrective actions.

September 2003 guidance outlining ways that regulators can change their processes to emphasize results and outcomes. This guidance outlines several core approaches for consideration at all corrective action facilities to move oversight and cleanup activities away from a one-size-fits-all approach to one that is site-specific, based on actual site risk, and procedurally flexible.
For example, the lead regulator responsible for a site would develop an oversight plan for the corrective action process based on facility-specific conditions, such as site complexity, compliance history, and the facility’s financial and technical capability. In addition, the guidance encourages facilities to use innovative technologies and to focus first on areas representing the greatest short-term threats to human health or the environment. To increase cleanup efficiency at a site with multiple contamination sources, facility owners or operators would first address immediate risks to human health and the environment posed by the site as a whole and then address other short- and long-term cleanup objectives. The guidance recommends that regulators and facilities focus on achieving environmental results, rather than follow a predetermined set of cleanup steps that may not reflect site-specific circumstances. The guidance states that such results-based approaches to corrective action can achieve environmental results faster and potentially save resources for both the facility and regulatory agencies.

April 2004 guidance for remediating groundwater contamination. In its own words, the guidance serves as a “plain language” consolidation of previous EPA policies on groundwater cleanup, aiming to provide regulators, facilities, and the public with greater clarity, certainty, and understanding of EPA’s policies and expectations regarding the cleanup of contaminated groundwater. The guidance promotes a results-based approach, recommending that facilities address immediate threats before moving on to intermediate and longer-term issues. It outlines EPA’s expectation that, where practicable, final cleanups will return usable groundwater to its “maximum beneficial use” (e.g., for drinking water, industrial use, or agriculture) within a reasonable time frame.
This maximum beneficial use determines the levels to which a site’s groundwater should be cleaned up, whether to drinking water standards or potentially less stringent levels, which may be appropriate for industrial use. The guidance also notes that EPA’s policy recognizes that it may in some cases be technically impracticable to achieve certain groundwater cleanup levels. Importantly, the guidance notes that EPA policy also recognizes that states are the primary implementers of the program and that facilities may therefore need to follow the states’ groundwater requirements, which may be stricter.

April 2010 memorandum on enforcement strategy. This memorandum outlines a national enforcement strategy to assist the regions and states in achieving the 2020 goals. This strategy provides direction for identifying and ranking facilities that warrant enforcement and clarifies a number of enforcement issues that regions and states should consider during the various steps of the corrective action process.

In addition to enforcement-related guidance, EPA also implemented an initiative to improve compliance with financial assurance requirements to ensure that funds are available for cleanup. According to EPA officials, to increase EPA and state officials’ knowledge and skills on financial assurance matters, EPA held training sessions, developed fact sheets and cost estimation software, held monthly conference calls, and took other educational actions. These officials also stated that the agency hired a contractor to assess financial assurances obtained from numerous facilities, which helped to identify violations, as well as areas where more training was needed. EPA, states, and facilities have taken a variety of actions to streamline the cleanup process, and the vast majority of high-priority facilities have made considerable progress in meeting EPA’s performance goals to control contamination.
But EPA’s longer-term 2020 goal of actually constructing final remedies to clean up contamination—a goal that applies to a much larger universe of high-, medium-, and low-priority facilities—may be difficult for the agency, states, and facilities to meet. According to the EPA and state officials we spoke with, EPA’s 2020 corrective action goals have helped motivate regulators and facilities to address cleanups. Each of the five EPA regional offices we visited has developed a strategy for achieving cleanup goals at the facilities within its jurisdiction that are subject to the 2020 goals. The strategies generally include clarifying the status of the region’s program, projecting remaining workloads, and identifying actions the region plans to take to meet the 2020 goals. In addition to articulating these long-term strategies, regional officials also told us that, through the agency’s annual planning process, they develop annual targets for the states to achieve each year. The regions build these targets into states’ work plans accompanying their grant agreements. These work plans specify activities that states are to perform in their corrective action programs and form the basis of midyear and end-of-year regional reviews. Regional officials also told us that they routinely discuss with responsible state regulators facilities’ progress and projections, and they hold training classes and other meetings to promote best practices. EPA officials in several regions also reported assisting states with facilities. In some cases, the regions have taken over the oversight of sites with unusually complex circumstances at the state’s request. Regional officials also told us of taking direct responsibility for completing assessments of the extent to which particular facilities have controlled human exposures to contaminants and the migration of contaminated groundwater.
For example, EPA’s Dallas regional office reported reviewing technical documents and conducting site inspections at 47 facilities in Texas to verify that corrective action goals were met, and the Chicago office reported a number of assessments in Michigan. Regional and state officials also cited examples of regional technical assistance, such as regional support in sampling and analysis and groundwater surveys or modeling at distressed or bankrupt facilities. Officials from the regions and states we visited also reported taking steps to streamline corrective action procedures to help expedite cleanup. For example, in lieu of the conventional sequence of procedural steps, the EPA Dallas regional office developed a more results-based strategy built on performance standards and facility-specific risk management plans, which several of its states adopted. Officials in several regional and state offices told us they had eliminated the “corrective measures study,” which requires an evaluation of different cleanup alternatives. Officials in one region explained that this study often took too long, lacked focus, and did not envision a specific remedy, and that as the corrective action program has matured, federal and state regulators and facilities know more about which remedies work and can better target their efforts toward them. In the same vein, a Georgia official explained that instead of studying every option for cleanup, facilities may now submit a proposal. The state may ask a facility to consider other alternatives if the proposal does not look appropriate or is not likely to be implemented within a reasonable time frame. Several of EPA’s Chicago regional officials told us they have successfully used streamlined enforcement orders for a number of facilities.
According to the officials, the orders allow facility owners to investigate their sites and perform cleanup activities with fewer prescriptive instructions from the regulators. Reporting requirements during the investigation phase have also been streamlined to reduce the time needed to produce and review paperwork. The officials maintained that this more flexible approach has in some cases allowed them to cut substantial time off what would otherwise be needed to clean up some sites. Along similar lines, Philadelphia regional officials cited more than 50 “facility-lead agreements” with lower-risk facilities. Under these agreements, instead of relying on a more time-consuming enforcement order, the regional office and the facility sign a nonenforceable letter of commitment to implement a specified corrective action and use broad performance standards to guide facility activities. In addition to these streamlining initiatives, state officials also cited a number of other actions they believe encourage faster cleanups. For example, Louisiana has promulgated regulations that allow a tiered approach for setting minimum cleanup levels for soils and groundwater. Under the program, a facility may begin with stringent screening standards and progress through up to three levels of risk-based cleanup standards that are increasingly tailored to the specific conditions at the site. As a safeguard, however, before the facility can apply the tailored cleanup levels, it must conduct extensive site assessment and investigation work. State officials believe this program has helped address past situations where facilities and regulators reached impasses over facilities’ risk assessments and that it has allowed facilities to work toward cleanup levels more applicable to a given situation while still achieving environmental goals. Louisiana officials also adopted a program developed by the EPA Dallas region to encourage reuse of land at cleaned sites. 
The state regulator reviews a site to determine if investigation and cleanup efforts have confirmed or produced environmental conditions sufficiently protective for redevelopment or revitalization under current or planned land uses (e.g., residential, industrial, agricultural). Under the program, state and EPA officials provide the facility a letter summarizing the site’s condition, on-site work performed to investigate and address risks, and a determination that the site is ready for reuse. The determination can apply to the entire site or just a portion. Both EPA regional and state officials said that the determinations encourage faster investigations and cleanups, as well as encourage redevelopment by helping sites posing little environmental risk to avoid the stigma of historical contamination. Officials from other states also provided examples of actions they believe are encouraging faster cleanups. New Mexico officials said the state achieved better results at its federal facilities by issuing consent orders with detailed action steps and schedules. To encourage faster cleanups, Georgia shortened the timetable for the selection of cleanup remedies by its facilities, requesting that they select cleanup remedies by 2012. The state also has an internal goal for its facilities to complete corrective action by 2020. State officials explained that Georgia’s program may be ahead of some states’ because it was one of the first ones authorized, and many facilities have therefore been implementing corrective action measures under Georgia’s policies since the late 1980s. EPA data show that facilities surpassed EPA’s 2005 and 2008 performance goals seeking to stabilize the highest-priority sites by controlling human exposures to contaminants and the migration of contaminated groundwater (see fig. 2).
By the end of fiscal year 2005, 96 percent of the 1,714 facilities designated at that time as high priority had controlled human exposures to contaminants, and 78 percent had controlled the migration of contaminated groundwater. By the end of fiscal year 2008, 96 percent of the 1,968 facilities designated at that time as high priority had controlled human exposures to contaminants, and the percentage controlling the migration of groundwater contamination had risen to 83 percent. Also by the end of fiscal year 2008, regulators and facilities had selected final remedies at 43 percent of these facilities and completed remedy construction at 35 percent of them. Beginning in fiscal year 2009, EPA began to measure the extent to which its expanded universe of 3,747 facilities was meeting the performance goals of controlling human exposures to contaminants, containing migration of contaminated groundwater, and constructing final cleanup remedies. EPA regional and state officials explained to us that in fiscal years 2009 and 2010, regulators and facilities continued to pursue cleanup remedies at high-priority facilities, which had long been working toward cleanup. The EPA regions and states also began to evaluate the extent to which the low- and medium-priority facilities added to the workload in 2009 had controlled contamination or constructed remedies to achieve the 2020 goal. As shown in figure 3, 2,712 facilities (72 percent) have controlled human exposures to contaminants, 2,357 facilities (63 percent) have controlled the migration of contaminated groundwater, and 1,396 facilities (37 percent) have constructed final cleanup remedies. Figure 4 shows progress made by the 3,747 facilities covered by the 2020 goals in carrying out major milestones in the corrective action process—including facility investigation, remedy selection, and remedy construction—plus cleanup not yet started and cleanup completed.
As the figure shows, by the end of fiscal year 2010, 283 (8 percent) of the 3,747 facilities had not yet begun the cleanup process. Some of these facilities may have been assessed by EPA or the state and assigned a high, medium, or low priority, but no further action had been taken. The facilities in this category are also the ones most recently added to the universe for the 2020 goals. Our analysis of EPA data found that 968 of the 3,747 facilities (26 percent) at the facility investigation and contamination control stage are completing, or have completed, a thorough investigation of the types and extent of on-site contamination. These facilities have already controlled human exposures to contaminants or controlled the migration of contaminated groundwater but not both. The majority of facilities in this category are medium- and low-priority facilities. But a small number of high-priority facilities still fall into this category and have been unable to contain contamination at their sites despite numerous years as high-priority facilities. One such facility we examined is a small wood treatment operation in Georgia that has been investigating its groundwater contamination for years. According to state officials, under the terms of a 2005 consent order, the facility is required to collect more on-site groundwater contamination data and install remedies by 2011. Georgia officials told us that progress on this site is slow because the facility has been struggling to pay for corrective action work. At 828 of the 3,747 facilities (22 percent), steps have been taken and both human exposures to contaminants and the migration of contaminated groundwater have been controlled. Nevertheless, many of these facilities may need to take additional corrective steps to complete their cleanups. These facilities may still be investigating their sites and studying various remedies. They may have completed some remedy construction but may have additional work to do.
Some may also have implemented remedies but are awaiting longer-term results to determine if the steps taken can serve as a final remedy. Two hundred nineteen of the 3,747 facilities (6 percent) have selected final cleanup remedies for all problems at their sites but have not yet completed remedy construction. Eight hundred twenty-six (22 percent) have completed construction of all remedies but have yet to qualify as having completed cleanup. At these facilities, the selected remedies are working, but specific cleanup standards have not yet been met. Six hundred twenty-three of the 3,747 facilities (17 percent) have achieved complete cleanup. The majority of these facilities have been medium- and low-priority facilities. High-priority facilities often have complex groundwater contamination problems and are typically more difficult to remediate; as a result, less than one-third of the facilities that had achieved complete cleanup are high-priority. Given EPA’s progress to date in meeting its goals and the progress it needs to make to meet them, it will be difficult to meet the goal of constructing final remedies at not only high-priority facilities but also at the medium- and low-priority facilities included in EPA’s expanded universe covered by the 2020 goals. To date, regulators and facilities have made significant progress in controlling both human exposures to contaminants and the migration of contaminated groundwater, but the path toward meeting the challenging, time-consuming, and expensive goal of actually constructing remedies at 95 percent of targeted facilities by 2020 is likely to be more difficult (see fig. 5). Overall, almost 2,300 facilities, or 61 percent, must still complete remedy construction. Of particular note, even though EPA has focused cleanup efforts on high-priority facilities for about 20 years, more than 900 high-priority facilities have yet to complete remedy construction. 
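The stage-by-stage counts above can be tallied against the 3,747-facility universe. A quick check, sketched below using only the figures reported in the text (the stage labels are shorthand, not EPA terminology), confirms that the stages account for every facility and that roughly 2,300 facilities, or 61 percent, still must complete remedy construction:

```python
# Facility counts by corrective action stage at the end of fiscal year 2010,
# as reported in the text, for the 3,747 facilities covered by the 2020 goals.
stages = {
    "cleanup not yet started": 283,
    "investigation / partial contamination control": 968,
    "exposures and groundwater both controlled": 828,
    "final remedies selected, construction pending": 219,
    "construction complete, cleanup standards not yet met": 826,
    "cleanup complete": 623,
}

total = sum(stages.values())
assert total == 3747  # the six stages partition the 2020-goal universe

# Facilities still needing remedy construction are those in the first four
# stages (construction is already finished in the last two).
remaining = sum(count for count in list(stages.values())[:4])
print(remaining)                       # 2298 facilities
print(round(100 * remaining / total))  # 61 percent
```

The same tally reproduces the individual percentages in the text (for example, 623/3,747 rounds to 17 percent of facilities with cleanup complete).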
The majority of officials from EPA regions and states we interviewed agreed that while controlling human exposures to contaminants and controlling the migration of contaminated groundwater were achievable at most facilities by fiscal year 2020, constructing final remedies at 95 percent of the facilities by fiscal year 2020 was unlikely to be achieved. Many of these officials offered reasons that meeting the third 2020 remedy construction goal could be more challenging than the numbers alone would suggest. The officials explained that progress to date has included some “easy” accomplishments for all three goals. Specifically, EPA and the states were able to document that some facilities had controlled human exposures to contaminants, contained migration of contaminated groundwater, and achieved remedy construction by reviewing paperwork and examining records of samples and cleanup activities completed years before. One state official also told us that facilities have been addressed where contaminant releases were limited in scope and quickly investigated. Our August 2000 report made the same observation, noting that a number of stakeholders, including industry representatives and several state regulators, considered the goals at that time to be more of a paperwork exercise—documenting that facilities had contained contamination—than an effort to bring about additional cleanup actions. These observations were further echoed by cases of several high-priority facilities we examined for this report with “easy” accomplishments because the remedies were installed about a decade before EPA established its corrective action program performance goals. For example, Louisiana was able to document that an active wood treatment facility had controlled groundwater contamination by using a pump-and-treat system—wells installed to pump contaminated groundwater to the surface for treatment—that the facility had installed in 1991.
Similarly, several facilities we reviewed in Pennsylvania had controlled the spread of groundwater by removing soils or installing pump-and-treat systems in the 1990s. By reviewing quarterly groundwater monitoring reports and other documentation, EPA was able to document that these Pennsylvania facilities had controlled human exposures to contaminants and the migration of contaminated groundwater and, in some cases, completed construction before 1999. Likewise, several of the facilities we reviewed in Georgia had controlled human exposures to contaminants and the migration of contaminated groundwater by 1999. In Michigan, on the basis of a review of groundwater monitoring reports that showed no significant problems, the state was able to document that several of the facilities in our review had met corrective action program performance goals. EPA and state officials have acknowledged to us that the facilities that can be characterized as “easy” or “low-hanging fruit” have largely been addressed and will therefore constitute a smaller percentage of the workload that lies ahead. Most of the work of evaluating the extent to which facilities have controlled human exposures to contaminants and contained the migration of contaminated groundwater has been completed (especially at high-priority facilities). The officials explained that the majority of the work ahead will involve selecting and constructing remedies, which in many cases will likely prove more difficult. Several officials also told us that the remedy construction goal will be increasingly hard to attain because the remaining facilities will tend to be larger, more complex, and more labor-intensive to clean up. EPA Region 5 officials in Chicago told us that facilities in their region in particular are not progressing at the same rate as those in other regions, and they would be hard-pressed to meet the remedy construction goal by 2020.
More than 20 percent of facilities that have yet to complete remedy construction are located in that heavily industrialized region. The officials predicted that after 2015, the region would likely have an even larger share of facilities yet to complete remedy construction because the other regions will most likely be further along. Their views were substantiated by state officials in Ohio and Michigan. Ohio represents more than 36 percent of Region 5’s remaining workload, and Michigan, 18 percent. State officials in Ohio told us they were uncertain if they could meet the remedy construction goal by 2020, and those in Michigan said they definitely could not reach the remedy construction goal by 2020. The federal and state officials cited the lack of sufficient resources as the primary reason they could not do so. Region 5 and Michigan also cited the bankruptcies of General Motors and Chrysler as increasing their workload in a way that has diverted attention away from facilities on the 2020 list. Officials in EPA regions and the states identified fiscal and human resource constraints as the preeminent challenge for achieving the 2020 goals on time. The technical complexity associated with groundwater remediation may also continue to impede progress, and industry representatives noted that difficulty reaching agreement on the type of groundwater remediation will continue to cause delays in cleanup progress at some facilities. EPA and selected state officials identified resource constraints—both in terms of money and staff—as the preeminent challenge that is likely to impede their future cleanup efforts. The problem will likely worsen if federal, state, or facilities’ fiscal problems deteriorate further. The gap between workload and available resources has affected the progress of the corrective action program since it began. In our previous reports, we cited resource shortfalls as a major barrier to cleanups—shortfalls that have continued to the present day.
Specifically, EPA’s funding for program operations in headquarters and the regions has stayed generally the same since fiscal year 2004, with EPA receiving $39 million in fiscal year 2004 and $39 million in fiscal year 2010—effectively a decrease when adjusted for inflation. Officials from several EPA regions we visited noted in particular the impact this flat funding has had on funding available for outside contracts, called contract funds. The regions use contract funds for a variety of purposes, including for monitoring cleanup work at facilities and providing site-specific support to the states. For example, officials from several regions reported using these funds for hiring the Army Corps of Engineers to monitor construction, hiring hydrologists to provide technical assistance, or conducting limited cleanup work at financially struggling or bankrupt facilities. These funds decreased from $5.2 million in fiscal year 2004 to $3.7 million in fiscal year 2010 in nominal dollars. Several regional officials told us that the decrease has limited their ability to oversee work at facilities or assist the states. As funding has decreased, so has the total number of full-time-equivalent EPA employees dedicated to the corrective action program. EPA corrective action program staffing has fallen from 275 full-time equivalents in fiscal year 2004 to 245 in fiscal year 2011. At the same time, according to both headquarters and regional officials we interviewed, corrective action program responsibilities increased. In 2007, for example, EPA shifted the management of cleanup of PCBs (polychlorinated biphenyls) to the office that implements the corrective action program. Several regional officials told us that a renewed emphasis on community outreach has also taken a significant amount of additional staff time. These officials expressed agreement with the principle of community outreach but noted that the new approach had significantly affected their resources.
Officials in several regional offices also cited an inability to replace retiring regional staff as an additional problem that has slowed progress toward corrective action goals, noting that over a period of several years, they could replace three experienced staff members with only one new hire. Several officials added that this dilemma continues, with many experienced project managers approaching retirement when the regional offices are tackling remedy selection and construction at some of the most difficult sites. In addition to the constraints on the agency’s own spending, EPA’s grants to states have also been constrained. As shown in figure 6, grant funding EPA provides to authorized states to help pay for the corrective action program has remained virtually flat in nominal dollars and decreased somewhat in constant dollars. In 2010 constant dollars, EPA’s corrective action grants to the states decreased from $34.9 million in fiscal year 2004 to $31 million in fiscal year 2010, a decrease of about 11 percent. Officials in several EPA regions we visited told us that this level of support would not be adequate to keep states on track to achieve the 2020 goals. A representative of the Association of State and Territorial Solid Waste Management Officials noted that grants have not kept pace with inflation, increases in worker salaries, health insurance costs, and increasing workloads. The grants EPA provides to the states only partially support states’ corrective action programs. States are required to supplement the grants with at least $1 for every $3 in federal funds. According to the Association of State and Territorial Solid Waste Management Officials, some states contribute more than the minimum required by the grant. The program’s heavy reliance on state funds helps explain the impact of state governments’ recent budget crises on the program.
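The grant figures above lend themselves to a quick check. The sketch below uses only the constant-dollar amounts and the 3-to-1 match requirement reported in the text; the match calculation assumes states contribute exactly the required minimum, although the text notes that some states contribute more:

```python
# EPA corrective action grants to states, in 2010 constant dollars
# (dollar figures as reported in the text, in millions).
fy2004_grants = 34.9
fy2010_grants = 31.0

# Percent decrease between fiscal years 2004 and 2010.
decrease_pct = 100 * (fy2004_grants - fy2010_grants) / fy2004_grants
print(f"decrease: {decrease_pct:.0f}%")  # about 11%

# States must supply at least $1 for every $3 in federal grant funds,
# so the minimum combined state contribution on the fiscal year 2010
# grants would be one-third of the federal amount.
min_state_match = fy2010_grants / 3
print(f"minimum state match: ${min_state_match:.1f} million")  # about $10.3 million
```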
According to officials in the EPA regional offices we visited, many states in their regions have sustained severe funding shortages, leading to furloughs and hiring freezes. The majority of state officials we interviewed told us that budget problems have led to fewer staff available for the corrective action program, with remaining staff having to absorb heavier workloads, leading to delays in cleanup efforts. Officials in two states specified that limited resources have constrained their ability to visit facilities for oversight purposes and obtain validating samples. According to the Association of State and Territorial Solid Waste Management Officials representative we interviewed, most states have streamlined their corrective action programs to cope with funding or staff cuts. He added that this streamlining, combined with the substantial experience of many state staff, has so far helped dampen the cuts’ effects. He noted, however, that state capacity will likely shrink as experienced workers retire and not all are replaced. EPA headquarters and regional officials and state officials all told us that if program resources continued to decline, they would likely be unable to meet their 2020 goals. EPA headquarters officials explained that they see value in having aggressive goals for the program but have in recent years begun to acknowledge that they may have to adjust them to better reflect the realities of available resources. The agency has not, however, performed a systematic analysis of the funding that would be needed to achieve the goals or to determine how the goals should be adjusted. Headquarters officials explained that predicting funding needs for corrective action is complicated because the workload model reflects 43 states and requires funding from federal, state, and owners’ or operators’ resources. 
These officials noted that in developing the shorter-term 2005, 2008, and 2015 goals, they did in fact discuss with regional and state officials what could actually be achieved within prescribed time frames. They explained, however, that the long-term 2020 goals—originally developed in 2003—were viewed at the time as reflecting a long-term “vision” and, as such, not warranting a robust analysis of the resources needed to achieve them. With the passage of time, however, what was once viewed as a long-term vision is being increasingly treated as a high-profile, nearer-term target, whose practicality, we believe, should be assessed. According to several regional and state officials we interviewed, economic hardship has also tightened facilities’ own budgets for identifying and constructing remedies. One state official told us that the state has the enforcement tools to compel compliance, but some facilities do not have sufficient cleanup funds. Another state official explained that financial conditions in some industries have translated into a reluctance among facilities to assign as high a priority to cleanup work as in the past. Still another state official told us that whereas the state had previously succeeded in getting facilities to clean up sites with redevelopment potential, the recent economic downturn has reduced this incentive for cleanups. Officials from several regions and states also told us that some facilities within their jurisdictions are bankrupt or nearly so. The sites we reviewed included a number of facilities with funding difficulties. In Georgia, two of the facilities that have not completed final remedy construction lack adequate funds, according to state officials. One of them has struggled to pay for investigation of its groundwater contamination and can pay only $2,000 to $3,000 a year toward it.
At the second facility, the state was concerned that contamination may be reaching a nearby stream, so the state and EPA worked together using EPA contract funds to investigate the site and found that contamination was under control. In Louisiana, one site was the location of a large chemical plant, most of which is now closed. Louisiana officials said they are working on cleanup standards for a contaminated groundwater plume and that standards are likely to be strict because the groundwater plume lies over a potential source of drinking water. The officials said that the cleanup will be expensive and that the company will have to budget to complete it. Cleaning up contaminated groundwater is inherently complex, requiring large expenditures and long time periods—many centuries in some cases—according to a 1994 National Research Council report. The report states, however, that it is often difficult to characterize with precision the nature and extent of groundwater contamination, citing as complicating factors the diversity of materials, such as sand, gravels, and solid rock, layered under the ground. For example, water, along with any dissolved contaminants, flows through these materials along pathways that are hard to predict. In addition, organic solvents once used at many hazardous waste sites do not mix with water. Heavier or lighter than groundwater, these chemicals may migrate to or become trapped in inaccessible spaces, adhere to solid particles underground, and remain a source of continuing groundwater contamination. Flushing out such contaminants using conventional pump-and-treat systems can be difficult, time-consuming, costly, and inefficient or impracticable. Alternatives to conventional pump-and-treat systems rely on a variety of biological, chemical, or physical technologies to treat or contain the contaminated groundwater in place underground. 
Like conventional pump-and-treat methods, alternative technologies can also be time-consuming, although they can potentially reduce costs. Nevertheless, the use of innovative cleanup methods has been limited by technical, institutional, and economic barriers. Given such inherent difficulties, various groups with an interest in groundwater cleanup are critical of the levels some states set for groundwater cleanup and disagree with methodologies proposed by states to meet those standards. EPA’s recommendation that cleanup remedies at groundwater-contaminated sites be selected on the basis of “maximum beneficial use” recognizes both that it may be technically impossible to remediate groundwater contamination at all sites to drinking water standards and that less stringent cleanup levels may be appropriate for groundwater that is not a current or reasonably expected future source of drinking water. Some states, however, designate all groundwater as a current or future source of drinking water, meaning that stringent standards must always be applied. In other states where drinking water standards do not apply to all groundwater, facilities may disagree with regulators about the designation of a particular source of groundwater as drinking water. Such disagreements tend to slow cleanup progress while regulators and facilities spend time negotiating cleanup terms. Illustrating such disagreements, a representative from an industry group representing Fortune 50 companies with whom we met expressed concern that facilities have been required to apply drinking water standards to groundwater remediation efforts in situations where groundwater had not been used for that purpose and was not likely to be used as such in the foreseeable future. He also cited instances in which it was technically impracticable to achieve drinking water standards. 
The industry representative also noted that facilities may hesitate to install remedies that may not be able to achieve the applicable cleanup level when new or additional systems may later be required as technology advances. He went on to say that, in his view, EPA has not provided enough guidance to states and that some states are not implementing the guidance the agency does provide—for example, guidance about less stringent cleanups or waivers that may be granted to facilities where cleanup to drinking water standards is technically impracticable. EPA officials and some state regulators explained to us that a natural tension exists between regulatory and industry positions. Regulatory officials and the industry representatives agreed that because of the cost and long time frames involved in groundwater cleanup, facilities may be reluctant to invest in groundwater cleanup equipment. In essence, they say that disagreements center on judgments over questions like, “How clean is clean enough?” and on whether, for example, industrial sites should be cleaned up to the same levels as sites in or near residential areas. Disagreements also arise when the final remedy required under the corrective action program is one that contains contaminated groundwater (which requires long-term controls, operation, and monitoring), rather than eliminates groundwater contaminants. We heard from state regulators in Georgia and Michigan that while affected facilities may want to focus only on controlling contamination, regulators may want to see removal or effective treatment in place that eliminates as much of any continuing sources of contamination as possible before emphasizing containment of the remainder. Michigan officials noted that this issue is especially important given the possibility that, in the event of bankruptcies, future long-term management and costs of operating the containment systems may fall to the state, EPA, or both. 
It is difficult to gauge the extent to which such disagreements may stand in the way of achieving EPA’s 2020 goals. State and EPA regional officials we interviewed said that state standards and procedures would not limit their ability to reach the 2020 goals. Given the state lead on groundwater-related issues, EPA headquarters officials noted that the agency generally defers to state judgment on these issues. Officials from three states with particularly strict groundwater cleanup policies—Georgia, Michigan, and New Mexico—told us that their groundwater policies do not factor into their ability to reach the remedy construction goal by 2020. In fact, in reviewing a draft of this report, Michigan officials noted that addressing concerns raised by the public about the sufficiency of the standards may have more of an impact on their ability to reach the 2020 goals. Officials in Georgia added that, in their experience, less stringent standards do not significantly expedite cleanup. Specifically, officials said that allowing facilities to contain groundwater contamination or restrict its use, rather than remediate it, does not motivate most facilities with a history of delaying corrective action to clean up. The officials also noted that weakening cleanup standards with the hope of increasing the number of facilities that reach the 2020 goals would put facilities that have complied with existing regulations at an economic disadvantage with respect to competitors that have delayed compliance. In contrast, the industry representative cited above acknowledged that groundwater policy decisions are in fact largely state prerogatives, but he maintained that EPA’s failure to more forcefully promote alternatives that were less costly and easier to implement (while still protective of human health and the environment) stood in the way of achieving the 2020 goals at many facilities.
EPA, states, and facilities have made significant progress over the past decade in streamlining RCRA corrective action processes, setting performance goals to better direct the corrective action program and accomplishing on-the-ground cleanups of hazardous waste. Nevertheless, resource constraints, the size and cost of the program’s remaining workload, and projected federal and state budget cuts are leading EPA and state regulators to question whether this rate of progress can be sustained. Without realistically taking these factors into account, EPA cannot reliably determine the extent to which the program has the resources it needs to meet its 2020 vision and goals, or better align the 2020 goals with the resources it will take to attain them. We acknowledge the complexities associated with a definitive and detailed analysis of the program’s costs, given the number and complexity of cleanups required and the varied federal, state, and industry sources that fund it. Nevertheless, short of an exhaustive, facility-by-facility study, we believe that much useful information can be gained from a more limited effort in which EPA headquarters, EPA regions, and participating states collaborate in an analysis that sheds light on the practicality of the 2020 goals—particularly one that takes into account the recent economic and fiscal events that have affected program participants and the funding they rely on. We believe that such an analysis could provide useful information to senior EPA managers and to Congress and would help inform decisions about the program’s future direction.
To sustain progress in the RCRA corrective action program and better align the 2020 program goals with the resources it will take to attain them, we recommend that the EPA Administrator direct cognizant officials to assess the agency’s remaining corrective action workload, determine the extent to which the program has the resources it needs to meet these goals, and take steps to either reallocate its resources to the program or revise the goals. We provided a draft of this report to EPA for review and comment; the agency’s written comments are reproduced in appendix IV. EPA’s July 6, 2011, letter stated that the report was accurate in its representation of the corrective action program, noting specifically that it “provides a good summary . . . on the Corrective Action Program, highlights some of the challenges and issues the program faces, and notes initiatives that individual states or regions have taken.” The letter also expressed agreement with the recommendation to assess the program’s workload and potentially make adjustments in either program resources or in program goals. Toward this end, EPA noted that it will “work with its regional offices and authorized state programs to define . . . remaining workloads, identify efficiencies to help with addressing the workload, and strive to use resources in the most focused way possible to achieve these goals.” As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to the report are listed in appendix V. Our objectives were to determine (1) the actions the Environmental Protection Agency (EPA) has taken to establish goals for the Resource Conservation and Recovery Act (RCRA) corrective action program and to expedite cleanup; (2) the progress EPA, the states, and facilities have made in meeting performance goals; and (3) the challenges, if any, that EPA, the states, and facilities may face in meeting future cleanup goals. To determine the actions EPA has taken to establish goals for the corrective action program and expedite cleanup, we reviewed relevant EPA strategic plans. We reviewed the process the agency has adopted to establish goals and the methodology used to identify which facilities will be monitored for progress toward meeting those goals. We also reviewed the procedures established to evaluate whether facilities have met the goals. To determine the actions taken by EPA to expedite cleanup, we reviewed applicable guidance and training materials. We also reviewed strategy documents each region prepared to address actions to be taken to meet EPA’s 2020 goals for the program. We obtained the budget for, and number of full-time-equivalent EPA employees dedicated to, the corrective action program for fiscal years 2004 through 2011. To determine the progress EPA, the states, and facilities have made in meeting corrective action performance goals, we reviewed EPA’s fiscal years 2005 and 2008 Performance and Accountability Report to Congress and obtained data from EPA on the status of the corrective action program at the end of fiscal years 2005 and 2008. To determine the current status of the program toward meeting the 2020 goals, we collected and analyzed data from EPA’s national program management and inventory system of hazardous waste handlers, RCRAInfo. 
This system includes a range of information on treatment, storage, and disposal facilities, including permit and closure status, compliance with federal and state regulations, and cleanup activities. We focused our analysis on the facilities that EPA has identified as part of the universe of facilities to meet its 2020 corrective action performance goals. We determined the number of facilities designated by EPA as having controlled human exposures to contaminants, contained the migration of contaminated groundwater, and constructed final cleanup remedies. We also compared the status of facilities in this group that EPA has designated as high priority with the status of facilities the agency has designated as medium- and low-priority. To illustrate facilities’ cleanup progress, we also grouped facilities into categories generally corresponding with stages in the corrective action process: cleanup not started, facility investigation and contamination control under way, contamination controlled, remedy selected, remedy constructed, and cleanup completed. We assessed the reliability of the RCRAInfo data elements necessary to our engagement by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To better understand progress made; identify any initiatives by EPA and states to expedite cleanups; and identify challenges EPA, the states, and facilities may face in meeting future cleanup goals, we interviewed officials responsible for the corrective action program at EPA headquarters and at a nonprobability sample of 4 of EPA’s 10 regional offices. We selected the regions because they had the largest caseloads (as determined by the number of facilities subject to the program that are under their jurisdictions). 
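The electronic testing of required data elements described above can be sketched in miniature. The record layout and field names below are illustrative, not RCRAInfo's actual schema:

```python
# Hypothetical facility records; field names and IDs are made up for illustration.
facilities = [
    {"id": "GAD000001", "priority": "high", "human_exposure_controlled": True},
    {"id": "LAD000002", "priority": "medium", "human_exposure_controlled": False},
    {"id": "MID000003", "priority": "urgent", "human_exposure_controlled": True},
]

REQUIRED_FIELDS = {"id", "priority", "human_exposure_controlled"}
VALID_PRIORITIES = {"high", "medium", "low"}

def check_record(rec):
    """Flag records with missing required elements or out-of-range values."""
    problems = []
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if rec.get("priority") not in VALID_PRIORITIES:
        problems.append(f"invalid priority: {rec.get('priority')!r}")
    return problems

issues = {rec["id"]: check_record(rec) for rec in facilities}
```

Running checks like these across every record gives a quick measure of whether the data elements needed for an analysis are complete and within expected ranges.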
Taken together, the 4 regions—Region 3 in Philadelphia, Region 4 in Atlanta, Region 5 in Chicago, and Region 6 in Dallas—account for approximately 65 percent of facilities EPA has identified that are to meet its goals for the corrective action program. To gain a perspective from a region with a relatively smaller caseload, we also interviewed EPA officials in Region 10 in Seattle. The findings from our interviews at these regional offices cannot be generalized to those we did not include in our nonprobability sample. Within the 5 regions, we visited or spoke with officials from nine states: Alabama, Georgia, Louisiana, Michigan, New Mexico, Ohio, Oregon, Pennsylvania, and Virginia. Except for Pennsylvania, these states are authorized to implement the corrective action program (see app. I). We examined a nongeneralizable, random sample of 32 facilities (located in Georgia, Louisiana, Michigan, and Pennsylvania) selected from 1,658 facilities that met criteria set by EPA for facilities deemed to pose a high risk to human health and the environment. We randomly selected the 32 to ensure an objective selection of facilities to examine more closely. We did not generalize our findings from this sample to the population. Included in the information collected about these facilities were the types of activities conducted to reach the goals at the facility and the type of work remaining. In addition, we discussed cleanup challenges with various stakeholder groups. These included the RCRA Corrective Action Project, a group of major corporations with facilities in the corrective action program, represented by attorneys and cleanup managers. We also met with officials from the Association of State and Territorial Solid Waste Management Officials and the Environmental Council of States. We conducted this performance audit from December 2009 through July 2011, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steven Elstein (Assistant Director), Antoinette Capaccio, Ellen W. Chu, Melinda Cordero, Cindy Gilbert, Brian Hartman, and Leigh McCaskill White made key contributions to this report.

Years of industrial development generated hazardous waste that, when improperly disposed of, poses risks to human health and the environment. To mitigate these risks, Congress passed the Resource Conservation and Recovery Act of 1976 (RCRA). Subtitle C of RCRA, as amended, requires owners or operators to take corrective actions to clean up contamination at facilities that treat, store, or dispose of hazardous waste. The corrective action program is administered by the Environmental Protection Agency (EPA) or states authorized by EPA. GAO was asked by Representative Markey, in his former capacity as Chairman of the House Subcommittee on Energy and Environment, to assess this program. This report discusses (1) actions EPA has taken to establish goals for the program and expedite cleanup; (2) the progress EPA, states, and facilities have made in meeting these goals; and (3) the challenges EPA, states, and facilities face, if any, in meeting future cleanup goals. GAO reviewed and analyzed EPA documents and data and interviewed EPA and state agency officials and stakeholder groups. To focus and streamline the RCRA corrective action program, EPA has over the past decade set a series of progressively more ambitious performance goals and identified which facilities must meet them.
Its first set of performance goals, for example (to be achieved in fiscal year 2005), was to control human exposures to contamination and migration of contaminated groundwater at 95 percent of 1,714 "high-risk" facilities. EPA also established a long-range vision for the program, going beyond controlling contamination to cleaning it up. Hence, it targeted 2020 as the year by which 95 percent of 3,747 facilities (expanded from 1,714 to include low- and medium-risk facilities) would have completed construction of all cleanup remedies. EPA also (1) established a process for its regions and authorized states to follow in determining whether facilities undergoing cleanup have met major milestones toward controlling human exposure and preventing the spread of contaminated groundwater and (2) issued guidance to help streamline the corrective action process, maximize program flexibility, and expedite cleanup. EPA, states, and facilities have made considerable progress in meeting corrective action performance goals to control and contain contamination at high-risk facilities. Each of the five EPA regional offices GAO visited cited efforts to improve information on state program status, better estimate remaining work, and identify actions taken to meet the 2020 goals. Several also directly assisted states in assessing whether facilities had controlled contamination. Regional and state offices also reported streamlining reporting requirements and compliance procedures. EPA data show that by the end of fiscal year 2005, the vast majority of high-risk facilities had controlled human exposure to hazards and the migration of contaminated groundwater. Importantly, the EPA data also highlight the challenge facing EPA, states, and facilities in meeting the 2020 goal of constructing final cleanup remedies for 95 percent of the expanded universe of 3,747 facilities. For example, almost three-quarters of these facilities have yet to construct final cleanup remedies.
Most EPA and state officials interviewed agreed that the 2020 goal was unlikely to be met. EPA, states, and facilities identified fiscal and human resource constraints and groundwater cleanup as key challenges for achieving the 2020 goals on time. Program cuts resulting from states' fiscal problems and facilities' funding difficulties resulting from the economic downturn have exacerbated resource constraints. Technical complexity associated with groundwater remediation may also impede progress, and disagreements between industry and regulators over groundwater cleanup standards may perpetuate delays. To date, however, EPA has not performed a rigorous analysis of its remaining corrective action workload, including the resources it needs to meet its 2020 goals and the complexity and cost of what remains to be done. Without such an assessment, EPA cannot determine the extent to which the program has the resources it needs to meet these goals. GAO recommends that EPA assess the remaining corrective action workload, determine the extent to which the program has resources needed to meet 2020 goals, and take steps to either reallocate its resources or revise its goals. EPA agreed with the recommendation. |
Over the past decade, there has been a series of devastating and deadly wildland fires on federal lands. These fires burn millions of acres of forests, grasslands, and deserts each year, and federal land management agencies spend hundreds of millions of dollars to fight them. Wildland fires also threaten communities that are near federal lands. During the 2002 fire season, approximately 88,458 wildland fires burned about 6.9 million acres and cost the federal government over $1.6 billion to suppress. These fires destroyed timber, natural vegetation, wildlife habitats, homes, and businesses, and they severely damaged forest soils and watershed areas for decades to come. The 2002 fires also caused the deaths of 23 firefighters and drove thousands of people from their homes. Only 2 years earlier, during the 2000 fire season, approximately 123,000 fires had burned more than 8.4 million acres and cost the federal government over $2 billion. Effectively managing wildland fires can be viewed in terms of a life cycle—there are key activities that can be performed before a fire starts to reduce the risk of its becoming uncontrollable; other activities that can take place during a fire to detect the fire before it gets too large and to respond to it; and still others that can be performed after a fire has stopped in order to stabilize, rehabilitate, and restore damaged forests and rangelands. Pre-fire activities can include identifying areas that are at risk for wildland fire by assessing changes in vegetation and the accumulation of fuels (including small trees, underbrush, and dead vegetation) as well as these fuels’ proximity to communities; taking action to reduce fuels through a variety of mechanisms (including timber harvesting, management-ignited or prescribed fires, mechanical thinning, and use of natural fires); and monitoring fire weather conditions. 
Other activities during this phase can include providing fire preparedness training and strategically deploying equipment and personnel resources to at-risk areas. Activities that take place during a fire include detecting fires, dispatching resources, planning the initial attack on the fire, monitoring and mapping the fire’s spread and behavior, and planning and managing subsequent attacks on the fire—if they are warranted. Post-fire activities can include assessing the impact of the fire; providing emergency stabilization of burned areas to protect life, property, and natural resources from post-fire degradation, such as flooding, contamination of a watershed area, and surface erosion; rehabilitating lands to remove fire debris, repair soils, and plant new vegetation; and monitoring the rehabilitation efforts over time to ensure that they are on track. Other activities—such as enhancing community awareness—can and should take place throughout the fire management life cycle. Figure 1 depicts a fire management life cycle, with key activities in each phase. Five federal agencies share responsibility for managing the majority of our nation’s federal lands—the Department of Agriculture’s Forest Service (FS) and the Department of the Interior’s National Park Service (NPS), Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and Bureau of Indian Affairs (BIA). While each agency has a different mission and responsibility for different areas and types of land, they work together to address catastrophic wildland fires, which often cross agency boundaries. In addition, state, local, and tribal governments and private individuals own thousands of acres that are adjacent to federal lands and are similarly susceptible to wildland fires. Figure 2 shows the number of acres of land managed by each of the five federal agencies.
After years of catastrophic fires, in September 2000, the Departments of Agriculture and the Interior jointly issued a report on managing the impact of wildland fires. This report forms the basis of what is now known as the National Fire Plan—a long-term multibillion-dollar effort to address the nation’s risk of wildland fires. The plan directs funding and attention to five key initiatives:
- Hazardous fuels reduction—investing in projects to reduce the buildup of fuels that leads to severe fires.
- Firefighting—ensuring adequate preparedness for future fires by acquiring and maintaining personnel and equipment and by placing firefighting resources in locations where they can most effectively be used to respond to fires.
- Rehabilitation and restoration—restoring landscapes and rebuilding ecosystems that have been damaged by wildland fires.
- Community assistance—working directly with communities to ensure that they are adequately protected from fires.
- Accountability—establishing mechanisms to oversee and track progress in implementing the National Fire Plan, which includes developing performance measures, processes for reporting progress, and budgeting information.
A key tenet of the National Fire Plan is coordination between government agencies at the federal, state, and local levels to develop strategies and carry out programs. Building on this goal of cooperation, the five land management agencies have worked with state governors and other stakeholders to develop a comprehensive strategy and an implementation plan for managing wildland fires, hazardous fuels, and ecosystem restoration and rehabilitation on federal and adjacent state, tribal, and private forest and rangelands in the United States.
In developing these integrated plans and initiatives, the land management agencies identified other federal agencies that have roles in wildland fire management:
- agencies that manage other federal lands, including the Department of Defense and the Department of Energy;
- agencies that research, manage, or use technologies that can aid in wildland fire management, including the Department of the Interior’s U.S. Geological Survey, the National Aeronautics and Space Administration, the Department of Commerce’s National Oceanic and Atmospheric Administration, and the Department of Defense’s National Imagery and Mapping Agency; and
- agencies with other fire-related responsibilities, including the Department of Homeland Security’s Federal Emergency Management Agency and the Environmental Protection Agency.
The integrated plans also identify key state and local organizations that may collaborate on wildland fire management. Over the past four decades, the Departments of Agriculture and the Interior have established an interagency framework to handle wildland fire management—a framework that currently supports the National Fire Plan. In 1965, the Forest Service and the Bureau of Land Management established the National Interagency Fire Center, in Boise, Idaho. The fire center is the nation’s principal management and logistical support center for wildland firefighting and now includes the five land management agencies, the National Weather Service, and the Department of the Interior’s Office of Aircraft Services. The Department of Homeland Security’s Federal Emergency Management Agency and the National Association of State Foresters also have a presence at the center. Working together, representatives from this mix of organizations exchange fire protection information and training services and coordinate and support operations for managing wildland fire incidents throughout the United States while they are occurring.
In 1976, the departments established the National Wildfire Coordinating Group (NWCG) to coordinate government standards for wildland fire management and related programs, in order to avoid duplicating the various agencies’ efforts and to encourage active collaboration among entities. This group comprises representatives from the five land management agencies and from other federal, state, and tribal organizations. Figure 3 identifies these member organizations. The coordinating group seeks to foster more effective execution of each agency’s fire management program through agreements on common training, equipment, and other standards; however, each agency determines whether and how it will adopt the group’s proposals. The group is organized into 15 working teams, which focus on issues including information resource management (IRM), fire equipment, training, fire weather, and wildland fire education. Most recently, the coordinating group established the IRM program management office to further support the IRM working team by developing guidance and products. In addition, the IRM working team has established two subgroups to focus on specific issues involving geospatial information and data administration.

In recent years, we have reported that despite these interagency efforts, the Forest Service and the Department of the Interior had not established clearly defined and effective leadership for ensuring collaboration and coordination among the organizations that respond to wildland fires. Further, the National Academy of Public Administration recommended that the Secretaries of Agriculture and the Interior establish a national interagency council to achieve more consistent and coordinated efforts in implementing national fire policies and plans. In response to these concerns, in April 2002, the Secretaries of the two departments established the Wildland Fire Leadership Council.
This council comprises senior members of both departments and of key external organizations, and is supported by the Forest Service’s National Fire Plan Coordinator and the Department of the Interior’s Office of Wildland Fire Coordination. The Council is charged with providing interagency leadership and oversight to ensure policy coordination, accountability, and effective implementation of the National Fire Plan and Federal Wildland Fire Management Policy. Figure 4 identifies members of the Leadership Council.

Geospatial information technologies—sensors, systems, and software that collect, manage, manipulate, analyze, model, and display information about positions on the earth’s surface—can aid in managing wildland fires by providing accurate, detailed, and timely information to federal, state, and local decision makers, fire-fighting personnel, and the public. This information can be used to help reduce the risk that a fire will become uncontrollable, to respond to critical events while a fire is burning, and to aid in recovering from fire disasters. Specific examples of geospatial technologies include remote sensing systems, the Global Positioning System, and geographic information systems. In addition, specialized software can be used in conjunction with remote sensing data and geographic information systems to manipulate geographic data and allow users to analyze, model, and visualize locations and events. Table 1 describes key geospatial technologies. While individual technologies can be used to obtain information and products, the integration of these technologies holds promise for providing even more valuable information to decision makers. For example, remote sensing systems provide images that are useful in their own right.
However, when images are geo-referenced and combined with other layers of data in a geographic information system—and then used with specialized software—a more sophisticated analysis can be performed and more timely and sound decisions can be made. Figure 5 provides an overview of the relationships among the different technologies and some resulting products. The geospatial information technologies mentioned above—remote sensing systems, the Global Positioning System, geographic information systems, and specialized software—are being used to some extent in managing wildland fires. These technologies are used throughout the wildland fire management life cycle. Key examples follow.

Before a fire starts, local and regional land managers often use vegetation and fuels maps derived from remote sensing data in conjunction with a geographic information system to understand conditions and to identify areas for fuels treatments. Some land management offices have also developed software to help them assess risk areas and prioritize fuels treatment projects. For example, figure 6 depicts a vegetation map, and figure 7 depicts a map showing areas with increased risk of fires. Interestingly, an area that the map identified as being at high risk of fire later burned during the Hayman fire of 2002. Land management agencies also use geospatial products related to the weather to aid in fire planning, detection, and monitoring activities. Weather-based products are derived from ground-based lightning detection and weather observing systems as well as from fire-related weather predictions from the National Weather Service. Figure 8 depicts a seasonal fire outlook, and figure 9 depicts a fire danger map that is based on daily weather predictions.
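To make the layering idea concrete, the following sketch shows, in miniature, the kind of weighted overlay a geographic information system performs when co-registered raster layers are combined into a composite risk map. The layer choices (fuel load, slope, fuel dryness), the weights, and the cell values below are hypothetical and are not drawn from any agency’s actual model.

```python
# Illustrative sketch only: a cell-by-cell weighted overlay of three
# co-registered raster layers, the core arithmetic a geographic
# information system performs when layers are combined into one map.
# The layer choices, weights, and cell values below are hypothetical.

def risk_grid(fuel, slope, dryness, weights=(0.5, 0.2, 0.3)):
    """Combine three 2-D grids (values normalized to 0..1) into a
    composite risk-score grid on the same 0..1 scale."""
    wf, ws, wd = weights
    rows, cols = len(fuel), len(fuel[0])
    return [
        [wf * fuel[r][c] + ws * slope[r][c] + wd * dryness[r][c]
         for c in range(cols)]
        for r in range(rows)
    ]

# A 2x2 landscape: the upper-left cell has heavy fuel, steep slope,
# and dry conditions, so it should score highest.
fuel    = [[0.9, 0.2], [0.4, 0.1]]
slope   = [[0.8, 0.3], [0.5, 0.2]]
dryness = [[0.7, 0.7], [0.4, 0.4]]
risk = risk_grid(fuel, slope, dryness)
```

In a real GIS, each layer would be a geo-referenced raster with millions of cells and the weights would come from a calibrated risk model, but the underlying operation is essentially this cell-by-cell combination.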
During a fire, some fire responders use satellite and aerial imagery, in combination with Global Positioning System data, geographic information systems, and specialized fire behavior modeling software, to obtain information about the fire and to help plan how they will respond to it. For example, the Forest Service uses satellite data to produce images of active fires. Also, the National Interagency Fire Center manages an aerial infrared program that flies aircraft equipped with infrared sensors over large fires to detect heat and fire areas. These images contribute to the development of daily fire perimeter maps. Figure 10 depicts a satellite image of active fires. Figure 11 depicts a satellite image of a fire perimeter, and figure 12 depicts an aerial infrared image and a fire perimeter map based on that image. Some incident teams also use fire growth modeling software to predict the growth of wildland fires in terms of size, intensity, and spread, considering variable terrain, fuels, and weather. Using this information, incident managers are able to estimate short- and long-term fire behaviors, plan for potential fires, communicate concerns and needs to state and local governments and the public, and request and position resources. Figure 13 shows the output of a fire behavior model. Geospatial technologies are also used to provide information on active fires to the general public. The wildland fire community and the U.S. Geological Survey established an Internet Web site, at www.geomac.gov, to provide access to geospatial information about active fires. This site allows visitors to identify the location of wildland fires on a broad scale and then focus in to identify information on the location and status of specific fires. Figure 14 shows images from the Web site. 
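Fire growth models of the kind described above are far more sophisticated, but their core idea, propagating fire across a gridded landscape according to fuel and terrain conditions, can be illustrated with a toy cellular automaton. Everything here (the cell states, the deterministic spread rule, the fuel grid) is a simplified invention for illustration, not the behavior of any actual fire model.

```python
# Toy cellular automaton illustrating the idea behind fire growth
# modeling. Real models use spread rates derived from terrain, fuel
# type, and weather; the states, grids, and deterministic rule here
# are invented for illustration.
# Cell states: 0 = unburned, 1 = burning, 2 = burned out.

def step(grid, fuel):
    """Advance one time step: each burning cell ignites any fueled,
    unburned 4-neighbor, then burns out."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                nxt[r][c] = 2  # this cell burns out
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and grid[rr][cc] == 0 and fuel[rr][cc]):
                        nxt[rr][cc] = 1  # neighbor ignites
    return nxt

# Ignite the center of a 3x3 landscape; the rightmost column has no
# fuel (a fuel break), so the fire cannot spread east.
fuel = [[1, 1, 0], [1, 1, 0], [1, 1, 0]]
grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
grid = step(grid, fuel)
```

Operational models replace the deterministic neighbor rule with spread rates driven by fuel type, slope, wind, and moisture, which is why they can estimate fire size and intensity over time rather than just shape.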
It is important to note that there are many commercial products and services available for use during a fire—ranging from high-resolution aerial and satellite imagery, to handheld Global Positioning System devices, to enhanced visualization models, to on-site geographic information systems, equipment, and personnel. Incident commanders responsible for responding to fires often choose to purchase commercial products and services to supplement interagency resources.

After a fire occurs, burned-area teams have recently begun to use remote sensing data in conjunction with geographic information systems to determine the extent of fire damage and to help plan and implement emergency stabilization and rehabilitation efforts. Typical products include burn severity and burn intensity maps. Figure 15 depicts a satellite image and a burn severity map showing areas that have a high priority for emergency stabilization measures. Geospatial technologies also aid in monitoring rehabilitation efforts for years after a fire to ensure that restoration plans are on track.

The Forest Service and Interior are researching and developing new applications of geospatial information technologies to support business needs in wildland fire management. In addition, the Joint Fire Science Program, a partnership of the five land management agencies and the U.S. Geological Survey, funds numerous research projects each year on fire and fuels management. Once again, these initiatives vary greatly—ranging from research on remote sensing systems, to the development of interagency information systems with geospatial components, to improvements in existing software models. Examples of these efforts include the following:

Sensor research. Several new research projects are under way on LIDAR and hyperspectral sensors.
For example, a BLM state office is researching the use of high-resolution hyperspectral and LIDAR imaging technologies for improving the identification of vegetation; planning hazardous fuels projects; and monitoring wildland urban interface projects, the effects of wildland fires, and fire rehabilitation efforts. Additionally, the Forest Service is exploring the use of mobile LIDAR systems for assessing smoke plumes, and it is conducting research on using LIDAR data, satellite data, and modeling techniques to forecast air quality after a fire.

Vegetation data and tools. The five land management agencies and the U.S. Geological Survey are working together to develop a national geospatial dataset and a set of modeling tools for wildland fire planning. This effort, called LANDFIRE, is to provide a comprehensive package of spatial data layers, models, and tools needed by land and fire managers. The system is expected to help prioritize, plan, complete, and monitor fuel treatment and restoration projects on national, regional, and local scales. A prototype of the system covers central Utah and northwestern Montana and is expected to be completed by April 2005.

Interagency information systems. The five land management agencies are developing information systems for use by Interior and Forest Service offices to track efforts under the National Fire Plan. The National Fire Plan Operations and Reporting System is an interagency system designed to assist field personnel in managing and reporting accomplishments for work conducted under the National Fire Plan. It is a Web-based data collection tool with geographic information system (GIS) support that locates projects and treatments. It consists of three modules—hazardous fuels reduction, restoration and rehabilitation, and community assistance. While the agencies are currently using the system, it will not be fully operational until 2004.
Another information system, the Fire Program Analysis system, is an interagency planning tool for analysis and budgeting to be used by the five federal wildland fire management agencies. The first module—preparedness—is scheduled for implementation in September 2004 and will evaluate the cost-effectiveness of alternative initial attack operations in meeting multiple fire management objectives. Additional system modules are expected to provide geospatial capabilities and to address extended attack, large fires and national fire resources, hazardous fuels reduction, wildland fire use, and fire prevention.

Improvements in existing systems. There are multiple efforts planned or under way to improve existing systems or to add geospatial components to systems that are currently under development. For example, researchers at a federal fire sciences laboratory are exploring possible improvements to the Wildland Fire Assessment System, an Internet-based system that provides information on a broad area of national fire potential and weather maps for fire managers and the general public. Specifically, researchers are working to develop products that depict moisture levels in live fuels, which will aid in assessing the potential for wildland fires.

There are numerous challenges in using geospatial information technologies effectively in the wildland fire community. Key challenges involve data, systems, infrastructure, staffing, and the effective use of new products and technologies—all complicated by the fact that wildland fire management extends beyond a single agency’s responsibility.

Data issues. Users of geospatial information have noted problems in acquiring compatible and comprehensive geospatial data. For example, GIS specialists involved in fighting fires reported that they did not have ready access to the geospatial data they needed. They noted that some local jurisdictions have geospatial data, but others do not.
Further, they reported that the data from neighboring jurisdictions are often incompatible. Geospatial information specialists reported that the first days at a wildland fire are spent trying to gather the geospatial information needed to accurately map the fire. While concerns with data availability and compatibility are often noted during fire incidents, these issues are also evident before and after fire incidents. For example, we recently reported that the five land management agencies did not know how effective their post-fire emergency stabilization and rehabilitation treatments were because, among other reasons, local land units do not routinely collect comparable information. As a result of unavailable or incompatible data, decision makers often lack the timely, integrated information they need to make sound decisions in managing different aspects of wildland fire. On a related note, the development of data standards is a well-recognized solution for addressing some of the problems mentioned above, but there are currently no nationally recognized geospatial data standards for use on fires. GIS specialists frequently cited a need for common, interagency geospatial data standards for use with fires. They noted that the land management agencies and states do not record information about fires—such as fire location, fire perimeter, or the date of different fire perimeters—in the same way.

System issues. In 1996, NWCG reported that there was a duplication of information systems and computer applications supporting wildland fire management, noting that agencies were using 15 different weather-related software applications, 9 logistics applications, and 7 dispatch applications. Since that time, the number of applications has grown—as has the potential for duplication of effort. Duplicative systems not only waste limited funds, but they also make interoperability between systems more difficult.
This issue is complicated by the fact that there is no single, comprehensive inventory of information systems and applications that could be of use to others in the interagency wildland fire community. A single comprehensive inventory would allow the wildland fire community to identify and learn about available applications and tools, and to avoid duplicating efforts to develop new applications. We identified five different inventories of software applications—including information systems, models, and tools—that are currently being used in support of wildland fire management. While these listings are not limited to geospatial applications, many of the applications have geospatial components. The most comprehensive listing is an inventory managed by NWCG. This inventory identifies 199 applications used in support of wildland fire, but even this inventory is not complete. That is, it did not include 45 applications that were included in the other inventories. Additionally, it did not include 23 applications that we had identified.

Infrastructure issues. Many geospatial specialists noted that there are problems in getting equipment, networking capabilities, and Internet access to the areas that need them during a fire. For example, at a recent fire in a remote location, geospatial specialists reported that they were unable to produce needed information and maps because they had problems with networking capabilities. Again, this issue is critical during a fire, when incident teams try to set up a command center in a remote location. However, it is also an issue when federal regional managers try to obtain consistent information from the different land management agencies’ field offices before or after fires. The majority of local field offices have equipment to support geospatial information and analysis, but some do not.

Staffing issues.
Geospatial specialists noted that the training and qualifications of the GIS specialists who support fire incidents are not consistent. Specifically, officials noted that skills and qualifications vary widely among those who work with geographic information systems. For example, some GIS specialists are capable of interpreting infrared images as well as developing maps, but others are not. Some have experience working with GIS applications but are not specifically trained to develop GIS maps for fires.

Use of new products. While many commercial vendors are developing geospatial products and services that could be of use to the wildland fire community—including advanced satellite and aerial imaging; GIS applications and equipment; and advanced mapping products including analyses, visualization, and modeling—many vendors have expressed concern that the wildland fire community is not aware of these advancements or has little funding for these products. Land managers acknowledged the value of many of these products, but noted that purchases need to be driven by business needs. Agency officials also expressed concern that the cost of these products and services can be prohibitive and that licensing restrictions would keep them from sharing the commercial data and products with others in the wildland fire community.

Clearly, effective interagency management of information resources and technology could help address the challenges faced by the wildland fire community in using geospatial information technologies. Such an approach could address the implementation and enforcement of national geospatial data standards for managing wildland fires; an interagency strategic approach to systems and infrastructure development; a plan for ensuring consistent equipment and training throughout the wildland fire community; and a thorough evaluation of user needs and opportunities for meeting those needs through new products and technologies.
The National Wildfire Coordinating Group—comprising representatives from the five land management agencies and from other federal, state, and tribal organizations—has several initiatives planned or under way to address challenges to effectively using geospatial technologies and to improve the interagency management of information resources. However, progress on these initiatives has been slow. In our report, due to be issued in September 2003, we further discuss the use of geospatial technologies in support of wildland fire management, challenges to effectively using these technologies, and opportunities to address key challenges and to improve the effective use of geospatial technologies. We will also make recommendations to improve the use of geospatial technologies in support of wildland fire management. In summary, the federal wildland fire management community is using a variety of different geospatial technologies for activities throughout the fire management life cycle—including identifying dangerous fuels, assessing fire risks, detecting and fighting fires, and restoring fire-damaged lands. These technologies run the gamut from satellite and aerial imaging, to the Global Positioning System, to geographic information systems, to specialized fire models. Local land managers and incident teams often acquire, collect, and develop geospatial information and technologies to meet their specific needs, resulting in a hodgepodge of incompatible and duplicative data and tools. This problem is echoed throughout the fire community, as those who work with different aspects of fire management commonly cite concerns with unavailable or incompatible geospatial data, duplicative systems, lack of equipment and infrastructure to access geospatial information, inconsistency in the training of geospatial specialists, and ineffective use of new products and technologies. 
These challenges illustrate the need for effective interagency management of information technology and resources in the wildland fire community. We will report on opportunities to improve the use of these technologies in our final report. This concludes my statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this statement, please contact David Powner at (202) 512-9286 or by E-mail at pownerd@gao.gov, or Colleen Phillips at (202) 512-6326 or by E-mail at phillipsc@gao.gov. Individuals making key contributions to this statement include Barbara Collier, Neil Doherty, Joanne Fiorino, Chester Joy, Richard Hung, Anjalique Lawrence, Tammi Nguyen, Megan Secrest, Karl Seifert, Lisa Warnecke, and Glenda Wright.

Our objectives were to provide an overview of key geospatial information technologies for addressing different aspects of wildland fire management and to summarize key challenges to the effective use of geospatial technologies in wildland fire management. To accomplish these objectives, we focused our review on five key federal agencies that are responsible for wildland fire management on public lands: the Department of Agriculture’s Forest Service and the Department of the Interior’s National Park Service, Bureau of Land Management, Fish and Wildlife Service, and Bureau of Indian Affairs. To identify key geospatial information technologies for addressing different aspects of wildland fire management, we assessed policies, plans, and reports on wildland fire management and technical documents on geospatial technologies. We assessed information on Forest Service and Interior efforts to develop and use geospatial technologies.
We also interviewed officials with the Forest Service and the Interior, interagency organizations, commercial vendors, and selected states to determine the characteristics and uses of different geospatial technologies in supporting different phases of wildland fire management. In addition, we met with officials of other federal agencies, including the Department of the Interior’s U.S. Geological Survey, the Department of Defense’s National Imagery and Mapping Agency, the National Aeronautics and Space Administration, the Department of Commerce’s National Oceanic and Atmospheric Administration, and the Department of Homeland Security’s Federal Emergency Management Agency, to identify their efforts to develop geospatial information products in support of wildland fire management. To summarize key challenges to the effective use and sharing of geospatial technologies, we reviewed key reports and studies on these challenges, including the following:

- Burchfield, James A., Theron A. Miller, Lloyd Queen, Joe Frost, Dorothy Albright, and David DelSordo. Investigation of Geospatial Support of Incident Management. National Center for Landscape Fire Analysis at the University of Montana, November 25, 2002.
- Committee on Earth Observation Satellites, Disaster Management Support Group. The Use of Earth Observing Satellites for Hazard Support: Assessments & Scenarios. National Oceanic and Atmospheric Administration, n.d.
- Department of Agriculture (Forest Service) and Department of the Interior. Developing an Interagency, Landscape-scale Fire Planning Analysis and Budget Tool. n.d.
- Fairbanks, Frank, Elizabeth Hill, Patrick Kelly, Lyle Laverty, Keith F. Mulrooney, Charlie Philpot, and Charles Wise. Wildfire Suppression: Strategies for Containing Costs. Washington, D.C.: National Academy of Public Administration, September 2002.
- Fairbanks, Frank, Henry Gardner, Elizabeth Hill, Keith Mulrooney, Charles Philpot, Karl Weick, and Charles Wise. Managing Wildland Fire: Enhancing Capacity to Implement the Federal Interagency Policy. Washington, D.C.: National Academy of Public Administration, December 2001.
- National Oceanic and Atmospheric Administration. Wildland Fire Management: Some Information Needs and Opportunities. Working paper, National Hazards Information Strategy, July 2002.
- National Wildfire Coordinating Group. Information Resource Management Strategy Project: Wildland Fire Business Model. National Interagency Fire Center, August 1996.
- National Wildfire Coordinating Group, Information Resource Management Working Team, Geospatial Task Group. Geospatial Technology for Incident Support: A White Paper. April 12, 2002.

Over the past decade, a series of devastating and deadly wildland fires has burned millions of acres of federal forests, grasslands, and deserts each year, requiring federal land management agencies to spend hundreds of millions of dollars to fight them. GAO was asked to provide an interim update on key segments of an ongoing review of the use of geospatial information technologies in wildland fire management. Specifically, GAO was asked to provide an overview of key geospatial information technologies and their uses in different aspects of wildland fire management and to summarize key challenges to the effective use of these technologies. The final report is expected to be issued in September 2003. GAO’s review focused on the five federal agencies that are primarily responsible for wildland fire management: the Department of Agriculture’s Forest Service and the Department of the Interior’s National Park Service, Bureau of Land Management, Fish and Wildlife Service, and Bureau of Indian Affairs.
Geospatial information technologies—sensors, systems, and software that collect, manage, manipulate, analyze, model, and display information about locations on the earth’s surface—can aid in managing wildland fires by providing accurate, detailed, and timely information to federal, state, and local decision makers, fire-fighting personnel, and the public. This information can be used to help reduce the risk that a fire will become uncontrollable, to respond to critical events while a fire is burning, and to aid in recovering from fire disasters. However, there are multiple challenges to effectively using these technologies to manage wildland fires, including challenges with data, systems, infrastructure, staffing, and the effective use of new products. Clearly, effective management of information technology and resources could help address these challenges. In our final report, due to be issued next month, we will further discuss geospatial information technologies, challenges to effectively using these technologies, and opportunities to improve the effective use of geospatial information technologies. We will also make recommendations to address these challenges and to improve the use of geospatial technologies in wildland fire management.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss ways of enhancing the usefulness of consultations between executive branch agencies and Congress, as the agencies develop their strategic plans. Under the Government Performance and Results Act (GPRA), each agency is to develop a strategic plan to lay out its mission, long-term goals, and strategies for achieving those goals. Agencies are required to submit their plans to Congress by September 30, 1997. The strategic plans are to take into consideration the views of Congress and other stakeholders. To ensure that these views are taken into account, GPRA requires agencies to consult with Congress and solicit the views of other stakeholders as they develop their strategic plans. These consultations provide an important opportunity for Congress and the executive branch to work together to ensure that agency missions are focused, goals are specific and results-oriented, and strategies and funding expectations are appropriate and reasonable.

In previous testimony before the full Committee on February 12, we identified examples of management-related challenges stemming from unclear agency missions; the lack of results-oriented performance goals; the absence of well-conceived strategies to meet those goals; and the failure to gather and use accurate, reliable, and timely program performance and cost information to measure progress in achieving results. We also described how GPRA can assist Congress and the executive branch in addressing these challenges and improving the management of federal agencies. Both Congress and the administration have signaled their commitment to these consultations, for example in guidance that OMB sent to executive agencies last November and in earlier guidance to agencies on the preparation of strategic plans. This willingness on the part of Congress and the administration to work together is a likely precondition to successful consultations.
Nonetheless, the consultations may still prove difficult because they entail a different working relationship between agencies and Congress than has generally prevailed in the past. In a forthcoming report, we will compare and contrast key design elements and approaches of GPRA with those of past federal initiatives that sought to link resources to results, such as the Planning-Programming-Budgeting System (PPBS) and Zero-Base Budgeting (ZBB). One clear lesson that emerged from those prior initiatives is that constructive communication across the branches of government is difficult, but absolutely essential if management reform is to be sustained. Discussions between agencies and Congress on strategic planning are likely to underscore the competing and conflicting goals of many federal programs, as well as the sometimes different expectations of the legislative and executive branches.

Over the past few months, we have been asked to help brief a number of congressional committees on GPRA and, in some cases, directly assist them in their consultations with agencies. Building in part on that effort, and at the request of the Chairman of the House Budget Committee, we have been examining selected consultations on strategic plans that have taken place thus far. As part of related work we were doing in January looking at agencies’ progress in developing strategic plans, officials at the headquarters level, from 11 of the 24 largest executive branch agencies, said that they had been in contact with congressional committees—often at the initiative of Congress—on their strategic plans. Headquarters-level officials in the remaining 13 executive branch agencies said that although they had not met with congressional staff, officials from some of their components had met with authorizing committees and appropriating subcommittees on matters related to strategic planning. In examining selected consultations, we spoke with congressional committee staff and with officials from executive agencies who participated in those consultations.
All of the selected consultations took place before the congressional letter was sent in late February. Our work was aimed at identifying approaches that, in the view of congressional staff and agency officials, have the potential to enhance the usefulness of the consultations required by GPRA. As agreed with the Chairman of the House Budget Committee and this Subcommittee, I will discuss the results of that work today. Congressional staff and agency officials expressed a widespread appreciation for the essential role that consultations can play in the development of a strategic plan that is useful to the agency and appropriately takes into account the views of Congress. Although GPRA requires congressional consultations, it does not specify what constitutes a consultation, at what point in the development process of a strategic plan the consultation or consultations should take place, or which committees should be involved in consultations. Establishing a set of best practices or reaching a common understanding of what consultations will entail can help ensure that the consultations are as productive as possible. However, congressional staff and agency officials said they believed that because of their generally limited experience with such consultations, it will take time for Congress and agencies to develop a base of common experiences from which to build a set of specific best practices for future consultations. Most committee staff and agency officials had positive comments about the meetings that have been held thus far. However, both committee staff and agency officials—committee staff in particular—stressed the very limited nature of the meetings. 
The meetings varied significantly, ranging from routine base-touching sessions with congressional staff as part of an agency’s broad scan of internal and external stakeholders, to substantive and candid dialogue on an agency’s mission, strategic goals, strategies to achieve those goals, and outcome-related performance measures. Most committee staff and some agency officials we spoke with characterized the meetings that have taken place thus far as briefings, preconsultations, or preliminary consultations. Thus, at this early point, no single set of best practices for consultations has emerged from the preliminary meetings. Instead, committee staff and agency officials suggested some general approaches that center on the creation of shared expectations between committee staff and agency officials that may contribute to the usefulness of such consultations. By working together to create shared expectations, consultation participants can establish an understanding of what they want to discuss, what they do not want to enter into the discussions, and what they expect to achieve from their discussions. To avoid misunderstandings and consequent disappointment, both committee staff and agency officials identified a need to define “up front” what they expect to achieve from consultations. For example, one committee staff member said that he asked for and expected to receive background information in the initial meeting with an agency about what the agency had done to achieve the requirements of GPRA, and that his expectations were met. However, in another case, two committee staff who asked for and expected a discussion on an agency’s mission statement, its consistency with statute, and its relationship to the agency’s strategic goals, among other things, were disappointed. Instead, they received a 1-1/2 hour slide show on the requirements of GPRA, even though they had told the agency beforehand that they did not need such a presentation. 
The congressional letter provided guidelines that are intended to make consultations more productive. For example, the letter described expectations for the contents of draft strategic plans and said that agencies should provide relevant materials in advance of consultations. The congressional letter also provided a list of the types of topics that the congressional majority expects to be discussed during consultations. Our work suggests that the guidelines in the congressional letter should go a long way toward assisting committees and agencies in conducting their consultations by helping to establish a shared understanding of the congressional majority’s expectations. For example, two committee staff members told us that they encouraged agencies to provide them with relevant documents, including early drafts of strategic plans, before the meetings. This enabled them to prepare questions and suggestions in advance. It also helped them focus better on the presentations and discussions taking place during the meetings by eliminating the need to read and respond to the documents at the same time. Another committee staff member stressed the importance of limiting the materials provided as part of consultations to critical documents, because congressional staff workloads severely constrain the time available to read additional paperwork. Committee staff and agency officials also noted that consultations will need to be tailored to the individual experiences and needs of congressional committees and agencies. More specifically, congressional staff and agency officials noted that the historical relationships between an agency and Congress, the strategic issues confronting the agency, and the degree of policy agreement or disagreement within Congress and between Congress and the administration on those strategic issues will heavily influence the way consultations are carried out. They also noted that these political differences will affect the probability of success of the consultations from either the congressional or agency perspective.
For example, one committee staff member said that major disagreements existed between the political parties as to the basic direction of an agency under his committee’s jurisdiction. According to this staff member, when subcommittee staff met with this agency’s officials, the discussion quickly became quite confrontational, and the session only served to reinforce tensions rather than resolve them. To avoid repeating this situation, the staff member has sought to focus subsequent meetings on elements of the agency’s strategic plan on which the possibility for consensus exists, such as how best to manage programs, and either leave issues arising from contentious policy differences for later consideration or address them through correspondence with the agency. The staff member contrasted the consultations with this agency with those engaged in with another agency, also under the jurisdiction of his committee, where broad agreement existed between the Members of the committee and agency officials on the appropriate goals for the agency and how those goals should be met. In this case, he said the consultation process differed significantly in process and tone from the one in which strong differences existed on basic policy issues. Committee staff also differed on whether agencies should open consultations with briefings on GPRA’s requirements; some found such overviews useful, while other staff members said that they were already well acquainted with GPRA; they therefore said that such briefings would be a waste of time. In addition, these latter staff members said that agencies should encourage follow-up questions after each meeting and feedback on what went well and what did not go well during the meeting. Our discussions with committee staff and agency officials suggest that as committees and agencies work together to create shared expectations, some general approaches may contribute to the usefulness of the consultations. These approaches include the need for engaging the right people, addressing differing views of what is to be discussed, and establishing a consultation process that is iterative.
Including people who are knowledgeable about the topic at hand is obviously important to any meeting. Almost everyone we talked with, both committee staff and agency officials, stressed the importance of having agency officials who can answer specific program-related questions attend the consultations, as well as officials with authority to revise the agency’s strategic plans. Otherwise, as both committee staff and agency officials said, consultations run the risk of becoming purely a staff-driven exercise that lacks a real link to agency management decisions. According to committee staff, agency officials with varying responsibilities need to be involved in consultations. For example, two committee staff members observed that, initially, agency consultations with congressional staff should include, at a minimum, officials with direct program responsibility in agencies, as well as individuals from agency staff offices with general planning responsibilities. According to the committee staff members, the direct involvement of program-level agency officials is important in order to demonstrate that decisions made as part of the strategic planning process are serving as a basis for daily operations within the agency. These staff members noted that a measure of GPRA’s success is the identification of program officials who are able to (1) clearly show how their program goals are directly linked to agency strategic goals and (2) demonstrate how they are using GPRA to manage their operations. According to the committee staff members, the involvement of program officials also is more likely to ensure that consultations are informative for both Congress and the agency. Staff from two committees underscored the importance of including in the consultations congressional staff who have knowledge of GPRA, strategic planning, and the ways Congress can use GPRA to aid its decisionmaking. 
They also noted that staff who could discuss the intricacies of agency programs and who had strong public policy and finance backgrounds should be brought into the consultations to analyze the plans and the supporting documentation that agencies provided. As the consultations proceed, according to committee staff, the involvement of Members of Congress and senior management within agencies is important because Members and senior managers are ultimately responsible for making decisions about agency strategic directions and the level of program funding. In addition, staff said the involvement of senior management demonstrates their personal commitment and, in cases where that commitment may not be present, is helpful to building that commitment. For example, one committee staff member said that the higher the level of agency management involved in consultations, the better the quality of the agency testimonies at oversight hearings and the greater the importance given to GPRA and the strategic planning process within the agencies. A staff member from another committee said that true consultation cannot take place without engaging Members of Congress. He said that committee staff should be involved in the initial briefings but that, as discussions progressed, Members needed to be directly involved. Member involvement could be obtained in a number of ways in addition to active participation in consultation sessions. For example, Members could send letters to agencies posing questions on strategic plans and formally documenting their views on key issues. Another staff member said that hearings are important because not only do they result in Member involvement, but they also require the participation of senior agency management.
In that regard, a number of House committees are considering holding hearings this spring, after at least some consultations have taken place, in order to provide oversight on agency GPRA efforts and as a way of creating a public record of agreements reached during consultations. Engaging all of the interested congressional committees in a coordinated approach to consultations can be challenging. The often overlapping or fragmented nature of federal program efforts—a problem that has been extensively documented in our work—underscores the importance of a coordinated consultation process. In that regard, the effort now under way in the House to form teams of congressional staff from different committees to have a direct role in the consultation process should prove helpful. From our discussions with committee staff and agency officials, it was not apparent that there was consistency in the meetings that have been held thus far. Some agencies have met with their authorizing committees; others with their appropriators. Of the five House committees whose staff we interviewed, four committees included minority staff in their meetings. And although some House committee staff attempted to include Senate staff and staff from other House committees, their attempts thus far have met with only limited success. Committee staff and agency officials often favored agencies’ obtaining the views of other stakeholders in developing draft strategic plans before congressional consultations took place. One committee staff member said that stakeholders could provide information that could help an agency show a link between the achievement of its programs’ strategic goals and the resources required to achieve them. An agency official said that stakeholders have helped to identify the major strategic issues facing his agency.
For example, he said that stakeholders helped to identify perceived strengths, weaknesses, opportunities, and challenges that would be involved in making strategic changes and achieving his agency’s goals. In addition, he said that stakeholders also helped identify future strategic issues and ways to address those issues through strategic planning. Committee staff and agency officials often presented differing views on the level of detail that should be discussed during consultations. Congressional staff, on the whole, wanted a deeper examination of the details of agency strategic plans. Specifically, some staff wanted to know how programs support an agency’s achievement of its strategic goals and how the achievement of the agency’s goals would be determined. In contrast, other congressional staff noted that because some agencies lack baseline and trend data needed to establish performance goals, it is not possible to discuss program performance measures. Therefore, the staff noted the consultations needed to focus on the process of agencies’ strategic planning efforts, such as planning schedules, time frames, and capacity building. Some agency officials, however, said that it was their general impression that the consultations were to concern only their strategic plans, not issues related to specific programs. As a result, these agency officials said they wanted the discussions kept at a higher level—for example, on agency mission and strategic goals. These officials said that they did not believe that the consultation was a forum for discussing program performance goals, measures, and costs. Other agency officials, however, observed that agencies should be prepared to provide information on programmatic issues as well as missions and goals.
Most committee staff agreed with this latter view, saying that agencies need to be prepared to engage in discussions that go beyond mission and goals to the program level and the rationale for specific performance measures. For example, two committee staff members said that for agencies to provide a list of goals—whether program performance goals or strategic goals—without data to show why those goals were chosen and how progress toward achieving the goals would be measured was meaningless. One of the two staff members said agencies need to ensure that their officials understand the importance of having data to support their strategic planning efforts and of supplying those supporting data to Congress as part of their consultations. The other staff member explained that one reason Members and committee staff needed such information was to enable them to intelligently assist agencies in selecting appropriate performance measures. All of the committee staff and agency officials we spoke with acknowledged that they had just begun an iterative process that will take time to complete. In addition, both committee staff and agency officials recognized that GPRA-required consultations were new and would require a learning period. As a result, all staff and officials agreed that they should meet as many times as both sides feel is necessary. This point is echoed in the congressional letter to the Director of OMB, which emphasizes that agency officials and committee staff may need to continually work on updated versions of the strategic plans. Neither committee staff nor agency officials expected any single meeting to be complete and totally productive. A committee staff member added that agencies need to have a constant dialogue with congressional staff. Finally, an agency official said that all consultation participants must accept that to be useful, the strategic plan must be viewed as a dynamic document, subject to change and open to criticism by all participants. In summary, Mr.
Chairman, both committee staff and agency officials we spoke with recognized that the consultations on strategic planning are important to developing an agency plan that appropriately takes into account the views of Congress. However, as is to be expected during the initial stages of a new effort, all participants are struggling to define how the consultation process can work effectively. As I mentioned, the letter from Congress to OMB should be particularly helpful in this regard. In our discussions with committee staff and agency officials, they noted some general approaches, including engaging the right people, addressing differing views of what is to be discussed, and establishing a consultation process that is iterative, that may contribute to the usefulness of consultations. Ultimately, these approaches, along with other practices that may emerge as agency officials and committee staff continue to learn to work together in developing strategic plans, can help create a basic understanding among the stakeholders of the competing demands that confront most agencies and congressional staff, the limited resources available to them, and how those demands and resources require careful and continuous balancing. We look forward to continuing to work with you and other committees on GPRA. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or Members of the Subcommittee may have.
| GAO discussed ways to enhance the usefulness of consultations between executive branch agencies and Congress as the agencies develop their strategic plans, as required by the Government Performance and Results Act (GPRA). GAO noted that: (1) although GPRA requires congressional consultations, it does not specify what constitutes a consultation, at what point in the development process of a strategic plan the consultations should take place, or which committees should be involved in consultations; (2) both committee staff and agency officials GAO interviewed recognize that the consultations on strategic planning are important to developing an agency plan that appropriately takes into account the views of Congress; (3) however, as is to be expected during the initial stages of a new effort, all participants are struggling to define how the consultation process can work effectively; (4) although the establishment of a set of best practices, or the attainment of common understandings of what consultations will entail, can help ensure that those consultations are as productive as possible, no single set of best practices has yet emerged; (5) instead, GAO's work on preliminary consultations suggested some general approaches that may contribute to the usefulness of future consultations, including: (a) creating shared expectations; (b) engaging the right people; (c) addressing differing views of what is to be discussed; and (d) establishing a consultation process that is iterative; (6) a recent letter to the Director of the Office of Management and
Budget from the Speaker of the House, the House Majority Leader, the Senate Majority Leader, and key committee chairmen from both the House and the Senate on GPRA-required consultations should provide a good foundation for successful consultations; (7) ultimately, the guidelines included in the letter, the approaches GAO identified, and other practices that may emerge as agency officials and committee staff continue to learn to work together in developing strategic plans, can help create a set of practices that promote successful consultations; and (8) successful consultations, in turn, can promote a basic understanding among the stakeholders of the competing demands that confront most agencies and congressional staff, the limited resources available to them, and how those demands and resources require careful and continuous balancing. |
In fiscal year 2004, much of our work examined the effectiveness of the federal government’s day-to-day operations, such as administering benefits to the elderly and other needy populations, providing grants and loans to college students, and collecting taxes from businesses and individuals. Yet, we remained alert to emerging problems that demanded the attention of lawmakers and the public. For example, we continued to closely monitor developments affecting the Iraq war, defense transformation, homeland security, social security, health care, the U.S. Postal Service, civil service reform, and the nation’s private pension system. We also informed policymakers about long-term challenges facing the nation, such as the federal government’s financial condition and fiscal outlook, new security threats in the post-cold war world, the aging of America and its impact on our health care and retirement systems, changing economic conditions, and the increasing demands on our infrastructure—from highways to water systems. We provided congressional committees, members, and staff with up-to-date information in the form of reports, recommendations, testimonies, briefings, and expert comments on bills, laws, and other legal matters affecting the federal government. We performed this work in accordance with the GAO Strategic Plan for serving the Congress, consistent with our professional standards, and guided by our core values. See appendix I for our Strategic Plan Framework for serving the Congress and the nation. In fiscal year 2004, our work generated $44 billion in financial benefits, primarily from recommendations we made to agencies and the Congress (see fig. 1). 
Of this amount, about $27 billion resulted from changes to laws or regulations, $11 billion resulted from agency actions based on our recommendations to improve services to the public, and $6 billion resulted from improvements to core business processes, both governmentwide and at specific agencies, resulting from our work (see fig. 2). Our findings and recommendations produce measurable financial benefits for the federal government when the Congress or agencies act on them. The funds that are saved can then be made available to reduce government expenditures or be reallocated to other areas. The monetary effect realized can be the result of changes in business operations and activities; the structure of federal programs; or entitlements, taxes, or user fees. For example, financial benefits could result if the Congress were able to reduce its annual cost of operating a federal program or lessen the cost of a multiyear program or entitlement. Financial benefits could also result from increases in federal revenues—due to changes in laws, user fees, or sales—that our work helped to produce. Financial benefits included in our performance measures are net benefits—that is, estimates of financial benefits that have been reduced by the costs associated with taking the action that we recommended. Figure 3 lists several of our major financial benefits for fiscal year 2004 and briefly describes some of our work contributing to financial benefits. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2004, we recorded a total of 1,197 other benefits (see fig. 4). We documented 74 instances where information we provided to the Congress resulted in statutory or regulatory changes, 570 instances where federal agencies improved services to the public, and 553 instances where agencies improved core business processes or governmentwide reforms were advanced (see fig. 5). 
These actions spanned the full spectrum of national issues, from ensuring the safety of commercial airline passengers to identifying abusive tax shelters. See figure 6 for examples of other benefits we claimed as accomplishments in fiscal year 2004. At the end of fiscal year 2004, 83 percent of the recommendations we made in fiscal year 2000 had been implemented (see fig. 7), primarily by executive branch agencies. Putting these recommendations into practice is generating tangible benefits for the American people. As figure 8 indicates, agencies need time to act on our recommendations. Therefore, we assess recommendations implemented after 4 years, the point at which experience has shown that, if a recommendation has not been implemented, it is not likely to be. During fiscal year 2004, experts from our staff testified at 217 congressional hearings (see fig. 9) covering a wide range of complex issues. For example, our senior executives testified on the financial condition of the Pension Benefit Guaranty Corporation’s single-employer program, the effects of various proposals to reform Social Security’s benefit distributions, and enhancing federal accountability through inspectors general. Nearly half of our testimonies were related to high-risk areas and programs. See figure 10 for a summary of issues we testified on, by strategic goal, in fiscal year 2004. Issued to coincide with the start of each new Congress, our high-risk update lists government programs and functions in need of special attention or transformation to ensure that the federal government functions in the most economical, efficient, and effective manner possible. Our latest report, released in January 2005, presents the status of high-risk areas identified in 2003 and lists new high-risk areas warranting attention by the Congress and the administration. In January 2003, we identified 25 high-risk areas; in July 2003, a twenty-sixth high-risk area was added to the list (see table 1).
Since then, progress has been made in all areas, although the nature and significance of progress varies by area. Federal departments and agencies, as well as the Congress, have shown a continuing commitment to addressing these high-risk challenges and have taken various steps to help correct several of their root causes. GAO has determined that sufficient progress has been made to remove the high-risk designation from the following three areas: student financial aid programs, FAA financial management, and Forest Service financial management. Also, four areas related to IRS have been consolidated into two areas. This year, we designated four new high-risk areas. The first new area is establishing appropriate and effective information-sharing mechanisms to improve homeland security. Federal policy creates specific requirements for information-sharing efforts, including the development of processes and procedures for collaboration between federal, state, and local governments and the private sector. This area has received increased attention, but the federal government still faces formidable challenges sharing information among stakeholders in an appropriate and timely manner to minimize risk. The second and third new high-risk areas are, respectively, DOD’s approach to business transformation and its personnel security clearance program. GAO has reported on inefficiencies and inadequate transparency and accountability across DOD’s major business areas, resulting in billions of dollars of wasted resources. Senior leaders have shown commitment to business transformation through individual initiatives in acquisition reform, business modernization, and financial management, among others, but little tangible evidence of actual improvement has been seen to date in DOD’s business operations. DOD needs to take stronger steps to achieve and sustain business reform on a departmentwide basis.
Further, delays by DOD in completing background investigations and adjudications can affect the entire government because DOD performs this function for hundreds of thousands of industry personnel from 22 federal agencies, as well as its own service members, federal civilian employees, and industry personnel. The Office of Personnel Management (OPM) is to assume DOD’s personnel security investigative function, but this change alone will not reduce the shortages of investigative personnel. The fourth high-risk area is management of interagency contracting. Interagency contracts can leverage the government’s buying power and provide a simplified and expedited method of procurement. But several factors can pose risks, including the rapid growth of dollars involved, the limited expertise of some agencies in using these contracts, and recent problems related to their management. Various improvement efforts have been initiated to address interagency contracting, but improved policies and processes, and their effective implementation, are needed to ensure that interagency contracting achieves its full potential in the most effective and efficient manner. Lasting solutions to high-risk problems offer the potential to save billions of dollars, dramatically improve service to the American public, strengthen public confidence and trust in the performance and accountability of our national government, and ensure the ability of government to deliver on its promises. In fiscal year 2004, we issued 218 reports and delivered 96 testimonies related to our high-risk areas and programs, and our work involving these areas resulted in financial benefits totaling over $20 billion.
This work, for example, included 13 reports and 10 testimonies examining problems with DOD’s financial management practices, such as weak internal controls over travel cards, inadequate management of payments to the Navy’s telecommunications vendors, and abuses of the federal tax system by DOD contractors, resulting in $2.7 billion in financial benefits. In addition, we documented $700 million in financial benefits based on previous work and produced 7 reports and 4 testimonies focusing on, for example, improving Social Security Administration and Department of Energy processes that result in inconsistent disability decisions and inconsistent benefit outcomes. Shortly after I was appointed in November 1998, I determined that GAO should undertake a major transformation effort to better enable it to “lead by example” and better support the Congress in the 21st century. This effort is consistent with House Report 108-577 on the fiscal year 2005 legislative branch appropriation, which focuses on improving the efficiency and effectiveness of operations at legislative branch agencies. H. Rpt. 108-577 directed GAO to work closely with the head of each legislative branch agency to identify opportunities for streamlining, cross-servicing and outsourcing, leveraging existing technology, and applying management principles identified as “best practices” in comparable public and private sector enterprises. H. Rpt. 108-577 also directed the legislative branch agencies to be prepared to discuss recommended changes during the fiscal year 2006 appropriations hearing cycle. Our agency transformation effort has enabled GAO to become more results-oriented, partnerial, client-focused, and externally aware, and less hierarchical, process-oriented, “siloed,” and internally focused. The transformation resulted in reduced organizational layers, fewer field offices, the elimination of duplication in several areas, and improved overall resource allocation.
We began our transformation effort by using the GAO Strategic Plan as a framework to align our organization and its resources. On the basis of the strategic plan, we streamlined and realigned the agency to eliminate a management layer, consolidated 35 issue areas into 13 teams, and reduced our field offices from 16 to 11. We also eliminated the position of Regional Manager—a Senior Executive Service level position—in the individual field offices and consolidated the remaining field offices into three regions—the eastern region, the central region, and the western region, each headed by a single senior executive. Following the realignment of our mission organization and field offices, GAO’s support organizations were restructured and centralized to eliminate duplication and to provide human capital, report production and processing, information systems desk-side support, budget and financial management, and other services more efficiently to agency staff. This has resulted in a 14 percent reduction in our support staff since 1998. As shown in figure 11, these and subsequent measures improved the “shape” of the agency by decreasing the number of mid-level managers and by increasing the number of entry level and other staff with the skills and abilities to accomplish our work. During my tenure, GAO has outsourced and cross-serviced many administrative support activities, which has allowed GAO to devote more of its resources to mission work. In fiscal year 2004, about two-thirds of our nonhuman capital costs were spent to obtain critical mission support services for about 165 activities from the private and public sectors through outsourcing. 
Outsourcing contracts include a wide range of mission support activities, including information technology systems development, maintenance, and support; printing and dissemination of GAO products; operation and maintenance of the GAO Headquarters building; information, personnel, and industrial security activities; records management; operational support; and audit service support. GAO also meets many of its requirements through cross-servicing arrangements with other federal agencies. For example, GAO uses the Department of Agriculture’s National Finance Center to process its personnel/payroll transactions. Also, GAO uses the legislative branch’s long-distance telephone contract, which has resulted in continual reductions in long-distance rates. GAO also uses a wide range of contracting arrangements available in the executive branch for procuring major information technology (IT) services. GAO also uses the Library of Congress’ Federal Library and Information Network to procure all of its commercial online databases. Currently, as shown in figure 12, over 50 percent of our staff resources in the support area are contractors, allowing us to devote more of our staff resources to our mission work. We recently surveyed managers of agency mission support operations and identified additional activities that potentially could be filled through alternative sourcing strategies. In fiscal years 2005 and 2006, we will assess the feasibility of alternative sourcing for these activities using an acquisition sourcing maturity model and cost-benefit analyses. Utilizing IT effectively is critical to our productivity, success, and viability. We have applied IT best management practices to take advantage of a wide range of available technologies such as Web-based applications and Web-enabled information access, as well as modern, mobile computing devices such as notebook computers to facilitate our ability to carry out our work for the Congress more effectively. 
We make wide use of third-party reviews of our practices and have scored well in measurement efforts such as total cost of ownership, customer service, and application development. In fiscal year 2002, an independent study of GAO’s IT processes and related costs revealed that, “GAO is delivering superb IT application support and development services to the business units at 29 percent less than the cost it would take the Government peer group to deliver.” In confirmation of these findings, GAO was one of only three federal agencies to receive the CIO Magazine 100 Award for excellence in effectively managing IT resources to obtain the most value for every IT dollar, and we were named to the magazine’s “CIO 100” list in both 2003 and 2004. Because one of our strategic goals is to maximize our value by serving as a model agency for the federal government, we adopt best practices that we have suggested for other agencies, and we hold ourselves to the spirit of many laws that are applicable only to the executive branch. For example, we adhere to the best practices for results-oriented management outlined in the Government Performance and Results Act (GPRA). We have strengthened our financial management by centralizing authority in a Chief Financial Officer with functional responsibilities for financial management, long-range planning, accountability reporting, and the preparation of audited financial statements, as directed in the Chief Financial Officers Act (CFO Act). Also, for the eighteenth consecutive year, independent auditors gave GAO’s financial statements an unqualified opinion with no material weaknesses and no major compliance problems. In the human capital area, we are clearly leading by example in modernizing our policies and procedures. For example, we have adopted a range of strategic workforce policies and practices as a result of a comprehensive workforce planning effort. 
Among other things, this effort has resulted in greatly upgrading our workforce capacity in both IT and health care policy. We also have updated our performance management and compensation systems and our training to maximize staff effectiveness and to fully develop the potential of our staff within both current and expected resource levels. We are requesting budget authority of $493.5 million for fiscal year 2006. This budget request will allow us to continue to maximize productivity, operate more effectively and efficiently, and maintain the progress we have made in technology and other areas. However, it does not provide sufficient funding to support a staffing level of 3,269—the staffing level that we requested in previous years. In preparing this request, we conducted a baseline review of our operating requirements and reduced them as much as we felt would be prudent. However, with about 80 percent of our budget composed of human capital costs, we needed to constrain hiring to keep our fiscal year 2006 budget request modest. We plan to use the recently enacted human capital flexibilities in the GAO Human Capital Reform Act of 2004 as a framework to consider such cost-saving options as conducting one or more voluntary early retirement programs, and we also plan to review our total compensation policies and approaches. There are increasingly greater demands on GAO’s resources. Since fiscal year 2000, we have experienced a 30 percent increase in the number of bid protest filings. We expect this workload to increase over the coming months because of a recent change in the law that expands the number of parties who are eligible to file protests. In addition, the number of congressional mandates for GAO studies, such as our reviews of executive branch and legislative branch operations, has increased more than 15 percent since fiscal year 2000. 
While we have reduced our planned staffing level for fiscal years 2005 and 2006, we believe that the staffing level we requested in previous years is closer to optimal for GAO and would allow us to successfully meet the future needs of the Congress and provide the return on investment that the Congress and the American people expect. We will be seeking your commitment and support to provide the funding needed to rebuild our staffing levels over the next few fiscal years, especially as we approach a point where we may be able to express an opinion on the federal government’s consolidated financial statements. Given current and projected deficits and the demands associated with managing a growing national debt, as well as challenges facing the Congress to restructure federal programs, reevaluate the role of government, and ensure accountability of federal agencies, a strong GAO will result in substantially greater benefits to the Congress and the American people. Table 2 summarizes the changes we are requesting in our fiscal year 2006 budget. Our budget request supports three broad program areas: Human Capital, Mission Operations, and Mission Support. In our Human Capital program, to ensure our ability to attract, retain, and reward high-quality staff and compete with other employers, we provide competitive salaries and benefits, student loan repayments, and transit subsidy benefits. We have undertaken reviews of our classification and compensation systems to consider ways to make them more market-based and performance-oriented and to take into consideration market data for comparable positions in organizations with which we compete for talent. Our rewards and recognition program recognizes significant contributions by GAO staff to the agency’s accomplishments. 
As a knowledge-based, world-class, professional services organization in an environment of increasingly complex work and accelerating change, we maintain a strong commitment to staff training and development. We promote a workforce that continually improves its skills and knowledge. We plan to allocate funds to our Mission Operations program to conduct travel and contract for expert advice and assistance. Travel is critical to accomplishing our mission. Our work covers a wide range of subjects of congressional interest, plays a key role in congressional decision making, and can have profound implications and ramifications for national policy decisions. Our analyses and recommendations are based on original research, rather than reliance on third-party source materials. In addition, GAO is subject to professional standards and core values that uniquely position the agency to support the Congress in discharging its oversight and other responsibilities under the Constitution. We use contracts to obtain expert advice or assistance not readily available within GAO, or when expertise is needed within compressed time frames for a particular project, audit, or engagement. Examples of contract services include obtaining consultant services, conducting broad-based studies in support of audit efforts, gathering key data on specific areas of audit interest, and obtaining technical assistance and expertise in highly specialized areas. Mission Support programs provide the critical infrastructure we need to conduct our work. Mission support activities include the following programs: Information Technology: Our IT plan provides a road map for ensuring that IT activities are fully aligned with and enable achievement of our strategic and business goals. The plan focuses on improved client service, IT reliability, and security; it promotes effectiveness, efficiency, and cost-benefit concepts. 
In fiscal years 2005 and 2006, we plan to continue to modernize outdated management information systems to eliminate redundant tasks, automate repetitive tasks, and increase staff productivity. We also will continue to modernize or develop systems focusing on how analysts do their work. For example, we enhanced the Weapons Systems Database that we created to provide the Congress information to support budget deliberations. Building Management: The Building Management program provides operating funds for the GAO Headquarters building and field office locations, safety and security programs, and asset management. We periodically assess building management components to ensure program economy, efficiency, and effectiveness. We are currently 8 percent below the General Services Administration’s (GSA) median costs for facilities management. We continue to look for cost-reducing efficiencies in our utility usage. Our electrical costs are currently 25 percent below GSA’s median cost. With the pending completion of our perimeter security enhancements and an automated agencywide access control system, all major security enhancements will have been completed. Knowledge Services: As a knowledge-based organization, it is essential for GAO to gather, analyze, disseminate, and archive information. Our Knowledge Services program provides the information assets and services needed to support these efforts. In recent years, we have expanded our use of electronic media for publications and dissemination; enhanced our external Web site, resulting in increased public access to GAO products; and closed our internal print plant and increased the use of external contractors to print GAO products, increasing the efficiency and cost-effectiveness of our printing operation. Due to recent budget constraints, we have curtailed some efforts related to archiving paper records. 
We currently are implementing an electronic records management system that will facilitate knowledge transfer, as well as document retrieval and archival requirements. Human Capital Operations: In addition, funds will be allocated to Human Capital Operations and support services to cover outplacement assistance, employee health and counseling, position management and classification, administrative support, and transcription and translation services. We appreciate your consideration of our budget request for fiscal year 2006 to support the Congress. GAO is uniquely positioned to help provide the Congress the timely, objective information it needs to discharge its constitutional responsibilities, especially in connection with oversight matters. GAO’s work covers virtually every area in which the federal government is or may become involved anywhere in the world. In the years ahead, GAO’s support will prove even more critical because of the pressures created by our nation’s large and growing long-term fiscal imbalance. This concludes my statement. I would be pleased to answer any questions the Members of the Committee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | We are grateful to the Congress for providing us with the support and resources that have helped us in our quest to be a world-class professional services organization. We are proud of the work we accomplish as we continue to provide our congressional clients with professional, objective, fact-based, non-partisan, non-ideological, fair, balanced, and reliable information in a timely manner regarding how well government programs and policies are working and, when needed, recommendations to make government work better. We believe that investing in GAO produces a sound return and results in substantial benefits to the Congress and the American people. In the years ahead, our support to the Congress will likely prove even more critical because of the pressures created by our nation's current and projected budget deficit and long-term fiscal imbalance. These fiscal pressures will require the Congress to make tough choices regarding what the government should do, how it will do its work, who will help carry out its work in the future, and how government will be financed in the future. We summarized the larger challenges facing the federal government in our recently issued 21st Century Challenges report. In this report, we emphasize the critical need to bring the federal government's programs and policies into line with 21st century realities. Continuing on our current unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. 
We, therefore, must fundamentally reexamine major spending and tax policies and priorities in an effort to recapture our fiscal flexibility and ensure that our programs and priorities respond to emerging security, social, economic, and environmental changes and challenges in the years ahead. We believe that GAO can be of invaluable assistance in helping the Congress address these challenges. This testimony focuses on our (1) performance and results with the funding you provided us in fiscal year 2004, (2) streamlining and management improvement efforts under way, and (3) budget request for fiscal year 2006 to support the Congress and serve the American people. In summary, the funding we received in fiscal year 2004 allowed us to audit and evaluate a number of major topics of concern to the nation and, in some cases, the world. We also continued to raise concerns about the nation's long-term fiscal imbalance, summarized key health care statistics and published a proposed framework for related reforms, and provided staff support for the 9/11 Commission. In fiscal year 2004, we exceeded or equaled our all-time record for six of our seven key performance indicators while continuing to improve our client and employee feedback results. We are especially pleased to report that we documented $44 billion in financial benefits--a return of $95 for every dollar spent, or $13.7 million per employee. In fiscal year 2004, we also recorded 1,197 other benefits that could not be measured in dollar terms, including benefits that helped to change laws, improve services to the public, and promote sound agency and governmentwide management. Also, experts from our staff testified at 217 congressional hearings covering a wide range of important public policy issues during fiscal year 2004. Shortly after David Walker was appointed Comptroller General, he determined that our agency would undertake a transformation effort. 
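The return-on-investment figures cited in this summary can be cross-checked with simple arithmetic. The sketch below back-computes the budget and staffing levels implied by the stated $44 billion in financial benefits; only the three input figures come from the testimony, while the derived values are illustrative.

```python
# Cross-check of the fiscal year 2004 return-on-investment figures.
# Inputs are the figures stated in the testimony; the implied budget
# and headcount are derived here for illustration only.

financial_benefits = 44e9         # documented financial benefits, FY 2004
return_per_dollar = 95            # stated return for every dollar spent
benefit_per_employee = 13.7e6     # stated benefit per employee

implied_budget = financial_benefits / return_per_dollar
implied_staff = financial_benefits / benefit_per_employee

print(f"Implied budget: ${implied_budget / 1e6:.0f} million")
print(f"Implied staffing level: {implied_staff:.0f}")
```

The derived figures, roughly $463 million and about 3,200 staff, are broadly consistent with the $493.5 million fiscal year 2006 request and the 3,269 staffing level discussed elsewhere in the testimony.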
Our transformation effort has enabled us to eliminate a management layer, streamline our organization, reduce our overall footprint, and centralize many of our support functions. Currently, over 50 percent of our support staff are contractors, allowing us to devote more of our staff resources to our mission work. We recently surveyed managers of agency support operations and identified additional activities that potentially could be filled through alternative sourcing strategies. In fiscal years 2005 and 2006, we will further assess the feasibility of using alternative sourcing for these activities. I would be pleased to brief you at a later date on our preliminary analyses. In developing our fiscal year 2006 budget, we have taken into consideration the overall federal budget constraints and the committee's desire to lead by example. Accordingly, we are requesting $493.5 million, which represents a modest increase of 4 percent over fiscal year 2005. This increase is primarily for mandatory pay costs and price level changes. This budget request will allow us to continue to maximize productivity, operate more effectively and efficiently, and maintain the progress we have made in technology and other areas, but it does not allow us sufficient funding to support a staffing level of 3,269--the staffing level that we requested in previous years. Even as we are tempering our budget request, it needs to be acknowledged that there are increasing demands on GAO's resources. While we have reduced our planned staffing level for fiscal years 2005 and 2006 in order to keep our request modest, we believe that the staffing level we requested in previous years is closer to optimal for GAO and would allow us to better meet the needs of the Congress and provide the return on investment that both the Congress and the American people expect. |
A paid tax return preparer is anyone who is paid to prepare, assist in preparing, or review a taxpayer’s tax return. In this report, we refer to two categories of paid preparers—tax practitioners and unenrolled preparers. CPAs, attorneys, and enrolled agents are tax practitioners. Tax practitioners can practice before IRS; practicing before IRS includes the right to represent a taxpayer before the IRS, to prepare and file documents with IRS for the taxpayer, and to correspond and communicate with IRS. Individuals can become enrolled agents by passing a 3-part examination; IRS waives the examination requirement for people with specific prior work experience at IRS. Department of the Treasury Circular 230, Regulations Governing the Practice of Attorneys, Certified Public Accountants, Enrolled Agents, Enrolled Actuaries, and Appraisers before the Internal Revenue Service, applies to tax practitioners and governs their duties, restrictions, sanctions, and disciplinary proceedings. IRS’s Office of Professional Responsibility (OPR) has responsibility for administering and enforcing Treasury Circular 230. We use the term unenrolled preparer to describe the remainder of the paid preparer population. In most states, anyone can be an unenrolled preparer regardless of education, experience, or other standards. Paid preparers are a critical part of the nation’s tax administration system because of the wide variety of services they offer and their unique relationship with taxpayers. Paid preparers may combine several taxpayer services, including help understanding tax obligations, answering tax law questions, and providing tax forms and publications, return preparation, and electronic filing. IRS regards tax professionals as a critical link between taxpayers and the government. For example, IRS has a section of its Web site dedicated to providing information directly to tax professionals. 
IRS also sponsors the Nationwide Tax Forums, annual conferences in several cities every year to provide tax education to paid preparers. The Web site of the National Association of Tax Professionals also points out the shared responsibility of paid preparers to represent their clients while respecting the law, listing among its professional standards one that says “Should the client insist upon item being stated on the return incorrectly, the member should withdraw and refuse to prepare the return.” The number of active paid preparers is unknown. In 1999, IRS estimated there were up to 1.2 million paid preparers, but IRS officials acknowledge that the actual number could be significantly higher or lower. The total number of active paid preparers is unknown because only a small portion of all paid preparers—enrolled agents—are licensed directly by IRS to practice before the IRS. As of June 2008, about 43,000 tax preparers were actively enrolled to practice before the IRS. IRS officials said that the number of new enrolled agent applications and the number of people taking the examination have declined in recent years. They noted that these declines followed increases in enrolled agent application and examination fees. Similarly, the number of attorneys and accountants who make tax return preparation a part of their practice is unknown. Millions of tax returns prepared by paid preparers have serious compliance problems, which often leave taxpayers owing or overpaying by hundreds or thousands of dollars. As we have previously reported, IRS’s tax year 2001 NRP data indicate that tax returns prepared by paid preparers had a higher error rate—56 percent—than returns prepared by taxpayers—47 percent. In 2002, we estimated that on as many as 2.2 million tax returns, taxpayers claimed the standard deduction when their potential itemized deductions were greater, and that about half of these taxpayers had returns prepared by another person. 
In 2005, we reported that many tax returns included claims for one of three available postsecondary education tax preferences that resulted in higher overall tax liability than if one of the other preferences had been taken, and that over half of these returns were prepared by paid preparers. However, the fact that errors were made on a return done by a paid preparer does not necessarily mean the errors were the preparer’s fault; the taxpayer may be to blame. The preparer must depend on the information provided by the taxpayer. On the other hand, some mistakes are clearly the fault of the preparer. In 2006, we reported on the results of an investigation where we identified mistakes in 19 out of 19 visits to paid preparers working in preparer chain offices. Some of the mistakes were significant, either exposing the taxpayers to serious IRS enforcement action or costing taxpayers over $1,500 in overpaid taxes. In 2007, the Department of Justice took action against corporations operating franchises of a major tax preparation chain. The government complaints alleged that the franchisee corporations created and fostered a business environment “in which fraudulent tax return preparation is encouraged and flourishes.” The corporations that owned the franchises agreed to sell the franchises to new owners and to be permanently barred from preparing federal income tax returns. When mistakes or deliberate noncompliance by paid preparers result in taxpayers underreporting their tax liabilities, it adds to the tax gap. The net tax gap is an estimate of the difference between the taxes owed—including individual income, corporate income, employment, estate, and excise taxes—and what was eventually paid for a specific year. IRS most recently estimated the net tax gap to be $290 billion in 2001. 
In March 2008, we recommended that IRS develop a plan to require a single identification number for paid preparers, including assessing the feasibility of options, their benefits and costs, and their usefulness for enforcement and for research on paid preparer behavior. Also, as of July 2008 there were similar bills pending before Congress calling for national paid preparer regulation. Senate Bill 1219 and House of Representatives Bill 5716 would require members of the current community of unenrolled paid preparers to pass an initial qualifying examination and meet continuing annual education requirements. Support for legislation such as this can be found in the National Taxpayer Advocate’s 2002 and 2003 Annual Reports to Congress, which recommended Congress create a designation called a “Federal Tax Return Preparer,” defined as someone other than an attorney, CPA, or enrolled agent, who prepares more than five federal tax returns in a calendar year and satisfies registration, examination, and certification requirements. Only a few Internal Revenue Code provisions apply to all paid preparers, and only a small portion of paid preparers—enrolled agents—have any federal registration, testing, or fee requirements. All paid preparers are subject to a few Code provisions and may be penalized if they fail to follow them. For example, the Internal Revenue Code imposes monetary penalties on paid preparers who (1) understate a taxpayer’s liability due to a position that fails to meet the applicable legal standard, (2) fail to provide a copy of the return to the taxpayer, or (3) fail to identify themselves on the returns they prepare. Additionally, for returns that include the Earned Income Credit (EIC), paid preparers must ask specific questions to determine a taxpayer’s eligibility for the credit. Also, all paid preparers who choose to file electronically are subject to IRS Electronic Return Originator rules. 
Both California and Oregon began to regulate paid preparers in the 1970s. California’s program was first administered by the state’s Department of Consumer Affairs, and legislation transferred oversight responsibility to CTEC in 1997. Oregon’s program was established by the 1973 Oregon Legislative Assembly after representatives of the state’s paid preparer community recommended that the legislature regulate the profession. According to a preparer involved at the time, the Oregon Legislative Assembly was responding to a report that there were many dishonest or incompetent paid preparers working in the state. The main features of California’s paid preparer program are qualifying and continuing education and registration. To become a CRTP, individuals initially register with CTEC by completing a 60-hour qualifying education course, purchasing a $5,000 surety bond, completing an application, and paying a $25 registration fee. CTEC may waive some of the qualifying education requirements for individuals with 2 recent years of experience in the preparation of personal income tax returns. In each subsequent year, CRTPs must complete 20 hours of continuing education, ensure their bond remains in full force, submit a renewal application, and pay a $25 renewal fee. As of June 6, 2008, 41,755 paid preparers were registered with CTEC. CPAs, attorneys, enrolled agents, and employees of any of these types of tax practitioners are exempt and not required to register. California does not require prospective CRTPs to pass a criminal background check or to report past criminal convictions or current legal issues. This means that prior questionable or illegal conduct is not known to program administrators. 
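The CRTP initial-registration and annual-renewal requirements described above can be summarized as a simple eligibility check. This is an illustrative sketch only; the field and function names are hypothetical and do not correspond to any actual CTEC system.

```python
from dataclasses import dataclass

# Illustrative model of the CRTP requirements described in the text.
# All names here are hypothetical; CTEC's actual processes differ.

@dataclass
class Preparer:
    qualifying_education_hours: int = 0  # 60-hour course for initial registration
    continuing_education_hours: int = 0  # 20 hours required each renewal year
    surety_bond: int = 0                 # must maintain a $5,000 surety bond
    fee_paid: bool = False               # $25 registration or renewal fee

def meets_initial_requirements(p: Preparer) -> bool:
    return (p.qualifying_education_hours >= 60
            and p.surety_bond >= 5_000
            and p.fee_paid)

def meets_renewal_requirements(p: Preparer) -> bool:
    return (p.continuing_education_hours >= 20
            and p.surety_bond >= 5_000
            and p.fee_paid)

new_applicant = Preparer(qualifying_education_hours=60,
                         surety_bond=5_000, fee_paid=True)
print(meets_initial_requirements(new_applicant))  # True
```

Note that, as the text explains, this check has no criminal-history component: California's program does not screen applicants' past conduct at all.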
Moreover, CTEC does not have the authority to deny a preparer’s registration application based on known illegal conduct, nor does the California Code include provisions for refusing to renew a CRTP’s registration as long as the CRTP meets the continuing education requirement and pays the annual registration fee. The 60-hour qualifying education requirement is intended to ensure paid preparers have a basic knowledge of federal and California tax laws. According to the CTEC policy manual, the intent of the annual continuing education requirement is to enhance the paid preparer’s skill in tax matters above the basic knowledge they have already acquired. CTEC approves an education provider’s curriculum based on an independent review of one of the prospective provider’s courses at least once every 3 years. People who prepare tax returns in California without becoming CRTPs, and who are not one of the exempt types of tax practitioners, can be fined. Under a Memorandum of Understanding between CTEC and the California Franchise Tax Board (FTB), the FTB is reimbursed by CTEC for providing staff to identify unregistered tax preparers. In 2007, FTB provided one full-time and one part-time employee and CTEC reimbursed FTB $270,000. Persons suspected of illegally preparing tax returns are first issued penalty letters and encouraged to become registered. If they do not register within 90 days, the FTB can levy fines of up to $5,000. An FTB official said that between July 1, 2005, and June 30, 2006, FTB identified 77 individuals as unregistered. Many of these persons were identified by the 2 FTB staff members who visited the Los Angeles and San Francisco Bay areas—where there are large numbers of paid preparer offices—met with paid preparers, and asked to see evidence of registration. Noncompliant paid preparers have also been identified through complaints sent to CTEC and passed along to FTB. 
Oregon requires paid preparers who are not already licensed by the state as CPAs or attorneys, or working for a CPA, to obtain a state license to prepare tax returns. Enrolled agents—practitioners licensed by Treasury—must also obtain an Oregon license, but they are subject to fewer qualifying requirements than other individuals who are seeking an LTC license. The state board that administers the program—the Oregon Board of Tax Practitioners—issues two levels of paid preparer licenses: the Licensed Tax Preparer (LTP) license and the Licensed Tax Consultant (LTC) license. To become an LTP, a person must have a high school diploma or the equivalent, complete 80 hours of approved qualifying education, pass a state-administered examination with a score of 75 percent or better, and pay an $80 registration fee. To continue as an LTP in following years, individuals must annually renew their license by completing 30 hours of approved continuing education and paying an $80 renewal fee. An LTP in Oregon may only prepare tax returns for Oregon residents under the supervision of an LTC, CPA, or attorney. A person can become an LTC after working as a tax preparer for a minimum of 780 hours during 2 of the prior 5 years, completing a minimum of 15 hours of continuing education within 1 year of submitting an application, and passing a more advanced examination with a score of 75 percent or better. LTPs and LTCs must disclose on their initial license and license renewal applications if they have been convicted of a crime or are under indictment for criminal offenses involving dishonesty, fraud, or deception. According to the Oregon statute, OBTP can consider the circumstances in particular cases and still approve an application when the applicant has disclosed a legal issue. Many applicants do not pass the LTP or LTC examinations. For instance, from March 1, 2006, to February 28, 2007, 54 percent of test takers passed the LTP examination and 30 percent passed the LTC examination. 
The OBTP updates both examinations yearly. The examinations cover specific Oregon and federal personal income tax laws as well as tax theory and practice. The LTC examination also includes questions on corporation and partnership income as they relate to personal income tax returns. Approximately 75 percent of the examination questions pertain to federal law and 25 percent to state law. IRS enrolled agents in Oregon who wish to become LTCs must pass a shorter version of the LTC examination that is limited to Oregon state laws. The intent of Oregon’s education and examination requirements is to ensure that paid preparers comprehend the state and federal tax codes. OBTP reports that in March 2008, 3,993 paid preparers held one of these two licenses—1,916 LTPs and 2,077 LTCs. The Oregon statute includes fines for preparing tax returns without a license. Each return prepared can generate a separate fine, so the total penalty for working as an unlicensed preparer can be very large. OBTP also has the authority to assess civil penalties of up to $5,000, or suspend or revoke the license of LTCs and LTPs who engage in fraudulent or illegal conduct, or who violate other provisions of the Oregon statutes or OBTP rules. Additionally, the board may order restitution to consumers harmed by tax preparation fraud. From March 2001 to November 2007, OBTP took disciplinary action 48 times, with fines totaling about $2 million. The largest fine for one individual was in April 2002 for $805,700. Only a fraction of the fines levied is eventually collected, however: about $867,000 in fines was levied from July 2005 through June 2007, but only about $69,000 in fines and $6,000 in interest were collected during the same period. Persons penalized by the OBTP can appeal these decisions, and OBTP has an arrangement with the Oregon Office of Administrative Hearings to provide an administrative law judge to hear these cases. Individuals can also appeal their cases to the Oregon Court of Appeals. 
Both California and Oregon use their registered or licensed paid preparer lists to contact preparers to remind them about requirements and to inform them about changes to the tax code or other matters they should know about. However, neither state uses its preparer information to track paid preparer accuracy or for enforcement purposes. California does not require CRTPs to include their CTEC registration number on either the state or federal tax returns that they prepare. Oregon requires LTCs and LTPs to include their license number on both types of returns, but officials told us that this requirement is not consistently followed; some licensees incorrectly enter their Preparer Tax Identification Number, Social Security number, or an employer’s Employer Identification Number. Consequently, neither state has a reliable means to track or analyze returns prepared by its registered or licensed paid preparers. Table 1 illustrates some of the highlights of the California and Oregon regulatory programs. In May 2008, Maryland also enacted paid preparer legislation that will require tax preparers to pass an examination, pay a registration fee, and subsequently comply with continuing education requirements. Also, New York, Oklahoma, and Arkansas all have legislation pending that would create tax preparer programs. All three pending bills would create an oversight regime that includes tax preparer registration and education requirements, both initial and continuing. The Oklahoma and Arkansas bills require that preparers pass an examination to register. Arkansas’s pending legislation closely models the Oregon regime, with requirements for both preparers and consultants. New York’s pending legislation is similar to California’s paid preparer program, requiring preparers to maintain surety bonds but having no provision for preparer testing. 
The enacted Maryland program and the pending legislation in New York and Oklahoma exempt CPAs, attorneys and their employees, and enrolled agents from the requirements. The Arkansas bill would exempt CPAs and attorneys and their employees, and would require enrolled agents to pass a test only on Arkansas tax law issues. Table 2 provides an overview comparison of the California and Oregon requirements with the Maryland requirements and the pending legislation in the other states. IRS officials noted that continued growth in the number of different paid preparer registration or licensing regimes in different states could become a problem if the requirements differ from state to state. The officials described this as primarily a problem for the tax preparation industry in that a variety of regulatory regimes across many different states could make it complicated, for example, for paid preparers to move their practice from one state to another or for a tax preparation chain to move employees or expand their operations. When controlling for other factors likely to affect tax return accuracy, our analysis of IRS data showed that tax year 2001 federal tax returns filed in Oregon were more likely to be accurate than returns in the rest of the country, which is consistent with but not sufficient to prove that Oregon’s regulatory regime improves tax return accuracy. Relative to the rest of the country, Oregon paid preparer returns had a greater likelihood of being accurate and California paid preparer returns were less likely to be accurate. Specifically, we found that the odds that a return filed by an Oregon paid preparer was accurate were about 72 percent higher than the odds for a comparable return filed by a paid preparer in the rest of the country. Conversely, the odds that a paid preparer return in California was accurate were about 22 percent lower than for paid preparer returns in the rest of the country. 
This indicates that California’s paid preparer regulatory regime may not improve the likelihood that returns are accurate, relative to the rest of the country. Our analysis controlled for factors such as the complexity of tax returns in comparing California and Oregon to the rest of the country. However, our analysis cannot rule out the possibility that factors for which we could not control affected the accuracy of tax returns in either state. To determine the relative likelihood that Oregon and California returns were accurate, we used multivariate logistic regression to compare the odds of return accuracy in these states to the odds in the rest of the country, controlling for other characteristics that might influence return accuracy. To make these accuracy comparisons, we used data from IRS’s NRP, which assessed the accuracy of individual tax returns from tax year 2001. We defined a return as accurate if it required less than $100, in absolute value, in changes to tax liability. As an illustration of the differences among paid preparer returns in California and Oregon, we computed the probability of accuracy for a medium complexity Form 1040, U.S. Individual Income Tax Return, for a taxpayer with income over $100,000. While a return with these characteristics prepared by a paid preparer in Oregon would have a 74 percent probability of being accurate, a similar return prepared by a paid preparer in California would have a 55 percent probability of being accurate. In addition to having a higher likelihood of accuracy than the rest of the country, the average Oregon 2001 federal tax return—regardless of whether it was self-prepared or prepared by a paid preparer—required a smaller auditor-identified increase in taxes owed. In Oregon, the average return required approximately $250 less of a change in tax liability than the average return in the rest of the country. 
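The arithmetic linking the odds ratios reported above to probabilities like the 74 percent and 55 percent figures can be sketched in a few lines. The baseline probability of 0.62 below is a hypothetical rest-of-country value chosen only to illustrate the conversion; it is not a figure from the analysis.

```python
def apply_odds_ratio(p_base, odds_ratio):
    # Convert a baseline probability to odds, scale by the odds ratio,
    # and convert the result back to a probability.
    odds = p_base / (1.0 - p_base) * odds_ratio
    return odds / (1.0 + odds)

# 1.72 and 0.78 correspond to odds "about 72 percent higher" (Oregon)
# and "about 22 percent lower" (California) than the rest of the country.
p_base = 0.62  # hypothetical baseline accuracy probability (assumption)
p_oregon = apply_odds_ratio(p_base, 1.72)      # roughly 0.74
p_california = apply_odds_ratio(p_base, 0.78)  # roughly 0.56
```

With this hypothetical baseline, the implied probabilities land near the 74 percent and 55 percent figures in the illustration above, showing how a higher odds ratio translates into a higher probability of an accurate return.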
Our $250 estimate is conservative in that it does not incorporate the limited number of cases with relatively large liability changes. With about 1.56 million individual tax filers in Oregon in 2001, this translates into over $390 million more in income taxes paid in Oregon than would have been paid if Oregon returns were prepared at the level of accuracy seen on similar returns in the rest of the country. The average tax liability change in California was higher than the average in the rest of the country by approximately $90. Although the differences we observed in the states’ regulatory programs and in how likely California and Oregon returns were to be accurate compared to the rest of the country are consistent with the Oregon regime leading to some improved federal tax return accuracy, the analysis cannot rule out the possibility that the regime had no such effect. We could not control for other factors that may influence accuracy, such as whether Oregon paid preparers were more likely to be attorneys or CPAs than preparers elsewhere in the country. Also, data are not available on return accuracy prior to the existence of each state’s program, so we cannot compare the before and after effects of the regimes. Before and after data might have shown, for instance, whether the California regime leads to improved tax return accuracy compared to what it otherwise would have been, even though California’s returns in 2001 were less accurate, on average, than returns in the rest of the country. Also, we considered the accuracy of tax returns in other states and found that some states without paid preparer laws had more accurate tax returns than the national average, after controlling for the factors in our model. This indicates that regulation of paid preparers alone does not explain the differences that we found. 
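The $390 million figure above follows directly from the per-return liability difference and the filer count; a minimal check of that arithmetic:

```python
avg_liability_difference = 250   # dollars less change per Oregon return, on average
oregon_filers = 1_560_000        # about 1.56 million individual filers in Oregon, 2001

# 250 * 1,560,000 = 390,000,000, i.e., over $390 million in additional
# income taxes paid relative to rest-of-country accuracy levels.
additional_revenue = avg_liability_difference * oregon_filers
```

Both inputs are rounded figures from the analysis, so the product is an approximation rather than an exact total.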
Further, to the extent that the Oregon regime does improve tax return accuracy, our methodology does not identify which parts of the regime are most important to that result. Our methodology only takes into account the entire regimes as implemented in Oregon and California. Both California and Oregon support their programs almost entirely through fees, with state program costs averaging about $29 and $123 per year, respectively, per registered paid preparer. In addition to the fees charged to paid preparers, the preparers or their employers bear other costs, such as those associated with taking courses on tax law and return preparation. Program administrators and preparer community representatives in both states said that there are intangible benefits from their regulatory regimes, although there are no studies quantifying outcomes in either place. The California and Oregon paid preparer registration programs differ in design features, such as whether applicants are tested and how much enforcement is deemed desirable, and these differences show, not surprisingly, that more extensive programs cost more. California’s paid preparer program is more limited in scope than Oregon’s and has lower direct administration costs per registered preparer. Because neither state provides funding for the programs above the fees collected, the entire cost of both programs is borne directly or indirectly by the regulated paid preparer communities. As noted previously, California’s program primarily requires unenrolled preparers to register with the state and meet minimum education requirements. The total direct budgeted cost of the California program was about $1.2 million in fiscal year 2007, with most of the funding coming from the $25 registration fees that CRTPs must pay; additional funds come from late registration fees and other income, such as fees paid by education providers that apply to be approved as CTEC education providers. 
CTEC’s total budget in 2007 was $1.2 million and CTEC reported 41,755 CRTPs in June 2008, so the cost per CRTP was about $29. According to CTEC officials, no funds from state tax revenues are used to pay for administering or enforcing California’s paid preparer laws. Like California, Oregon registers preparers and seeks to ensure that paid preparers meet minimum education requirements, but it also tests prospective LTPs and LTCs, adding to the administration cost of the Oregon program. In 2008, prospective LTPs pay $50 and prospective LTCs pay $85 to take the examinations. Also as of 2008, LTPs pay $80 and LTCs pay $95 to obtain their initial license and, in each subsequent year, to renew their license. The registration fee for a new LTC who had been an LTP is $65. OBTP also collects fines and penalties from both unlicensed tax return preparers and licensed paid preparers who violate Oregon laws—averaging about $38,000 per year in the 2005 through 2007 period. OBTP’s administrative expenses amounted to about $490,000 in 2007; divided by the 3,993 LTCs and LTPs OBTP reported in March 2008, this is about $123 per licensee. According to OBTP officials, OBTP’s operating funds come from the fees and fines described above and none come from the state’s general revenues. Administrative functions of CTEC and OBTP include communicating with paid preparers and the public at large about their regulations, informing the paid preparer community about tax law and processing changes, evaluating education providers, recordkeeping related to registration and licensing, maintaining a Web site that taxpayers can use to find a paid preparer or check that a particular paid preparer is properly registered or licensed, and working with the state legislature and the rest of the state government. 
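The per-preparer cost figures cited above are straightforward quotients of each program’s budget and its headcount; a quick check of both:

```python
california_budget = 1_200_000  # CTEC total budget, 2007
california_crtps = 41_755      # CRTPs reported in June 2008
oregon_budget = 490_000        # OBTP administrative expenses, 2007
oregon_licensees = 3_993       # LTPs plus LTCs reported in March 2008

# Budget divided by headcount gives the approximate annual cost per
# registered or licensed preparer: about $29 in California, $123 in Oregon.
cost_per_crtp = california_budget / california_crtps
cost_per_oregon_licensee = oregon_budget / oregon_licensees
```

Note that the budget and headcount figures come from slightly different dates, so these are rough averages rather than precise unit costs.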
Some of the difference in the administrative cost per registered or licensed preparer between the two states may be attributed to economies of scale in the registration of paid preparers that California has relative to Oregon. While California’s direct operating budget is about twice the size of Oregon’s, the number of preparers that it registers is more than 10 times greater. Enforcement-related expenses take up a share of the CTEC and OBTP budgets. In California, CTEC paid the FTB $270,000 in fiscal year 2007 to conduct enforcement targeted at identifying unregistered preparers and either bringing them into compliance or fining them. CTEC is not involved in imposing fines on unregistered preparers and has no means of taking enforcement action against a CRTP for misconduct, and it has never incurred litigation expenses associated with someone appealing a CTEC decision. In Oregon, the OBTP has a full-time investigator on its staff and directly imposes fines on both licensed and unlicensed paid preparers for misconduct. As discussed previously, these fines can be appealed, so OBTP arranges with the Oregon Office of Administrative Hearings for an administrative law judge to hear cases, and reimburses the Oregon Attorney General’s Office for counsel to handle legal aspects of disputed cases. In 2007, OBTP expenses for its investigator and costs related to litigation were about $93,000. The regulatory programs in the two states impose additional costs beyond the direct administration expenses found in the CTEC and OBTP budgets. In both states, prospective paid preparers must meet qualifying education requirements and the financial and time costs of obtaining this education are directly borne by either the individual or his or her employer. We contacted frequently used education providers in both states and found costs were typically in the $200 to $300 range, although one was $614. 
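Combining the course fees above with Oregon’s 80-hour qualifying education requirement and the Bureau of Labor Statistics average wage for tax preparers used in our cost estimate ($16.78 per hour in 2007) gives a rough per-person education cost. The sketch below values class time only and ignores outside study time, and the $250 course fee is simply the midpoint of the typical range, so it is illustrative rather than a figure from the analysis.

```python
HOURLY_WAGE = 16.78    # BLS 2007 national average wage for tax preparers
QUALIFYING_HOURS = 80  # Oregon LTP qualifying education requirement
COURSE_FEE = 250       # midpoint of the typical $200-$300 fee range (assumption)

# Value of a prospective preparer's class time, plus the course fee.
time_cost = QUALIFYING_HOURS * HOURLY_WAGE  # about $1,342 of preparer time
total_education_cost = time_cost + COURSE_FEE
```

Even this partial tally shows that the education requirement, not the $80 license fee, dominates the per-person cost of entering the Oregon profession.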
According to paid preparers we spoke to, the cost of obtaining continuing education was sometimes fairly low, especially when continuing education was obtained through participation in professional associations. In some associations, monthly meetings usually include a presentation that qualifies for continuing education credit. Other preparers, however, may choose to travel to conferences or training sessions, such as an IRS Nationwide Tax Forum, to obtain their continuing education over just a few days. The registration fee for the IRS forums is fairly low—$179 for early registration in 2008. Out-of-town travel, when necessary, adds to the cost of obtaining required continuing education. Continuing education can also be obtained from state-approved education providers in both classroom settings and over the Internet. Because results for the Oregon regime are consistent with some positive effect on federal tax return accuracy, the cost of that regime is of particular interest. We conservatively estimated the total costs associated with Oregon’s regulation to be about $6 million in 2007. This estimate includes the regime’s direct administrative costs as well as an estimate of the cost of licensees obtaining qualifying and continuing education from education providers, the value of the time they spend in those classes and studying outside of class, and the same education-related costs for all unsuccessful test takers. This estimate is conservative because it counts preparer education time and expense for all licensees, including enrolled agents, who have continuing education requirements under that program, and employees of tax preparation chains that require similar education for all of their preparers. Appendix I describes how we made our estimate. IRS has developed rough measures of return on investment in terms of tax revenue that it assesses from uncovering noncompliance. 
Generally, IRS cites an average return on investment for enforcement of 4:1, that is, IRS estimates that it collects $4 in revenue for every $1 of funding. For the Oregon paid preparer regulatory regime to be considered a reasonably cost-effective tax administration policy by this standard, it would have to account for only a small share of the $390 million in higher federal tax revenue we estimated came in from Oregon compared to the rest of the country. It is important to note that the 4:1 IRS average return is based on administrative spending and such expenses are less than 10 percent of our approximately $6 million annual total cost estimate for the Oregon program. Regulation of preparers can also have the effect of increasing the price of tax preparation services by reducing the supply of paid preparers. A California tax preparer association representative said that the costs to obtain and maintain CRTP status are fairly low and likely do not have much of an impact on prices consumers pay, and that the requirements to become a paid preparer are not so great that the number of paid preparers in the state is being held lower than it would be without any regulation. In Oregon, however, direct costs to become a paid preparer and to maintain licensed status are somewhat higher. Potentially more important, however, is the requirement that LTPs only work in offices supervised by an LTC, attorney, or CPA, and that LTCs may not supervise more than two offices. This means that there can be a substantial bar to the opening of a new tax preparation business if the owner cannot find and recruit an LTC. We were told by a representative of a tax preparation chain that he had experienced difficulty in opening a new rural office because he could not find an LTC to supervise LTPs. However, since there are somewhat more LTCs in Oregon than LTPs, such problems may be limited. 
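The 4:1 benchmark discussed above implies a break-even share of the $390 million revenue difference. The calculation below is our own illustrative arithmetic using the report’s rounded figures, not an estimate from IRS:

```python
total_cost = 6_000_000            # estimated annual total cost of the Oregon regime
admin_cost = 490_000              # direct administrative expenses, 2007
revenue_difference = 390_000_000  # higher revenue associated with Oregon accuracy

# Administrative spending is well under 10 percent of the total cost estimate.
admin_share = admin_cost / total_cost  # about 0.08

# To return $4 for every $1 of the full $6 million cost, the regime would
# need to account for only about 6 percent of the revenue difference.
required_share = 4 * total_cost / revenue_difference
```

This is why the text notes that the regime need account for only a small share of the $390 million to clear IRS’s usual return-on-investment standard.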
Data that could be used to analyze prices charged by paid preparers in California or Oregon, or to compare prices charged in those states with the rest of the country, are not available. NRP data, however, provide a related point of comparison on the use of paid preparers. NRP data show that taxpayers in Oregon are somewhat less likely to use a paid preparer than taxpayers in the rest of the country and even less likely to use paid preparers than taxpayers in California. NRP data show that about 58 percent of individual taxpayers used paid preparers nationally, while only 49 percent of Oregon taxpayers did so. About 64 percent of California tax returns were prepared by paid preparers. It is possible that the Oregon regulatory regime has had the effect of reducing the supply of paid preparers, leading to an increase in the price charged for the service. Program administrators and preparer community representatives in both California and Oregon described their programs as having benefits that outweigh their costs. Officials in both states also said they believe that paid prepared tax returns are more accurate due to their paid preparer regulatory regimes. However, neither California nor Oregon program administrators have analyzed tax returns to see if this is the case. Representatives also noted that registration facilitates communication with paid preparers that are registered or licensed, so notifying them about, for example, recent changes in tax rules or forms, can be done fairly easily. Program administrators and paid preparer community representatives in California and Oregon also told us education requirements likely reduce the number of incompetent paid preparers and have led to a more professional tax preparation industry. California and Oregon program administrators also said that consumers benefit from the ability to go online and verify whether a paid preparer is registered or licensed. 
Both state programs also give taxpayers the ability to seek restitution when wronged by a paid preparer. A benefit of the Oregon program is that prospective preparers who cannot pass the state examination are not allowed to prepare tax returns in that state. As noted previously, the Oregon LTP examination has only a 54 percent passing rate. This means that many people who want to become paid preparers but lack the knowledge and skills necessary to pass the Oregon exam are not legally preparing tax returns. People in every other state with a similar desire to become a paid preparer—and a similar lack of skill—are presumably preparing tax returns. Occupational licensing of other professions has been shown to have costs and benefits to the consumer. As with other markets for services, licensing paid preparers might be expected to have several potential effects depending on how licensing requirements are designed. Depending on the level of education or expertise required to obtain a license, some preparers who become licensed may acquire additional knowledge, which helps them better prepare returns or expand their expertise to additional types of returns. In Oregon, officials said that they believe unlicensed tax preparers cost the consumer money when they prepare incorrect or inaccurate tax returns. Occupational licensing of other professions suggests that taxpayers may be willing to pay more to have their returns prepared by registered or licensed paid preparers if the regulatory requirements (i.e., education requirements) provide greater assurance of a higher quality prepared return. Consumers who continue to use these paid preparers may benefit as a result and some taxpayers who previously self prepared their own returns may switch to a licensed or registered preparer because of additional assurance of quality service. 
On the other hand, if the licensing requirements cause some preparers to no longer offer services, prices may rise and some taxpayers may switch to self preparation. The California and Oregon paid preparer regulation programs provide reference points for national policymakers when considering a national paid preparer regulatory regime. In both cases, program costs are driven by the scope of the program. As with the differences we identified in California and Oregon, a more extensive national program will likely cost more to administer than a less extensive one. An additional point of comparison for policymakers considering a potential national paid preparer program is IRS’s enrolled agent program. Enrolled agents are paid preparers who are permitted to represent their clients in matters before IRS. Enrolled agents have to either pass a 3-part examination covering individual income taxes, business taxes and representation, and practices and procedures, or have specific IRS experience. During the period May 2007 through April 2008, the overall passing rate for the three parts of the examination was 48 percent. Once enrolled, agents also have to meet continuing education requirements and pay a $125 registration fee every 3 years. One area in which the enrolled agent program parallels the two state programs we studied is that the examination is handled through a contract that is of no direct cost to the government. A private company developed the tests and administers them at sites around the country, and it is compensated entirely through fees of about $100 that test takers pay for each part of the 3-part examination. Most of the test-taking fee is retained by the contractor, but $11 is remitted to IRS. Applicants are also required to allow IRS to conduct a background check. IRS officials in OPR said that the more a national program is expected to accomplish, the more expensive it will likely be to design, implement, and administer. 
Enforcement is a key consideration, as even the fairly modest enforcement efforts in the two states we reviewed took up 19 percent of total administrative costs in Oregon and 23 percent in California. IRS officials said that more extensive enforcement nationwide could be very costly. IRS officials said they have not developed specific costs for a national regime, in part because they are uncertain which of the many potential elements the program would include. The California and Oregon regulatory regimes point to the feasibility of a nationwide regulatory regime involving paid preparer education, registration, and, as in Oregon’s case, testing. Both states have enacted registration and other requirements while funding the administration of their programs through relatively modest fees paid by paid preparers, similar to the way that IRS sees to the testing of enrolled agents. A key benefit from the Oregon approach is the apparent rigor of its qualifying examinations. Just under half of the people who take the Oregon LTP examination fail to pass. These people are not legally preparing tax returns in Oregon today, at least not until they are able to pass the examination. Paid preparers with an equivalent lack of demonstrated ability may well be working as paid preparers in other states. Available data do not conclusively support or refute the idea that adopting some or all of the California or Oregon program elements at the national level would improve the accuracy of paid prepared returns or reduce the tax gap. However, the more stringent requirements of the Oregon regime along with our modeling results suggest that an Oregon-style approach to paid preparer regulation may be beneficial. The higher level of accuracy found on Oregon returns meant $390 million more in income taxes paid in Oregon than would have been paid if Oregon returns were as accurate as returns everywhere else. 
The cost of the Oregon program is quite small in comparison: about $490,000 per year in administrative expenses and an estimated total of about $6 million after including the time and expense associated with paid preparers meeting their education and testing requirements. If only a small share of the increased revenue is attributable to the Oregon regulatory regime, it would compare favorably to IRS’s overall efforts to increase reporting accuracy. With over half of individual taxpayers using paid preparers, it may be possible to make meaningful progress towards narrowing the tax gap by requiring all paid preparers to demonstrate competence before being allowed to prepare other people’s tax returns. However, the extent, if any, to which the Oregon regulatory regime improves federal tax return accuracy is uncertain; if a similar regulatory regime is adopted at the federal level, its effect on tax return accuracy should be assessed. Because IRS has resumed periodic studies of tax return accuracy, such a study could compare the accuracy of returns before and after implementation of a federal regime. If Congress judges that the Oregon paid preparer regulatory regime is likely to account for at least a modest portion of the higher accuracy of Oregon federal tax returns and could be implemented nationwide at a favorable cost compared to the potential benefits of improved accuracy, it should consider adopting a similar regime nationwide. In light of the uncertainty about the extent to which Oregon’s regime improves tax return accuracy, if Congress enacts national paid preparer legislation, it should also require IRS to evaluate its effectiveness. In a letter commenting on a draft of this report dated August 1, 2008, the Commissioner of Internal Revenue noted the important role that paid preparers play in supporting a fair, efficient, and effective system of tax administration. 
His letter also notes IRS’s strategy of working with paid preparers and curbing abuses by unscrupulous preparers. IRS also provided technical comments which we incorporated. The Commissioner’s letter is included in appendix II. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to answer the following questions: (1) How do IRS, California, Oregon, and other states regulate paid preparers? (2) Using available IRS data, how does the accuracy of federal tax returns in California and Oregon compare to that of returns in the rest of the country, after accounting for other factors that might influence accuracy? (3) What are the state-level costs and benefits of the paid preparer programs in California and Oregon and what insights do they provide for possible benefits if Congress were to enact national paid preparer registration or licensing requirements? To answer the first and third objectives we conducted a literature review of both the California and Oregon paid preparer programs, including a review of applicable laws and budget documents. 
We also interviewed state program administrators from the California Tax Education Council and the Oregon Board of Tax Practitioners (OBTP); officials from the California Franchise Tax Board and the Oregon Department of Revenue; and leaders in each state’s paid preparer community, and reviewed documents provided to us by them. At the federal level, we reviewed appropriate legislation concerning the regulation of paid preparers, interviewed IRS officials, primarily from the Office of Professional Responsibility, and reviewed documents related to the enrolled agent program. We also interviewed and obtained data from an official from Prometric, the company IRS contracted with to develop and administer the enrolled agent examinations. We interviewed the National Taxpayer Advocate and members of her staff concerning her prior recommendations to regulate paid preparers. We also met with a representative from the National Association of Enrolled Agents to understand their perspective on a more expansive national regulatory regime. Finally, we conducted a literature review of professional occupational regulation to understand the potential effects of occupational regulation on the paid preparer profession. In identifying nonfederal paid preparer regulation programs, we limited our review to state governments and requirements concerning qualification, registration, or licensing of paid preparers and we did not consider possible county or city regulations, or laws dealing with paid tax return preparer conduct. For the discussion of costs and benefits from the Oregon program in the third objective, we also used information from the OBTP about program costs and the number of new and returning licensees in 2007. We obtained information from education providers about the fees that they charge for basic and continuing education. We also used the U.S. 
Bureau of Labor Statistics national average hourly wage for paid tax return preparers ($16.78 in 2007) to value the time spent obtaining the education. Using this information, we developed an estimate of the total cost of the Oregon program. In considering which costs to include, we included higher-end estimates where possible to ensure that our estimate of the total cost of the Oregon program was conservative, that is, more likely to overstate than to understate costs. For example, we did not adjust for the fact that many Oregon licensees are employed by a national tax preparation chain that requires its paid preparers to receive initial and continuing education, so they would be obtaining that education regardless of the Oregon laws. To answer the second objective, we analyzed data from IRS’s National Research Program (NRP). The NRP contains detailed tax and audit data from approximately 47,000 randomly selected tax year 2001 returns, including extensive compliance data such as line-by-line estimates of accuracy. Unlike other compliance-related data sets, NRP data are generalizable to the population of individual taxpayers throughout the U.S. While the NRP was not designed specifically for state-level analysis, we worked with IRS’s NRP officials to agree on the types of analysis that the data would support and which variables could be used. Our analysis comprised four main steps, each of which is explained in more detail below. We first examined the odds that returns from different locations and using different preparation types were accurate. Next, we considered the relative likelihood that a return was accurate, prior to controlling for other factors.
Additionally, recognizing that Oregon and California differ from the rest of the country in terms of factors potentially related to a return’s accuracy, we developed multivariate statistical models to assess whether returns from these states were more or less likely than returns from other states to require liability changes of $100 or more in absolute value after controlling for other factors. We also assessed differences in the accuracy of self-prepared tax returns. Finally, we estimated potential cost savings using multivariate regression analysis to assess the size of average tax liability changes for Oregon or California returns relative to the returns in the rest of the United States, controlling for other factors. In creating our statistical models, we examined a variety of variables on the basis of previous research, our reports, and recommendations from NRP personnel. Our final model included measures of the complexity of the return, including whether it was for a sole proprietor or claimed the Earned Income Credit (EIC). We also included the examination class of the return, taxpayer adjusted gross income in quartiles, whether the return was e-filed, filing status, and a proxy for a state’s aggregate level of English proficiency. All models were calculated using sampling weights and robust estimation to account for differential variation among returns in distinct sampling strata. Table 3 illustrates differences in likelihood that returns from different locations and using different preparation types were accurate. Column A of table 3 shows that, prior to controlling for other factors, 54 percent of California returns and 71 percent of Oregon returns were accurate compared to 64 percent of returns in the rest of the United States. On average, 58 percent of paid preparer returns were accurate, compared to 70 percent of self-prepared returns. The lower half of table 3 illustrates the combined effect of location and preparation status. 
Prior to controlling for other factors, 49 percent of California paid preparer returns and 67 percent of Oregon paid preparer returns were accurate, compared to 59 percent of paid preparer returns in the rest of the country. Similarly, without controlling for other factors, 63 percent of California self-prepared returns and 75 percent of Oregon self-prepared returns were accurate, compared to 71 percent of self-prepared returns in the rest of the country. The odds within each category, shown in column C, compare the proportion of returns that were accurate to the proportion of returns that were not accurate. For the next step, we used odds ratios to compare the relative likelihood that returns from different locations or of different preparation types were accurate. The unadjusted odds ratio in column D compares the odds of return accuracy in each specific subgroup to a reference group, prior to controlling for other factors. An odds ratio of 1 indicates that, on average, returns for the two groups have the same odds of being accurate, while odds ratios above 1 indicate a higher likelihood of accuracy and odds ratios below 1 indicate a lower likelihood of accuracy. Column D of table 3 illustrates that, prior to controlling for other factors, California returns on average had lower odds of accuracy than returns in the rest of the country, by a factor of .66 (34 percent lower). Conversely, Oregon returns on average had higher odds of accuracy than the rest of the country, by a factor of 1.37 (37 percent higher), before we account for other factors that might influence accuracy. This pattern holds when we compare returns using different preparation methods to similarly prepared returns.
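The odds and unadjusted odds-ratio arithmetic described above reduces to simple calculations. A minimal sketch in Python, using only the accuracy rates quoted from table 3 in this section:

```python
# Odds compare the share of accurate returns to the share of inaccurate
# returns; an odds ratio compares a subgroup's odds to a reference
# group's odds. The rates below are the unadjusted figures quoted from
# table 3 of this report.
def odds(p):
    return p / (1.0 - p)

def odds_ratio(p_group, p_reference):
    return odds(p_group) / odds(p_reference)

# All returns: California 54%, Oregon 71%, rest of the country 64%.
print(round(odds_ratio(0.54, 0.64), 2))  # 0.66, i.e. 34 percent lower odds
print(round(odds_ratio(0.71, 0.64), 2))  # 1.38 rounded; reported as a factor of 1.37

# Paid preparer returns only: California 49%, Oregon 67%, rest 59%.
print(round(odds_ratio(0.49, 0.59), 2))  # 0.67, i.e. roughly 33 percent lower
print(round(odds_ratio(0.67, 0.59), 2))  # 1.41, i.e. 41 percent higher
```

With a single grouping factor, these unadjusted ratios match column D of table 3; the adjusted ratios in column E instead come from the multivariate models, which hold other return characteristics constant.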
For example, California paid preparer returns have odds of accuracy approximately 33 percent lower than paid preparer returns in the rest of the country, and Oregon paid preparer returns have odds that are 41 percent higher than similarly prepared returns in the rest of the country, before controlling for other factors. These unadjusted odds do not account for other factors that might differentiate returns in Oregon and California from those in the rest of the country. However, descriptive data reveal that the characteristics of returns filed in California and Oregon differ from the characteristics of returns filed in the U.S. as a whole. For example, a greater proportion of Oregon and California residents file sole proprietor returns than residents of the U.S. as a whole. To control for potential differences that might influence the likelihood of filing an accurate return, we used multivariate logistic regression. These models enabled us to compare the adjusted odds of accuracy for returns from Oregon or California with returns in the rest of the country, holding constant the effect of other factors that could affect accuracy. Column E in the upper half of table 3 shows that the odds of accuracy for an average Oregon return were still higher when compared to the rest of the country, and the odds of accuracy for a California return were still lower, after controlling for other factors. Additionally, paid preparer returns, on average, had lower odds of accuracy than self-prepared returns, controlling for other factors including location. As we noted previously, not all mistakes on paid-prepared tax returns are the fault of the paid preparer. The results for all returns in the upper half of table 3 treat location and preparation type as distinct factors, without considering potential interaction between location and preparation type.
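The adjusted odds ratios come from logistic regression models fit with sampling weights. The sketch below is not GAO's model; it is a toy weighted logistic fit (a single Oregon indicator, with illustrative cell counts standing in for sampling weights, and no robust variance estimation) showing how a fitted coefficient translates into an odds ratio:

```python
import math

# Toy weighted logistic regression: accuracy ~ intercept + Oregon indicator.
# Rows are (oregon, accurate, weight). The weights are illustrative cell
# counts matching the unadjusted rates in table 3 (71% accurate in Oregon,
# 64% elsewhere); the actual NRP analysis used survey sampling weights and
# many more covariates.
data = [
    (0, 1, 64.0), (0, 0, 36.0),  # rest of the country
    (1, 1, 71.0), (1, 0, 29.0),  # Oregon
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weighted gradient ascent on the log-likelihood.
b0, b1 = 0.0, 0.0
lr = 0.005
for _ in range(20000):
    g0 = sum(w * (y - sigmoid(b0 + b1 * x)) for x, y, w in data)
    g1 = sum(w * x * (y - sigmoid(b0 + b1 * x)) for x, y, w in data)
    b0 += lr * g0
    b1 += lr * g1

# exp(b1) is the fitted odds ratio for the Oregon indicator; with a single
# binary predictor it recovers the unadjusted ratio.
print(round(math.exp(b1), 2))  # prints 1.38
```

In the full models, additional coefficients for complexity, income, filing status, and the other controls described above shift exp(b1) toward the adjusted ratios reported in column E.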
To ensure that these estimates did not mask compliance differences between paid preparer and self-prepared returns and to assess the potential impact of regulation on the population directly affected by the regime (paid preparers), we also examined self-prepared and paid preparer returns separately (see the lower half of table 3). These models reveal pronounced effects among paid preparers, after controlling for other factors. Among paid preparer returns, Oregon returns had odds of accuracy 72 percent higher, and California returns had odds of accuracy 22 percent lower, than comparable paid preparer returns in the rest of the country. While self-prepared returns in California had lower odds of accuracy than self-prepared returns in the rest of the country, and Oregon returns had higher odds of accuracy after controlling for other factors, these results were not statistically significant at the 95 percent level. Our estimates of the impact of location on the likelihood that a return was accurate had fairly wide confidence intervals. One reason for this is our inability to incorporate the full range of individual or state-level factors that might influence the likelihood of compliance, such as whether a paid-prepared return was prepared by an attorney or CPA. Additionally, the NRP sample was designed for purposes other than to compare states, which resulted in wider confidence bounds than would a sample designed specifically for state-level estimates. Our analyses identified several factors other than location that influenced the likelihood that a return would require less than $100 in liability changes, both among returns in general and the subpopulation of paid preparer returns. For example, the odds that a return claiming the EIC was accurate were less than half those of returns that did not claim the EIC in all models.
Similarly, sole proprietor returns (those individual returns that had an attached Schedule C, Profit or Loss from Business) had lower odds of being accurate than other returns. Additionally, returns with a filing status of “married, filing separately” were significantly less likely to be accurate than returns in any other filing status. Overall, 1040 forms with total positive incomes of less than $100,000 had higher odds of accuracy compared to form 1040 returns with total positive income of $100,000 or above. Conversely, among forms with total positive income of less than $100,000, forms 1040F, Profit or Loss from Farming, and 1040C, U.S. Departing Alien Income Tax Return, were less likely to be accurate. In general, e-filed returns had slightly lower odds of accuracy than paper returns. In addition to our main logistic regression model, we conducted a series of alternative analyses to examine the impact of location and paid preparer status with additional control factors and alternative dependent variables, and found results generally consistent with the models presented in table 3. These included several models with and without various aggregate state factors (such as per capita income and whether a state had an income tax), with alternative measures of complexity (including one based on the number of schedules filed), and with a dummy variable for returns that were software generated but not e-filed. Finally, we examined alternative dependent variables, including tax liability changes prior to EIC and additional child credits, and the net sum of dollar values of line item adjustments for each return. These additional analyses give us confidence that our results are robust to a variety of model specifications and different definitions of accuracy. To identify potential cost savings from an Oregon-style regulatory regime, we used multivariate linear analysis to assess the size of average tax liability changes among all returns, controlling for other factors.
We conducted diagnostic analysis to identify and exclude outliers and potentially high-leverage cases—individual cases that have the potential to disproportionately affect our estimate when compared to other cases. Our estimate of savings is thus conservative when compared to an analysis that includes all cases, as it does not incorporate the savings generated by a limited number of cases with relatively large liability changes. After controlling for the other factors described, we found that the average return in Oregon required significantly lower changes in tax liability than returns in California or the rest of the country. The average Oregon return required tax liability increases that were approximately $250 lower than comparable returns in the rest of the country. In contrast, the average California return required tax liability increases that were approximately $90 higher than returns in the rest of the country, controlling for other characteristics. We conducted this performance audit from September 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact person named above, David Lewis, Assistant Director; Crystal Bernard; Amy Bowser; James Cook; John Mingus; Ed Nannenhorn; Karen O’Conor; and Anna Maria Ortiz made key contributions to this report.

Millions of taxpayers use paid tax return preparers and many of these paid preparers are not subject to any qualification requirements. Paid preparers in California and Oregon are exceptions in that these states have set paid preparer qualification standards.
Additionally, two bills before Congress would require national paid preparer regulations. To help Congress better understand the potential costs and revenue effects of regulating paid preparers, GAO was asked to study (1) how IRS, California, Oregon, and other states regulate paid preparers, (2) how the accuracy of federal tax returns from California and Oregon compares to that of other returns, and (3) state-level costs and benefits of the California and Oregon programs and insights they provide for a possible national program. GAO analyzed IRS research data on tax return accuracy; interviewed IRS officials, state administrators, and preparer community representatives; and reviewed relevant documents. No federal registration, education, or testing requirements apply to all paid preparers before they can prepare tax returns. California and Oregon have requirements that preparers must meet before preparing returns in those states. California paid preparers who are not attorneys, certified public accountants, or enrolled agents (or employed by one of these types of tax practitioners) must complete an education requirement, obtain a bond, pay a fee, and register. In following years, they must complete continuing education requirements and renew their registration. Oregon has similar, but more stringent, requirements. Oregon has a two-tiered licensing system, with an education requirement and examination for Licensed Tax Preparers and work experience and a second examination for Licensed Tax Consultants. Oregon exempts certified public accountants and their employees, as well as attorneys, from these requirements. Oregon requires enrolled agents to take a shorter version of the consultant examination. Fifty-four percent of Oregon applicants passed the state's basic examination. Recently, Maryland enacted legislation to regulate paid preparers and at least three other states have similar pending legislation.
According to GAO's analysis of the Internal Revenue Service's (IRS) tax year 2001 National Research Program data, Oregon returns were more likely to be accurate while California returns were less likely to be accurate compared to the rest of the country after controlling for other factors likely to affect accuracy. In dollar terms, the average Oregon return required approximately $250 less of a change in tax liability than the average return in the rest of the country. For Oregon's 1.56 million individual tax filers, this equates to over $390 million more in federal income taxes paid in Oregon than would have been paid if the returns were as accurate as similar returns in the rest of the country. These results are consistent with, but do not prove, the conclusion that Oregon's regulations lead to some increase in tax return accuracy. GAO's analysis could not account for all factors that might affect the accuracy of these tax returns. Because some states without preparer regulation also had tax returns that, on average, were more accurate than the national average, some portion of the increased accuracy of Oregon returns likely is due to other factors. The California and Oregon programs' costs varied with differences in the programs' scope. Both programs' administrative costs are funded primarily from program fees. California's costs were about $29 per preparer and Oregon's about $123. GAO estimates that the total annual cost of the ongoing Oregon program, including state costs and the cost to preparers for their time and expense in acquiring required education, likely is about $6 million. Officials in both states believe program benefits, such as reducing the number of incompetent preparers, outweigh costs, although neither state had data on benefits. IRS officials said that a national program's costs likely would depend on the program's objectives and features.
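The aggregate figure in the summary follows directly from the per-return estimate. A sketch of the arithmetic, using the rounded figures quoted in this report:

```python
# Approximately $250 less in required tax liability change per Oregon
# return, multiplied across Oregon's roughly 1.56 million individual
# filers. Both inputs are rounded figures from this report, so the
# product approximates the "over $390 million" cited in the summary.
per_return_difference_dollars = 250
oregon_individual_filers = 1_560_000

aggregate_dollars = per_return_difference_dollars * oregon_individual_filers
print(f"${aggregate_dollars / 1_000_000:.0f} million")  # prints "$390 million"
```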
Congress passed the Occupational Safety and Health (OSH) Act in 1970 to ensure safe and healthy working conditions for every worker in the nation. OSHA has responsibility for enforcing the provisions of the act, including overseeing most worksites, with the exception of some small employers in low-hazard industries and small farming operations. OSHA has direct enforcement responsibility for about half the states; the remainder have been granted authority for their own enforcement. At present, 22 states have been approved by OSHA to operate their own programs covering all worksites; 4 are approved for covering public sector employee worksites only; and OSHA directly oversees all worksites in the remaining states. OSHA uses two approaches to ensure compliance with federal safety and health laws and regulations—enforcement and voluntary compliance. Enforcement, which represents the preponderance of agency activity, is carried out primarily by using compliance officers to inspect employer worksites. Worksites and employers whose conditions fail to meet federal safety and health standards face sanctions, such as paying penalties for violations of health and safety standards. In this enforcement capacity, OSHA targets employers for inspection using injury and illness rates for industries and specific worksites. For example, it has targeted the construction industry for inspections because of high injury and illness rates. OSHA also conducts inspections when employers report fatalities or serious injuries and when workers file complaints about serious safety and health hazards. The voluntary compliance approach, in contrast, invites employers to collaborate with the agency and uses a variety of incentives to encourage them to reduce hazards and institute practices that will foster safer and healthier working conditions. Such incentives include free consultations, exemption from routine inspections, and recognition for exemplary safety and health systems. 
To participate in voluntary compliance programs, employers must also meet certain requirements, which often include the adoption of some form of safety and health management program—a program that takes a systems approach to preventing and controlling workplace hazards. OSHA has four basic requirements for a safety and health management program:

(1) Management Leadership and Employee Involvement—Top-level management must be committed to carrying out written comprehensive safety and health programs. Employees must be actively involved in the execution of the program.

(2) Worksite Analysis—Employers must have a thorough understanding of all hazardous situations to which employees may be exposed, as well as the ability to recognize and correct these hazards.

(3) Hazard Prevention and Controls—The program must have clear procedures for preventing and controlling hazards identified through worksite analysis, such as a hazard tracking system and a written system for monitoring and maintaining workplace equipment.

(4) Safety and Health Training—Training is necessary to reinforce and complement management’s commitment to safety and health and to ensure that all employees understand how to avoid exposure to hazards.

To keep pace with the increasing demands on OSHA staff to help administer and promote voluntary compliance programs, in 2001 OSHA created the new position of “compliance assistance specialist.” According to OSHA officials, funding for this position was authorized in fiscal year 2002. Compliance assistance specialists provide general information about OSHA standards and promote voluntary compliance programs, as well as OSHA’s compliance assistance resources, such as training and Web site resources. They also respond to requests for help from a variety of groups and participate in numerous seminars, workshops, and speaking events. Most specialists are former OSHA compliance officers who conducted inspections of employers’ worksites.
In their new positions, the specialists are not involved in OSHA’s enforcement activities. There is one compliance assistance specialist position in each OSHA area office in states under federal jurisdiction, with a total of 65 in fiscal year 2003. OSHA’s strategic management plan identifies particular safety and health problems and industries on which to focus the agency’s efforts. In its current 5-year plan for years 2003 through 2008, one of the agency’s three goals is to promote a safety and health culture through compliance assistance, cooperative programs, and strong leadership. This goal includes increasing the number of participants in voluntary compliance programs and improving the programs’ effectiveness. Another goal is to reduce occupational hazards by, for example, reducing the rate of workplace injuries and illnesses by 5 percent annually. OSHA’s third goal focuses on strengthening the agency’s capabilities and infrastructure, including improving the agency’s access to accurate, timely data, and enhancing its measures for assessing the effectiveness of its programs. OSHA’s voluntary compliance strategies—four programs plus compliance assistance activities such as education and outreach—have expanded the agency’s reach to a growing number of employers. The agency’s four programs reach a range of employers and use a mix of strategies. They target both exemplary worksites and hazardous ones, and they influence employers directly by implementing safety and health programs and indirectly through collaboration with trade and professional associations. Some programs offer employers incentives to participate, such as a reduced chance of on-site inspection or special recognition for safety and health programs. Two of the programs were officially introduced in the last decade, adding to the number of participants engaged in voluntary compliance.
OSHA plans to dramatically increase the number of employers and organizations participating in voluntary compliance programs. However, OSHA officials expressed concerns that such plans for expansion could tax the agency’s limited resources. OSHA’s voluntary compliance programs have been implemented incrementally to reach different employers and worksites in various ways. They represent a mix of strategies to help improve workplace conditions (see table 1). In addition to these formal programs, OSHA conducts other compliance assistance activities, such as outreach and training activities, to aid employers in complying with OSHA standards and to educate employers on what constitutes a safe and healthy work environment. The State Consultation Program, begun in 1975, operates in every state. Its primary focus is to help small businesses in high-hazard industries comply with OSHA standards and address their methods for dealing with worksite safety. The agency funds all state governments to carry out the program. In fiscal year 2003, OSHA provided $53 million to state governments. States provide free consultation visits at employers’ requests to identify safety and health hazards and discuss techniques for their abatement. In fiscal year 2003, state agents conducted about 28,900 consultation visits. The names of employers receiving consultation visits are kept confidential and separate from OSHA enforcement officials. Depending on an employer’s request, a state consultant may conduct a full safety and health hazard assessment of all working conditions, equipment, and processes at the worksite, or he or she may focus solely on one particular hazard or work process. Employers receive a detailed written report of the consultation findings and agree upon a time frame for eliminating the hazards.
Small employers receiving consultation visits may qualify for recognition in the Safety and Health Achievement Recognition Program (SHARP), which exempts them from general, scheduled inspections for 1 or 2 years and recognizes them as models of good safety and health practices. Participants in SHARP must have safety and health management programs in place to prevent and control occupational hazards. In fiscal year 2003, there were 699 SHARP worksites in both federal OSHA states and state-plan states. Although SHARP worksites are exempt from scheduled inspections, they are still subject to inspections resulting from employee complaints and other serious safety and health problems, such as fatalities. The Voluntary Protection Programs, established in 1982, are designed to recognize single worksites with exemplary safety and health programs. As of September 30, 2003, there were a total of 1,024 VPP worksites in both federal OSHA and state-plan states. The manufacturing and chemical industries comprise 21 percent and 20 percent of these recognized worksites, respectively (see fig. 2). The majority of VPP worksites in federal OSHA states have more than 200 employees (see fig. 3). While the VPP does not specifically target large businesses, they tend to be the businesses that attain VPP status. According to an OSHA official, this trend is due to the fact that large businesses tend to have staff and expertise available for a comprehensive safety and health program. To participate in VPP, employers must have worksites that exceed OSHA standards and they must commit to a process of continual improvement. Employers achieving all VPP requirements are designated as Star VPP worksites, which signifies the highest level of workplace safety and health. As of September 30, 2003, 92 percent of all VPP worksites in federal OSHA states have Star designation.
To be eligible for this exemplary status, employers must meet a number of specific requirements for their worksite: (1) worksite injury and illness rates must be below the average rate for their industry sector for at least 1 of the 3 most recent years; (2) a safety and health program must have been implemented and maintained for at least 1 year; and (3) worksites must undergo and pass a comprehensive review by OSHA personnel, including an on-site review of the facility and interviews with management officials and employees. In exchange for OSHA recognition, VPP worksites are exempt from scheduled enforcement inspections. However, VPP worksites are still subject to inspections resulting from employee complaints and other significant events, such as fatalities. To attract additional VPP worksites and expand the overall program, OSHA has recently announced three new VPP initiatives:

VPP Challenge: a program that will serve as a roadmap to help employers, particularly small employers, achieve VPP status regardless of their current level of safety and health.

VPP Corporate: a program that offers a more streamlined application process for corporations that already have worksites in VPP and want to bring additional worksites into the program.

VPP Construction: a program that builds on information learned at previous VPP demonstration worksites and is designed to make it easier for construction worksites, particularly temporary worksites, to apply for and attain VPP status by, for example, reducing the amount of time that safety and health improvements must be in place.

The Strategic Partnership Program, formalized in 1998, is designed to help groups of employers and employees working at multiple worksites in high-hazard workplaces to address a specific safety and health problem. As of September 2003, 66 percent of partnerships are construction-related.
A partnership agreement sets goals, such as the reduction of injuries, specifies a plan for achieving them, and provides procedures for verifying their completion. Some partnership agreements may also require the development of a safety and health management program and the involvement of employees in carrying out the partnership agreement. The program does not offer exemption from enforcement inspections but does offer other incentives. These include limiting scheduled inspections to only the most serious prevailing hazards, penalty reductions for any hazards cited during an inspection, and priority consideration for the State Consultation Program. Partnerships can be developed on an area, regional, or national basis. When a national partnership is established, it must be implemented in all area and regional OSHA offices where a partner has a worksite. For example, the Associated Builders and Contractors created a national Strategic Partnership with OSHA that was implemented at the local level between the association’s chapters and area and regional OSHA offices. As of September 2003, there were 205 operating Strategic Partnerships in federal OSHA states, about 87 percent of which represented industries or areas of emphasis in OSHA’s Strategic Management Plan (see fig. 6). OSHA officials attributed the fact that so many partnerships are construction-related to the national partnership with the Associated Builders and Contractors. This partnership provided a template from which other construction partnerships were developed. Additionally, OSHA officials informed us that, because it was originally difficult for construction worksites to enter into VPP, employers in the industry who wanted to enter into a voluntary compliance program with OSHA had tended to form a strategic partnership. While a few strategic partnerships are very large, most participating worksites are small businesses with 50 or fewer employees.
The Alliance Program targets trade, professional, and other types of organizations to work collaboratively with OSHA to promote workplace safety and health issues. Alliances can be formed through national or regional offices. As of September 2003, approximately 51 percent of OSHA’s national alliances were with trade associations and 38 percent were with professional associations. The Alliance Program, which included 100 alliances as of September 2003, is one of OSHA’s newest and least structured voluntary compliance programs. In contrast to the other three voluntary compliance programs which typically include safety and health programs at specific employer worksites, alliance agreements focus on goals such as training, outreach, and increasing awareness of workplace safety and health issues. To date, alliances have participated in a variety of activities, such as (1) creating electronic informational tools that have been posted on the OSHA Web site, (2) developing industry- specific voluntary guidelines and training materials, and (3) improving OSHA’s training courses. Alliance members are not exempt from OSHA inspections and do not receive any enforcement-related incentives for joining an alliance. Instead, OSHA officials informed us that trade and professional associations have used the Alliance Program as a proactive method of addressing existing and emerging workplace safety and health issues, such as ergonomic issues. As of September 2003, 41 percent of OSHA’s national alliances were ergonomic-related. See figure 7 for an example of an ergonomic-related alliance. In addition to its voluntary compliance programs, OSHA conducts numerous training and outreach activities on a variety of safety and health issues. These activities augment both the voluntary compliance programs and OSHA’s enforcement program, according to OSHA officials. 
For example, outreach activities can be conducted in relation to inspections, in an attempt to help employers ready themselves for an inspection. The OSHA Training Institute offers 80 courses on a range of safety and health issues, most of which are available to the public as well as to OSHA employees for training. In fiscal year 2003, however, the majority of its almost 5,000 students were OSHA employees. In addition to the Training Institute, OSHA has 33 Education Centers, nonprofit organizations (mostly universities), which have agreements with OSHA to teach 16 of the most popular Training Institute courses. An agency official told us that using these Education Centers around the country has allowed OSHA to greatly expand the number of nonagency personnel who receive training in safety and health issues. In fiscal year 2003, these centers trained almost 16,000 students, approximately 98 percent of whom were non-OSHA personnel. Through a grant program, the agency also distributes some funds to nonprofit organizations to develop training or educational programs about safety and health issues of current emphasis in OSHA’s Strategic Management Plan. In fiscal year 2003, OSHA funded 67 such training grants totaling over $11 million. The agency provides outreach to employers and workers in a number of other ways, such as through newsletters, brochures, compact discs, speeches, and conferences. OSHA also mails materials on specific safety and health issues to target audiences. Regional officials we spoke with said that several OSHA staff are called upon to conduct outreach efforts because such work requires specialized skills and knowledge of standards. OSHA also works in cooperation with the U.S. Small Business Administration’s Small Business Development Centers. Additionally, OSHA recently revised its Web site, which provides informational tools and referrals on a variety of safety and health issues.
For example, OSHA has a Web service entitled “eTools,” which offers detailed graphics about specific worksite hazards, how to remedy them, and how OSHA regulations apply to worksites. While voluntary compliance strategies directly reach relatively few of the nation’s employers, participation has grown substantially over the last decade as the programs have been built up. For example, the VPP increased from 122 worksites in 1993 to 1,024 worksites in 2003, an increase of 739 percent, and the Strategic Partnership Program grew from 39 partnerships in 1998 to 205 existing partnerships in 2003, a 426 percent increase. (See fig. 8.) OSHA plans to expand the number of voluntary compliance program participants and its compliance assistance activities and has established strategic goals for doing so. According to OSHA officials, the agency’s fiscal year 2004 goals include the addition of 45 new VPP worksites and 50 VPP Challenge worksites, as well as 50 new strategic partnerships and 75 new alliances. Furthermore, OSHA officials have set a target goal of increasing the number of VPP worksites eight-fold—from 1,000 worksites to 8,000 worksites. Although it is difficult to quantify, the voluntary compliance programs appear to have extended the agency’s influence. For example, through the agency’s enforcement program, OSHA and its state partners conducted almost 96,000 inspections in 2002—reaching at most 96,000 worksites, and probably fewer, because some worksites are inspected more than once. The VPP and Strategic Partnership Program in 2003 directly reached some 6,000 employers, who may not otherwise have been selected for OSHA inspections. These two programs, together with the State Consultation Program, covered approximately 2.3 million of the more than 100 million employees under OSHA’s oversight.
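The growth figures cited above follow from simple percentage arithmetic. The sketch below (illustrative only; the worksite and partnership counts are the ones reported in this section) shows how the 739 percent and 426 percent increases are derived.

```python
# Percentage-increase arithmetic behind the participation figures cited above.
# The counts come from this section; the helper function is purely illustrative.
def percent_increase(old: int, new: int) -> int:
    """Percentage increase from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

vpp_growth = percent_increase(122, 1024)        # VPP worksites, 1993 to 2003
partnership_growth = percent_increase(39, 205)  # strategic partnerships, 1998 to 2003
print(vpp_growth, partnership_growth)           # 739 426
```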
Additionally, although OSHA may not have direct contact with an employer as part of its Alliance Program or training and outreach activities, employers are reached indirectly through the dissemination of safety and health information, which, according to our discussions with Alliance participants, has helped employees learn about workplace safety and health issues. OSHA’s voluntary compliance strategies consume a significant and growing portion of the agency’s limited resources. In fiscal year 2003, OSHA executed its numerous programs under a $450 million budget. The agency spent $126 million on its voluntary compliance programs and compliance assistance activities—approximately 28 percent of its total budget—and about $254 million, about 56 percent of its budget, on enforcement activities. The percentage of resources dedicated to voluntary compliance programs and compliance assistance activities has increased by approximately 8 percentage points since 1996, when these programs represented about 20 percent of the agency’s budget. During this same period, the proportion of resources OSHA dedicated to its enforcement activities fell by about 6 percentage points, from about 63 percent to about 56 percent of the agency’s total budget, although the total funds devoted to enforcement have remained fairly constant because of increases in OSHA’s total budget over this period. In addition, enforcement efforts, as measured by the number of inspections, have remained constant or increased slightly each year, according to agency officials. While it cannot be determined that resources were directly redistributed from enforcement to compliance assistance activities, funding for OSHA’s other programs remained relatively stable, with only small increases or decreases in funding since 1996 (see fig. 9).
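The budget shares discussed above follow directly from the fiscal year 2003 dollar figures. A minimal sketch of that arithmetic, using the figures as reported in this section:

```python
# Budget-share arithmetic for fiscal year 2003, using the dollar figures
# cited above (in millions of dollars); percentages are rounded as in the text.
total = 450.0        # total OSHA budget
voluntary = 126.0    # voluntary compliance programs and compliance assistance
enforcement = 254.0  # enforcement activities

voluntary_share = round(voluntary / total * 100)      # approximately 28 percent
enforcement_share = round(enforcement / total * 100)  # approximately 56 percent
print(voluntary_share, enforcement_share)             # 28 56
```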
Notwithstanding their voluntary nature, all of OSHA’s voluntary compliance programs require agency oversight to ensure that participants comply with requirements or agreements, and this growing administrative responsibility requires concerted agency resources. For example: Certifying a worksite as a VPP worksite requires a comprehensive on-site review that usually lasts 1 week and involves approximately three to five OSHA personnel. In contrast to the SHARP on-site review, VPP employers use their own resources to implement their safety and health programs, and the program must be functional before OSHA personnel come to the worksite to conduct the on-site review for Star approval. Additionally, OSHA reviews participants’ yearly self-evaluations to ensure that injury and illness rates have not increased beyond program requirements and to assess any changes that have been made to the safety and health program. Furthermore, VPP worksites must be recertified every 1 to 5 years, depending on their VPP designation. A VPP worksite recertification involves an additional on-site review by OSHA personnel, similar in duration and comprehensiveness to the original on-site review. In the Strategic Partnership Program, OSHA conducts verification inspections for a percentage of partner worksites to ensure that partners are abiding by the partnership agreement. The Alliance Program involves quarterly meetings with Alliance members to ensure that progress is being made toward alliance goals. Additionally, OSHA training staff review all alliance training materials to ensure their consistency with OSHA standards. Furthermore, while the State Consultation Program is run by the states, OSHA largely funds the program, and its plans to expand SHARP will require additional agency resources and oversight on the part of state consultants.
State consultants must work closely with employers to help them improve and implement their safety and health programs because most small employers do not have the resources necessary to attain SHARP status on their own. The consultant also must conduct an on-site review of the worksite, lasting 1 or more days, to ensure that the employer has addressed all workplace hazards and properly implemented a safety and health program. Additionally, SHARP worksites are reevaluated every 1 to 2 years, depending on the amount of time a worksite has been in the program and the recommendation of the state consultant. These reevaluations require another on-site review of the worksite by the state consultants. Expansion of OSHA’s voluntary compliance programs as planned will further require such resources, particularly for the oversight of a much larger number of program participants. According to national and regional OSHA officials we spoke with, expanding the voluntary compliance programs to the intended levels will be difficult given OSHA’s current resources, and some expressed concern that too much expansion of some programs may compromise program quality. Of particular concern, they said, has been the agency’s continued focus on increasing the number of VPP worksites. Several regional officials—whose offices are responsible for conducting on-site reviews—said that increasing the number of VPP worksites would strain their resources because of the number of staff required to conduct reviews of new worksites and recertifications of existing worksites. To date, regional offices have been creative in their methods of handling the increasing number of participants in voluntary compliance programs. For instance, the offices have relied increasingly on the use of the Special Government Employee Program and the Mentoring Program, both a part of the VPP.
The Special Government Employee Program allows employees from VPP worksites, at their employers’ expense, to assist OSHA employees in conducting on-site reviews. OSHA uses the Mentoring Program to match VPP candidates with VPP employers, who assist the candidates in improving their safety and health programs and preparing for the on-site reviews. These two programs allow OSHA to leverage its resources by using employees at VPP worksites to assist OSHA in carrying out the responsibilities involved in operating the program, decreasing the number of OSHA personnel needed. While several regional OSHA officials said these strategies have allowed them to manage the increase in VPP applicants, they are unsure how many more applicants they can accommodate without obtaining additional resources. OSHA’s voluntary compliance strategies have increased the number of worksites the agency reaches, and participants and others have provided enthusiastic testimony regarding the strategies’ ability to foster better safety and health practices; however, the lack of comprehensive data on the outcomes of the programs has hindered our ability to assess their effectiveness. Employers we visited said their participation had reduced injury and illness rates, which in turn had lowered their workers’ compensation costs. These employers and many employees we interviewed also credited OSHA’s voluntary compliance programs with improving employee-management relationships and their relationships with the agency. However, although OSHA has begun to collect data on the impact of some of its voluntary compliance programs, it does not yet have the data needed to assess the effectiveness of these programs or to make decisions about how to allocate its resources among the programs. The employers and employees at the worksites we visited, OSHA officials, and researchers and occupational safety and health specialists identified many benefits of OSHA’s voluntary compliance programs.
The most commonly cited benefit of participating in OSHA’s voluntary compliance programs was the reduction in the number and rate of injuries and illnesses. All nine employers we visited reported that the number of injuries and illnesses at their worksites had declined since they began participating in the programs. For example, one VPP site in the paper industry reported that it typically had 12 to 14 accidents that resulted in injuries each year before working toward VPP approval, but that the worksite has reduced that number to 5 accidents or fewer in the last 3 years. Another participant, a partnership comprising eight nursing homes, reported in its annual evaluation that the injury and illness rate for its second year of participation had decreased 27 percent. OSHA, based on limited analysis of VPP sites’ annual injury and illness data, reported that participating employers that had effectively implemented workplace safety and health programs had significantly fewer injuries and illnesses—54 percent fewer—than comparable worksites in the same industries that had not implemented such programs. A second benefit of voluntary compliance programs is decreased costs to employers, primarily through reductions in workers’ compensation premiums. Employers at the sites we visited reported that they had seen significant decreases in their workers’ compensation costs. For example, a meat packaging facility we visited estimated workers’ compensation cost savings of about $200,000 during the period in which it had been involved with VPP. In addition to lowered workers’ compensation costs, employers commented that improvements in safety and health had reduced the productivity losses that result when employees miss work because of injuries and illnesses. Although OSHA provides information on its Web site on how reducing injuries, for example by implementing safe procedures, can save employers money, it does not include information on specific industries.
OSHA officials told us that, although the experiences of some companies in saving money through safety improvements could be helpful to other employers, some companies are reluctant to share their data on cost savings with OSHA. However, the agency is developing some of this information through its Alliance Program. For example, the objectives of one alliance with a health care company include developing and incorporating materials into business school curricula that communicate the business value and competitive advantages associated with implementing comprehensive safety and health programs in the workplace. According to employers and employees at worksites we visited, voluntary compliance programs also improved their relationships with OSHA and the relationships between management officials and employees. At every worksite we visited, representatives told us they were very comfortable interacting with OSHA. Some spoke of a change from fearing OSHA’s visits to seeing them as helpful. For example, management officials at a steel erector company commented that, before their partnership, management did not want to talk to OSHA and dreaded its visits, whereas, after participating in the partnership, they have a good relationship with OSHA staff. Several representatives at the worksites we visited also commented that they now regularly call OSHA for answers to safety problems. Some employees at the sites also commented that they have seen improved relationships with OSHA. For example, the union president at a VPP site said that, as a result of the close interaction with OSHA staff during the VPP approval process, he feels comfortable calling OSHA directly to discuss safety and health issues. Similarly, employees and employers at several worksites gave examples of how their participation in these programs resulted in improved relationships between management and employees.
One safety director for a union involved in a partnership said that after some workers were fired for not complying with safety rules, they came to the union looking for support, but because of the involvement of the union in the partnership, the union supported the disciplinary action. Both management and employees recounted how important working together was during the approval process and how those efforts have continued in order to maintain their participation in the programs, often through team meetings and safety committee meetings. Employers and employees at the workplaces we visited also reported a shift to a safety culture in which they all take responsibility for safety, thereby contributing to improved productivity, morale, and product quality. At all the sites we visited, employees spoke of being empowered to remind others to comply with safety requirements. Several described a shift in attitude from noncompliance to one in which good safety procedures, such as wearing appropriate personal protective equipment and inspecting equipment, were ingrained in daily activities. They also said that they felt good that management had made the additional investment in safety. In addition, management officials at several sites said that this increased attention to safety had benefited their firm in other ways. Some mentioned that others using their services reviewed the company’s safety records or training, and that the company’s recognition as an exemplary site gave them a competitive advantage. For example, management officials at one SHARP site whose workers construct facilities on their clients’ worksites said that the SHARP certification helped the company continue to get contracts for projects. At one VPP site, management representatives also told us that participation had brought an improved workplace ethic where employees felt management cared about them, lower absenteeism rates, and a more disciplined approach to work. 
In addition to the anticipated benefits of improving injury and illness rates and reducing employers’ costs, participants commented that VPP, SHARP, and Strategic Partnership Program sites played a role in influencing other employers to implement good safety and health practices. A key component of VPP is outreach to other firms, and representatives at all three VPP sites we visited spoke of mentoring others in their industries. For example, one site hosted a VPP Day to encourage others within its industry to participate in the program. Interestingly, one of the VPP sites we visited had itself been encouraged by other VPP sites to participate in the program. Participants in the Strategic Partnership Program and SHARP sites we visited also reached out to others within their industry, informing them of the value of good safety and health practices and encouraging their participation in OSHA’s voluntary compliance programs. Some specialists with whom we spoke commented on the value of this aspect of the programs, although one noted that the mentoring focus should be on improving employers’ safety and health practices, not on helping employers complete the program application paperwork. Employers participating in these programs also sometimes influenced other employers’ practices by requiring them to meet certain standards if they were working on the participating company’s premises or to qualify as one of the company’s subcontractors. In some cases, they also reported that other companies sought them out as suppliers and contractors because of their good safety records. OSHA officials also noted that participation in voluntary compliance programs could influence those companies’ suppliers and contractors to improve their safety. For example, they told us that many construction contractors now require their subcontractors to have insurance rates below a certain level—rates that are based on their injury and illness rates.
Several participants and specialists reported that the State Consultation Program, the Alliance Program, and OSHA’s outreach and training help inform small employers—who typically have less in-house expertise to address safety and health issues—about how to make safety and health improvements. The State Consultation Program, which is designed to provide guidance on specific problems or, more generally, on employers’ health and safety management programs, is targeted to small employers. The three sites we visited that utilized this program had initially sought consultations because they needed expert advice on safety and health practices that was not available from their own staff. According to several specialists, the Alliance Program also connects with small businesses by working through the trade associations in which they participate, because the alliances build on already existing relationships. In addition, the OSHA regional offices we visited had used several approaches to reach out to small employers, for example, a free forum where small contractors could learn the proper use of cranes and scaffolding. Similarly, one of OSHA’s area offices provided employers training courses at the local Small Business Development Center on OSHA’s requirements—including its record-keeping requirements—and on how good safety and health practices can save them money. Regional offices have also developed newsletters for employers in specific industries, such as a letter and accompanying compact disc on electrical hazards provided by one office to electrical contractors. Although we saw evidence of OSHA’s efforts to reach more small businesses, several specialists said OSHA should include more small businesses in voluntary compliance program activity. A representative from a national employers association commented that smaller employers fear OSHA because they do not know what to expect when the agency goes into a business, even if for compliance assistance activities.
Several specialists with whom we spoke noted that smaller businesses might not be aware of the voluntary compliance programs that are available. A representative from an insurance company who addresses risk management regularly commented that smaller worksites, particularly those that change locations frequently, such as sites in the construction and roofing industries, are more likely to have safety problems. OSHA currently lacks the data needed to fully assess the effectiveness of its voluntary compliance programs. Developing outcome measures is difficult, particularly when factors other than program participation can affect key indicators such as injury and illness rates. However, agencies are required to develop such measures, and it is especially important for OSHA, given its limited resources, to be able to evaluate the effectiveness of these programs. Currently, OSHA does not collect the complete, comparable data needed to measure the value of its programs, including their relative impact, resource use, and effect on the agency’s mission. In OSHA’s current strategic management plan, one of the agency’s three goals includes increasing the number of participants in voluntary compliance programs and improving the programs’ effectiveness. Another goal includes improving the agency’s access to accurate, timely data and enhancing its measures for assessing the effectiveness of its programs. However, OSHA has not yet developed a comprehensive strategic framework that articulates how the programs fit together in accomplishing the agency’s goals or how its resources should be allocated among the various programs. While OSHA or its state representatives ensure that voluntary program participants are complying with the programs’ requirements, and often obtain some information on program effectiveness, such as data on injuries and illnesses, the agency does not assess the overall impact of the programs on worksites’ safety and health.
Currently, OSHA’s assessments of the programs are at different stages of development, and the approaches vary: VPP—Presently, OSHA’s analysis of the program is limited to reviewing VPP sites’ annual injury and illness rates in the years immediately before they are approved for the program. However, because worksites often make safety and health improvements over a longer period in anticipation of their participation in the program but before they are approved, the rates OSHA reviews may not reflect changes in injury and illness rates from improvements made as a result of their participation. To assess the impact of VPP, OSHA contracted with a private firm in October 2003 to conduct an evaluation of the changes in participating employers’ injury and illness rates resulting from the program. The evaluation, to be completed in September 2004, will evaluate the impact of VPP from the point at which employers decide to apply for VPP until they are designated a VPP site. It will also determine the impact of VPP on other worksites through participating employers’ outreach and mentoring efforts and provide data on dollars spent by VPP sites on safety and health programs and cost savings from reduced workers’ compensation costs. However, because VPP does not require applicants to provide data on their injury and illness rates for the years prior to participation, OSHA will still be unable to systematically assess whether improvements in those rates resulted from program participation. State Consultation Program—OSHA has been assessing possible approaches for obtaining data on these programs, but doing so has been difficult because of the confidentiality that state programs provide to program participants. In an October 2001 report on the program, we suggested that OSHA collect additional data to use in evaluating its impact.
In 2002, an OSHA-sponsored evaluation of the program concluded that the program resulted in some positive outcomes, including that participating worksites (1) were cited for fewer serious violations if inspected by OSHA within 2 years of the consultation visit and (2) had larger average declines in lost workday injury and illness rates than other worksites. The report, however, noted that factors other than the consultation program might have contributed to these positive outcomes and that further analysis, particularly of the long-term effects of the program, would require the collection of more data. OSHA attempted to collect such information through a data initiative it uses to obtain information on the impact of its enforcement efforts. However, in 2002, the Office of Management and Budget denied OSHA permission to extend this data collection effort to all employers, including those with fewer than 40 employees. These employers represent a significant portion of the employers that participate in the State Consultation Program but are not presently addressed in the data initiative. Strategic Partnership Program—Currently, OSHA requires program participants to file annual evaluation reports. However, according to an OSHA-requested study of reports submitted through September 30, 2002, the agency did not collect consistent information from partnerships or use common performance measures. For example, some partnerships did not submit evaluation reports, while others provided incomplete or inconsistent information because OSHA allowed participants to select the types of data reported. Similarly, the U.S. Department of Labor’s Office of Inspector General, which also assessed the program, reported in September 2002 that there was insufficient information on five of the nine partnerships it analyzed to evaluate their impact. For example, one partnership only provided data on injuries and illnesses for 25 of its 222 participants.
In response to these studies, OSHA officials said that they were obtaining comments on a revised format for these reports that would include common data elements for all partnerships and that OSHA would then be able to establish a new database designed to track consistent measures across partnerships. The revised format for the partnership reports will be available in spring 2004, according to an OSHA official. Alliance Program—Goals for each alliance are individually developed and are often not readily measurable. Currently, OSHA monitors the goals and accomplishments of individual alliances by participating in quarterly meetings and preparing annual evaluations. OSHA officials told us that the agency has not yet developed an evaluation approach for the national program. Several representatives from alliances established in 2002 told us that they have not established a system for assessing the impact of their alliances, and some commented that this would be difficult, given the nature of their goals. For example, one alliance’s goals are to provide information and guidance to help protect employees’ safety and health—particularly from hazards likely to result in amputations and from ergonomic hazards—and to provide training to employers to help identify and correct these hazards. While the alliance has provided information to employers and workers on its Web sites and has developed and provided training, it is difficult to determine the impact of the alliance because companies also implement safety and health improvements on their own. Researchers, safety and health practitioners, and other specialists we interviewed suggested a variety of additional strategies for voluntary compliance, some of which might require legislative changes. Some strategies might help OSHA leverage its existing resources, and others suggest the need for additional resources.
The strategies that researchers and specialists proposed generally fell into four categories: (1) providing more incentives to encourage additional employers to voluntarily improve safety and health in the workplace; (2) promoting more systematic approaches to workplace safety and health; (3) focusing more specifically on high-hazard, high-injury workplaces; and (4) using third-party approaches to achieve voluntary compliance. While these strategies could be potentially useful and effective, according to specialists, they could also entail the need for safeguards, oversight, and enforcement. (See table 2.) Specific suggestions for additional financial incentives included (1) providing information to employers on the possible financial and other benefits of improving safety and health, (2) encouraging the use of workers’ compensation incentives for employers that participate in OSHA’s voluntary compliance programs, and (3) creating tax incentives for improvements. Another suggestion was to deter employers from continuing poor safety and health practices by publishing injury and illness rates for such worksites. Develop and Publicize Information on Financial Benefits—To counter employer assumptions that safety and health improvements would necessarily be costly, some specialists called for the agency to develop and publicize more industry-specific data about the financial and other benefits possible by investing in safety and health improvements. OSHA provides general information about the direct and indirect costs of injuries and illnesses on its Web site and is working to develop more industry data through its alliances and other voluntary programs. As useful as industry-specific information might be, especially to small employers, such proprietary data are difficult to obtain and expensive to develop, according to specialists. Developing this information might also be a better role for industry than for government.
In addition, improving safety and health could cost employers more money, not less. Encourage Workers’ Compensation Incentives—OSHA could encourage state programs and private insurers to consider employers’ participation in voluntary compliance programs when they calculate premiums for employers. Such incentives could include, for example, reductions in employers’ insurance premiums or credits for participation. For many employers, the possibility of achieving lower insurance premiums could be a significant motivator for improving workplace safety and health. However, because each state has its own laws governing workers’ compensation programs, it could prove challenging to create such financial incentives. Furthermore, although some insurers offer rate reductions to employers that participate in OSHA’s voluntary compliance programs, according to OSHA officials, the agency’s other attempts to work with insurers have not succeeded because the insurers did not want their clients to perceive them as agents of OSHA. Figure 10 describes the relationship between workers’ compensation and occupational safety and health programs in two states. Offer Tax Incentives for Capital and Other Improvements—Tax incentives, which would require changes in the tax code, may be especially useful for helping small employers, who might otherwise lack the resources, make safety and health improvements. Having a tax incentive would also signal to businesses that the government values such investments in safety and health. However, distinguishing business purposes from safety and health purposes can be difficult with tax incentives. Tax incentives could also tend to favor capital-intensive solutions to safety and health problems—such as the purchase of equipment—rather than behavioral or systematic solutions. They may also subsidize improvements that employers would have made in any case.
Finally, in addition to the potential for manipulation, using tax incentives could entail lost tax revenues without OSHA knowing their impact on safety and health. Publish Injury and Illness Rates for Employers’ Worksites—Gathering and publishing the injury and illness rates for employers’ worksites could build on market incentives to pressure employers to change their practices. For example, a subcontractor might find it difficult to obtain a contract because of liability concerns if it were known that the company had a high rate of workplace injuries. Currently, OSHA publishes the names of about 3,200 worksites identified through its site-specific targeting program as having high rates of injuries and illnesses, but it does not publish the actual rates for these worksites. However, there are several potential problems with this approach. First, injury and illness rates for one particular year may not accurately capture the performance of employers, especially small employers. Second, businesses would likely oppose the publication of these data because they view worksite injury and illness rates as confidential business information that, if published, could allow business competitors to glean information about a company’s productivity. In fact, OSHA took this position when it denied, in July 2003, a Freedom of Information Act appeal that sought to obtain rates for specific worksites: OSHA relied on an exemption protecting trade secrets and commercial or financial information and refused to disclose the information unless the parties in question approved. Finally, publicizing injury and illness data might also pressure employers to underreport injuries and illnesses, creating the need for further policies or legislation requiring full and accurate reporting and recording.
Specific suggestions from researchers and specialists included (1) encouraging employers that participate in OSHA’s voluntary programs to influence their contractors and suppliers to make safety and health improvements, (2) requiring certain employers to have a safety and health management program, and (3) requiring employers to have an employee-management safety and health committee.

Encourage Employers to Influence Contractors and Suppliers—To “influence the supply chain,” OSHA could encourage employers participating in its voluntary programs to consider suppliers’ and contractors’ safety and health records before making contracting decisions and encourage their suppliers and contractors to have a safety and health program. OSHA currently requires this approach of employers participating in VPP. Because employers are increasingly using contractors and temporary workers, focusing employers’ attention on contractors’ and suppliers’ safety and health records was considered a useful way to achieve some leverage in a changing economy. Safety and health problems at contractors and suppliers could also entail potential costs for employers or could indicate other forms of poor management, such as poor product quality. As an example of the effectiveness of this approach, one automaker grouped its suppliers into three tiers according to their safety and health records, and then each tier of suppliers pressured the lower tier to improve, according to a specialist we interviewed. However, the degree of an employer’s potential influence over suppliers and contractors could vary by employer size as well as by industry. For example, a Fortune 500 employer could have far more influence on its supply chain than a small body shop, and the construction industry, which relies on numerous subcontractors working under a general contractor, may be better able to influence subcontractors than other industries.
Implementing this approach may be difficult because suppliers and contractors may be unwilling to share their safety and health records or plans, and employers would need staff to conduct such reviews of their suppliers and contractors. It may be similarly difficult for OSHA to monitor and verify this process through employers participating in its voluntary programs.

Require Certain Employers to Have Safety and Health Management Programs—As discussed in a previous GAO report and testimony, OSHA could require certain employers—such as those with high injury and illness rates—to have safety and health management programs. The establishment of safety and health programs, including elements such as hazard prevention and control, is currently required for participants in the VPP, Strategic Partnership, and SHARP programs. Extending this approach to other employers could help prevent additional injuries and illnesses. It could also help employers respond more flexibly to advances in technology and other workplace issues than specific standards would. On the other hand, it could be difficult for OSHA to enforce employers’ use of safety and health programs, and small and mid-size employers may not have the information or tools to implement these programs without assistance. In our earlier testimony, we noted that reservations about these programs stem primarily from concern about implementation issues, rather than about their value.

Require Employee-Management Safety Committees—OSHA could issue a regulation requiring employers to have an employee-management safety committee at every worksite to investigate accidents, settle disputes, and provide information to management. The VPP does not require employers to have such a committee at every worksite, but does consider it one way to achieve employee involvement. If required for all employers, such committees might raise additional issues, including the determination of who would represent workers.
Specific suggestions for focusing more on high-hazard, high-injury workplaces included targeting employers with the highest levels of injury and illness for voluntary programs and having such employers choose between likely inspection and cooperative approaches with the agency. Target Employers with Highest Levels of Injury and Illness—OSHA could classify employers according to their level of injury and illness and focus the agency’s voluntary compliance efforts on those with the highest rates. While this suggestion was seen as a way for OSHA to best use its limited resources, according to specialists, it might also entail additional costs for data development. Allow Employers to Choose between Likely Inspection and Cooperative Approaches—Another suggestion was for OSHA to pursue a previously attempted strategy targeting employers with the highest rates of injury and illness. The agency informed these employers that they had been placed on a primary inspection list, but that they could reduce the likelihood of inspection if they chose to work cooperatively with the agency by fulfilling certain requirements. Now, when OSHA informs selected employers that they have among the highest injury and illness rates in the country, the agency refers employers to outside consultants, insurance carriers, and state workers’ compensation offices for advice on improving safety and health; employers with fewer than 250 employees are also referred to the State Consultation Program. Specific suggestions for third-party approaches included (1) supporting the development of a voluntary national or international standard for workplace safety and health and (2) using private consultants to conduct safety and health evaluations of worksites. 
Support Development of a Voluntary National or International Standard for Workplace Safety and Health—Having a voluntary national or international standard could help strengthen the infrastructure for workplace safety and health and could build on some employers’ desire for a widely recognized credential that could be useful to them in competing with other companies, especially in global markets. For example, employers can seek certification from independent organizations for achieving International Organization for Standardization standards. To a certain extent, OSHA is currently playing a role in the development of a voluntary standard, according to OSHA, since some OSHA staff members are assisting a national standards committee that is working on a safety and health standard. In addition, many industry associations are involved on this committee, OSHA staff noted. Limitations of this approach are that: (1) such standards are not mandatory, but serve more as flexible guidance, because they reflect agreements reached by committees; (2) although large employers competing in the international marketplace tend to seek out international standards credentials, these are not necessarily the employers needing OSHA’s attention; and (3) it can be difficult to set voluntary standards because organizations need to invest resources and provide appropriate expertise. Use Private Consultants to Conduct Safety and Health Evaluations—This suggestion would entail allowing employers to voluntarily use private consultants to conduct worksite safety and health evaluations and certify worksites, in return for incentives, such as a limited exemption from future inspections or reduced civil penalties. Using third-party, private-sector consultants to certify workplace safety and health was also proposed in the late 1990s as an amendment to the OSH Act—known as the SAFE Act. 
Using consultants could leverage existing OSHA resources by helping workplaces that might never otherwise see an OSHA inspector, especially small employers, and possibly also by enabling employers to address additional safety and health issues that might not be covered under an OSHA inspection for compliance with standards. At the same time, using consultants also raises various implementation, oversight, and legal issues described below. (a) Implementation—A key issue is that consultants’ independence may be compromised if employers paid consultants directly for conducting audits and certifying employers. Employers might also need more than one consultant to conduct a comprehensive review of both safety and health issues. Finally, the use of consultants would set the federal program in competition with the State Consultation Program, according to OSHA officials. (b) Oversight—One issue is what kind of oversight is possible when employers will not—or cannot—make improvements that consultants recommend. Another is that differences in consultants’ focus would create inconsistencies in the certification process, since a workplace evaluation could focus on compliance with OSHA standards or on the broader safety and health environment, as under OSHA’s VPP, Partnership, and SHARP programs. (c) Legality—Finally, constitutional issues have been raised as to whether OSHA can use private consultants, as envisioned by the SAFE Act, to conduct safety and health evaluations of employers’ worksites and to issue certificates of compliance, exempting employers from civil penalties for a limited period of time. For example, when Congress was considering the SAFE Act, the Justice Department argued that the act might be unconstitutional because, among other things, it delegated executive functions to private entities without providing adequate supervision or accountability for their activities. 
The Senate Committee on Health, Education, Labor, and Pensions, which had jurisdiction over the legislation, disagreed with Justice’s arguments, asserting that they reflected a misunderstanding of the proposed role and authority of third- party consultants. By many accounts, OSHA’s voluntary compliance strategies have improved employers’ safety and health practices by allowing the agency to play a collaborative, rather than a policing, role with employers. The testimony and enthusiasm of participants suggests that OSHA’s voluntary compliance programs have considerable value. The agency has begun to develop performance measures and collect data on some program outcomes, as well as undertake efforts to evaluate its programs, such as contracting for a VPP evaluation and revising the performance evaluation format for the Partnership program. However, because OSHA does not yet have comprehensive data on its voluntary compliance programs, the agency cannot fully assess the effectiveness of any single program or compare the relative effectiveness of the programs. OSHA should position itself to know, for example, the relative effectiveness of programs that focus on employers predisposed to following good safety and health practices as compared to those that attempt to reach employers and industries with poor safety and health records. Without such information, the agency is also limited in its ability to make sound decisions about how to best allocate its resources among individual programs, or between voluntary compliance programs and its other activities, particularly enforcement. After several years of experimentation and growth, this is an opportune time for OSHA to determine how to best target its voluntary compliance efforts. Having a mix of strategies appears useful in reaching different types of employers and industries. 
At the same time, having such a mix may unduly tax the agency’s resources unless it is accompanied by a comprehensive, strategic framework that establishes priorities and defines how these strategies fit together to accomplish the overall goals of the agency. Absent such a strategic framework, OSHA cannot ensure that it is making the best use of its resources to improve workplace safety and health. Furthermore, the agency must balance its plans to expand its voluntary compliance programs with its enforcement responsibilities. Given OSHA’s current resources, it is unclear how it can undertake much expansion without a careful assessment of the impact on its resources and other programs. Unless it has such an assessment, OSHA runs the risk of compromising the quality of its voluntary compliance programs.

In order to strengthen OSHA’s voluntary compliance strategies, the Secretary of Labor should direct the Assistant Secretary for Occupational Safety and Health to (1) identify cost-effective methods of collecting complete, comparable data on program outcomes for the VPP and Partnership programs to use in assessing their effectiveness; (2) continue to search for cost-effective approaches that will enable the agency to assess the effectiveness of the State Consultation and Alliance programs; and (3) develop a strategic framework that articulates the purposes and distinctions of the different voluntary compliance programs, sets priorities among these programs, and identifies how the agency’s resources should be allocated among these programs, before further expanding them.

We provided a draft of this report to OSHA for comment. OSHA’s formal comments and our responses are contained in appendix I. In addition to its written comments, OSHA provided us with technical comments, which we incorporated as appropriate. OSHA generally agreed with our findings, conclusions, and recommendations.
The agency asserted, however, that we had based our recommendations on a small sample of worksites and that our methodology for selecting researchers and specialists was not scientific and was subject to biases. We did not base our recommendations on site-specific findings or on interviews with researchers and specialists, but rather on programwide data. More specifically, our recommendations were based on our analyses of OSHA’s program requirements and program data as well as the findings and conclusions reported in the OSHA-sponsored and Inspector General evaluations of these programs that were cited in our report. Although our selection of researchers and specialists was, by necessity, judgmental, we sought to obtain a broad, balanced range of perspectives and expertise about the programs’ effectiveness.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Labor, the Assistant Secretary of Labor for Occupational Safety and Health, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me or Revae Moran at (202) 512-7215 if you or your staff have any questions about this report. Other contacts and staff acknowledgments are listed in appendix II.

1. We added information on page 20 to show that OSHA’s enforcement efforts, as measured by the number of inspections, have remained constant or increased slightly each year from 1996 to 2003.

2. We did not base our recommendations on site-specific findings or on interviews with researchers and specialists, but rather on programwide data.

Carol L. Patey and Christine A. Houle made significant contributions to this report in all aspects of the work and Leslie C.
Ross assisted during the information-gathering segment of the assignment. In addition, Susan C. Bernstein, Julian P. Klazkin, Lynn M. Musser, and Walter K. Vance provided key technical and legal assistance.

Because the Occupational Safety and Health Administration (OSHA) can inspect only a fraction of 7 million U.S. worksites each year in its efforts to ensure safe and healthy working conditions, the agency has increasingly supplemented enforcement with "voluntary compliance strategies" to reach more employers and employ its resources most effectively. GAO assessed the types of strategies used, the extent of their use, and their effectiveness. GAO also obtained suggestions from specialists for additional voluntary compliance strategies.

OSHA has implemented four voluntary programs, using a mix of strategies, that have extended its reach to a growing number of employers. For example, one program recognizes more than 1,000 worksites with exemplary records and practices while another focuses on hazardous industries, encouraging more than 200 employers to eliminate serious hazards. The agency plans to significantly expand its voluntary compliance programs over the next few years, although such expansion may tax its limited resources. OSHA's voluntary compliance programs appear to have yielded many positive outcomes, but the agency does not yet have adequate data to assess their individual and relative effectiveness. Employers and employees at nine worksites we visited attested to reductions in injuries and illnesses and improved relationships with one another and with OSHA. However, the agency has just begun to evaluate its programs and much of its data are insufficient for evaluation. For example, data on one program are inconsistent, making comparisons difficult, and goals for another program are individually developed and not readily measurable. The lack of such data makes it difficult for OSHA to articulate priorities and necessary resource allocations.
The additional strategies that researchers and specialists suggested generally fell into four categories: providing more incentives to encourage additional employers to voluntarily improve workplace safety and health; promoting more systematic approaches to workplace safety and health; focusing more specifically on high-hazard, high-injury workplaces; and using third-party approaches to achieve voluntary compliance.
GPRAMA made a number of changes to agency performance management roles, and provided the officials in these roles with specific duties. Among other things, the requirements for these roles reflected Congress’s intention to increase accountability of senior agency leadership for performance and results. Although these roles existed at some agencies prior to GPRAMA, it established them in law, added responsibilities, and elevated some of them. Later OMB guidance established additional performance management roles related to implementation of GPRAMA. The primary roles with responsibilities under GPRAMA and in OMB guidance are:

Agency head: GPRAMA gave each agency’s head broad responsibility for performance management. Among other things, the agency head is responsible for identifying agency priority goals and, along with the COO discussed in the next paragraph, conducting quarterly priority goal progress reviews.

Chief operating officer: The COO role existed at agencies prior to GPRAMA’s enactment, with responsibilities such as improving agency management and performance outlined in two presidential memoranda. GPRAMA maintained these previously established responsibilities, and added others to bring them in line with other GPRAMA requirements. It also required that the deputy agency head or equivalent serve as COO.

Performance improvement officer: The PIO role was created by a 2007 executive order. GPRAMA established the role in law and elevated it, specifying that it be given to a “senior executive” at each agency and that the PIO report directly to the agency’s COO. The various duties of the PIO include advising the agency head and COO on goal-setting and measurement and reviewing progress toward agency priority goals.
Deputy performance improvement officer: The deputy PIO role was not included in GPRAMA, but later OMB guidance directed agencies with a PIO who is a political appointee or other official with a limited-term appointment to also appoint a career senior executive as deputy PIO.

Goal leader: GPRAMA directs agencies to identify a goal leader who is responsible for each priority goal and make this information available to OMB to be published online. Leaders for agency priority goals are identified, with their photos, on the performance.gov website. A similar position existed at some agencies prior to GPRAMA, as earlier OMB guidance had encouraged agencies to identify officials who were responsible for High Priority Performance Goals (these goals have since been renamed as agency priority goals).

Deputy goal leader: The deputy goal leader role was not included in GPRAMA, but later OMB guidance directed agencies to identify a deputy goal leader to support the goal leader. In cases where the goal leader was a political appointee, OMB encouraged agencies to assign a career senior executive as the deputy.

GPRAMA also assigned responsibilities to OPM related to agency performance management. By January 2012, the agency was to identify skills and competencies needed by government personnel for setting goals, evaluating programs, and analyzing and using performance information for improving government efficiency and effectiveness. GPRAMA also directed OPM, by January 2013, to incorporate these competencies into relevant position classifications and to work with each agency to incorporate the skills and competencies into employee training.

OMB has a leadership and coordinating role in agency implementation of GPRAMA. OMB issued guidance on implementation through memoranda and in Circular A-11.
Under GPRAMA, OMB is to ensure the operation of a public website that includes information on cross-agency priority goals and agency priority goals, among other performance-related information. This information is included on the performance.gov website. OMB’s Deputy Director for Management, or his or her designee, is directed to chair the PIC and preside at its meetings, determine meeting agendas, direct its work, and establish and direct its subgroups. GPRAMA also included specific requirements for the PIC. The PIC was initially created by a 2007 executive order, but GPRAMA established it in law and included additional specific responsibilities. In addition to directing OMB’s Deputy Director for Management to chair the PIC, GPRAMA specified that council membership include the PIOs from the 24 CFO Act agencies, as well as any other PIOs and individuals identified by OMB’s Deputy Director for Management. The PIC’s duties are detailed in GPRAMA and later OMB guidance and include facilitating the exchange of useful practices and developing tips and tools to strengthen agency performance management. According to the PIC’s Executive Director, the PIC and the Executive Director are supported by two federal employees and four contractors. The PIC also typically has two to four detailees from other federal agencies. The PIC is administratively located within the General Services Administration’s Office of Executive Councils. According to a PIC staff member, the Office of Executive Councils provides infrastructure, analytical support, and project management capacity to the PIC and other interagency management councils, such as the Chief Financial Officers Council.

The 24 CFO Act agencies have all assigned senior-level officials to the key performance management roles—chief operating officer, performance improvement officer, and goal leader—required under GPRAMA, according to OMB and the results of our PIO survey.
Figures 1 and 2 illustrate the performance management leadership teams at HHS and NSF, respectively. Chief operating officer. GPRAMA’s requirement that each agency’s deputy head, or equivalent, take on the role of COO helped to ensure high-level involvement in performance management. GPRAMA required the COO to be involved in activities such as quarterly performance reviews. As we discuss later, most (21) PIOs we surveyed reported that their agencies’ COOs were involved in quarterly performance reviews to a large extent. The COOs at both HHS and NSF told us they were involved in selecting their agencies’ priority goals and they chaired the quarterly performance review meetings. HHS’s COO said that he considered it his role to make sure that everyone in the agency paid attention to performance and knew that he considered it important. NSF’s COO said that she saw performance management as a primary concern and integrally connected to all aspects of her work at NSF. Performance improvement officer. Although the PIO role existed prior to GPRAMA, PIOs were not required to report directly to the COO until GPRAMA was enacted. GPRAMA elevated the role by putting this requirement in place. According to our PIO survey, at 8 agencies the PIO who was already in place took on the additional responsibilities required by GPRAMA. At the other 16 agencies, the current PIO began in the role after the implementation of GPRAMA. All PIOs we surveyed reported that they have other agency roles in addition to being the PIO. Most PIOs (21) reported that these roles gave them access and authority that has been helpful in the PIO role, and most (20) reported that these roles gave them knowledge and experience that has been helpful in the PIO role. These roles included planning, administration, management, and budget and finance. The PIOs at HHS and NSF were both also chief financial officers (CFO), and told us that their other agency roles helped them in their PIO roles. 
HHS’s PIO said that her joint role allowed her to align budget development with performance management—she made sure that the budget was built so that dollars were spent on efforts that would be able to perform well. NSF’s PIO told us that the two roles worked well together because much of the data used to evaluate performance at the agency was maintained by the agency’s budget division, which also reports to her in her CFO capacity. NSF’s COO noted that the PIO role at NSF was designated for reasons relating more to the individual, including the PIO’s past experience with performance management, than to her role as CFO. The COO said that NSF’s next PIO may not necessarily be the person in the CFO role. Although it was common for the PIOs we surveyed to also have the CFO role, PIOs whose other roles were in planning, administration, and management reported that those roles were helpful as well. Nearly all PIOs (23) reported that their level of authority and access to agency leadership helped them perform their duties. Additionally, almost half of surveyed PIOs (11) reported directly to the agency head in their non-PIO role; they had even higher level authority and access to leadership than the PIO role alone would have provided. OMB staff added that because they were senior level officials, PIOs had the authority and ability to assemble the right people to implement performance management at their agencies. Our survey results did not indicate great differences between political appointees and career civil servants in carrying out PIO duties. OMB staff told us that the requirement that agencies with political appointee PIOs have career civil servant deputy PIOs was in place to address the higher likelihood of turnover among political appointees. However, our PIO survey results did not indicate great differences in either time spent on PIO duties or turnover so far. 
Additionally, nearly all the agencies (22), not just those with political appointee PIOs, have chosen to designate deputy PIOs. Goal leader. Agencies assigned senior-level officials with expertise in the goal to take on the role of priority goal leader. This was true at HHS and NSF, and was also true across agencies, according to OMB staff. Officials at NSF and HHS said that priority goals had senior-level leaders who helped to bring attention and resources to the goal. For example, the U.S. Assistant Secretary for Health was one of two goal leaders for HHS’s priority goal on reducing tobacco consumption. He previously worked on tobacco cessation at the state level, was a public health professor, and published journal articles on tobacco control and health promotion. The goal’s other leader told us that because the Assistant Secretary was widely known within the public health community, his involvement brought respect and attention to the goal. As discussed previously, the PIO and goal leader positions were filled by senior-level agency officials. Most of these officials had other responsibilities, such as serving as a CFO. HHS and NSF officials told us that while the PIO and goal leader roles were performed by senior-level officials at the highest levels of the agency, they relied on deputies who generally managed the day-to-day aspects of performance management. Deputy performance improvement officer. Nearly all (22) of the CFO Act agencies have assigned officials to the Deputy PIO role, according to our PIO survey. Both HHS and NSF have assigned staff to the deputy PIO role. Officials at HHS and NSF also told us that PIOs tended to provide high-level vision and oversight and be a voice for performance management at their agencies, while deputy PIOs handled the day-to-day management of performance management for the agency. 
According to officials at HHS and NSF, responsibilities of deputy PIOs at these agencies included, among other things, coordinating with priority goal leaders, preparing for agency quarterly performance reviews, and attending PIC meetings, as appropriate. PIOs reported that most deputies devoted half or more of their time to performance-related duties. Deputy PIOs also had other roles and titles at agencies, similar to the PIOs. Based on our analysis of their other titles, deputy PIOs most commonly had other roles also related to performance, while others filled roles in areas such as budget and finance and administration and management. Deputy goal leader. HHS and NSF both assigned officials to deputy goal leader roles to support most of their goal leaders. As with the PIO/deputy PIO division, the goal leaders at our case study agencies were senior-level officials. According to agency officials, goal leaders provided high-level vision and oversight and lent their influence to ensure that the goal was prioritized, while their deputy goal leaders managed the goal on a day-to-day basis. For example, deputy goal leaders at these agencies were responsible for monitoring staff carrying out the goal and preparing reports on the goal for quarterly performance reviews. Two goal leaders we spoke with—one at HHS and one at NSF—had two deputy goal leaders. Moreover, at NSF, one goal leader we spoke with was supported by a deputy from a different operating division. The goal leader told us that this structure provided a cross-agency perspective and facilitated coordination. PIOs reported that they and other key performance management officials at their agencies were involved in central aspects of performance management. 
We asked PIOs whether agency heads, COOs, PIOs, deputy PIOs, and goal leaders had large, moderate, small, or no involvement in four primary tasks that summarize the performance management responsibilities required by GPRAMA: strategic and performance planning and goal setting; performance measurement and analysis; communicating agency progress toward goals; and agency quarterly performance reviews. As shown in figure 3, the PIOs we surveyed reported that most performance management officials had large involvement in these key tasks. Officials at HHS and NSF emphasized the importance of commitment to performance management at all levels, which was in part reflected in officials’ involvement in these key aspects. PIOs who reported large involvement for themselves generally reported larger involvement for other officials, suggesting that agencies with a strong commitment to performance management were following this philosophy.

GPRAMA directed OPM to take certain actions to support agency hiring and training of performance management staff. As noted earlier, OPM, in consultation with the PIC, was charged with three responsibilities under GPRAMA: (1) identify key skills and competencies needed by performance management staff; (2) incorporate these skills and competencies into relevant position classifications; and (3) work with agencies to incorporate these key skills into agency training. OPM has completed its work on its first two responsibilities, and is working to support agency training. OPM identified 15 core competencies for performance management staff, in accordance with GPRAMA, and published them in a January 2012 memorandum from the OPM Director. (See app. III for a list of the competencies with definitions.) OPM also identified competencies needed by PIOs and goal leaders. A manager of classification and assessment policy at OPM told us that OPM’s work to identify these competencies included a review of GPRAMA and related information.
OPM also worked with a PIC working group focused on capability building to review the competencies it identified. Figure 4 shows PIOs’ responses to our survey question about the extent to which their staff had these competencies. PIOs generally reported that their staff had the competencies identified by OPM to a large extent, although they reported that the competencies below were not as widespread as others: Performance measurement: the knowledge of the principles and methods for evaluating program or organizational performance using financial and nonfinancial measures. Information management: the ability to identify a need for information, know how to gather it, and organize and maintain information or information management systems. Organizational performance analysis: the knowledge of the methods and tools used to analyze program, organizational, and mission performance. Planning and evaluating: the ability to organize work, set priorities, and determine resource requirements. This includes determining short- or long-term goals and strategies to achieve them, coordinating with other organizations or parts of the organization to accomplish goals, and monitoring progress and evaluating outcomes. Some individual agencies found competency gaps in similar areas. HHS’s Deputy PIO told us that further improvement of HHS’s performance staff’s analytical skills would help the agency to more effectively implement GPRAMA. Also, as we reported in February 2013, Small Business Administration (SBA) officials identified a skills gap among some of their staff in working with data. OPM officials had planned to develop a competency assessment tool that could be used to determine needs at each individual agency. Developing the tool was identified as a critical and manageable “next step” at a January 2012 meeting focused on incorporating key performance management competencies into agency training.
Meeting participants included OMB, OPM, and agency members of the Chief Learning Officers (CLO) Council, which facilitates collaboration among CLOs. OPM officials told us that they took action to follow up on other “next steps” related to training identified at the meeting, which are discussed in the following section. However, an OPM official relayed to us that at this time, OPM does not plan to conduct a formal competency assessment using a competency assessment tool. A group manager in OPM’s Training and Executive Development division told us that the agency is focused on identifying critical skills gaps across the federal government. She said that some of the government-wide skills that OPM plans to focus on are related to skills needed for performance management, such as data analysis. OPM identified relevant position classifications that are related to the competencies for performance management staff, and worked with the PIC Capability Building working group to develop related guidance and tools for agencies. A manager of Classification and Assessment Policy at OPM told us that the competencies best fit into an existing classification series for management and program analysts. In addition, OPM worked with the PIC’s Capability Building working group to develop position descriptions for performance management staff. In December 2012, the Capability Building working group released to agencies a draft performance analyst position design, recruitment, and selection toolkit. The draft toolkit included position description templates for performance analysts, job opportunity announcement templates, and recruiting resources, among other information. NSF officials told us that they found the Capability Building working group’s performance analyst position description helpful and used it to develop the agency’s deputy PIO position, which has been recently filled. The toolkit may also be of use to other agencies planning to hire new performance management staff. 
About half (11) of PIOs reported that their agencies planned to hire new staff, in addition to training existing staff, in order to address competency gaps. According to OPM, the 15 core competencies for performance management staff are moderately to highly trainable, and OPM has taken steps to work with agencies to incorporate the competencies into training programs for relevant staff. Most (18) PIOs reported that their agencies planned to train staff in order to strengthen their performance management competencies. OPM’s Director stated in a January 2012 memorandum that the agency would work with CLOs to incorporate GPRAMA competencies into agency training programs. OPM worked with the CLO Council and the PIC Capability Building working group to develop a website—the Training and Development Policy Wiki—that lists some training resources for performance management staff. OPM also sponsored two webcasts focused on sharing agency experiences using performance management tools. A manager in OPM’s Training and Executive Development division told us that OPM was developing an interactive, online course focused on writing measurable performance goals that align with organizational goals, which she expected would be completed by July 2013. According to the official, both the webcasts and the work on the online course were the result of “next steps” identified at the January 2012 meeting between OPM, OMB, and members of the CLO Council. OPM Human Resources specialists said they were also working to help the PIC to develop a website with more extensive resources, including information on training as well as performance management career path information. According to OPM officials and the PIC’s Executive Director, the PIC will develop the content for this website and it will be modeled on OPM’s Human Resources University website, which provides human resources career path information and links to related training. 
A Workforce Development Manager at OPM said the agency will support the PIC on the technical aspects of the site based on its experience developing the Human Resources University site. According to OMB staff and OPM, the performance management website is scheduled to launch by the end of 2013. In addition to OPM’s actions on performance management training, individual agencies have taken action to develop their own performance management training to address competency gaps. For example, as we described in our recent report on quarterly performance reviews, SBA developed courses focused on skills such as spreadsheet development and analysis, presentation delivery, and other analytic and presentation skills. Fewer than half (9) of the PIOs we surveyed rated the level of access to and availability of performance management training at their agencies as helpful, as shown in figure 5. OPM’s efforts so far to work with agencies to incorporate performance improvement skills and competencies into agency training have been relatively broad-based and have not been informed by specific assessments of agency training needs. As described earlier, the agency has not followed through on its plan to measure agency staff competency levels in key areas required for performance management. Our survey results suggest that certain areas need to be improved, but without a more comprehensive assessment, it will be difficult for OPM to target its efforts—both to identify training that addresses agency needs, and to make training available through its performance website, which is under development, or through other means. PIOs we surveyed generally found the PIC’s work to be helpful to their agencies. We asked the PIOs to rate the helpfulness of selected functions that GPRAMA and OMB guidance direct the PIC to perform. As shown in figure 6, PIOs generally rated the PIC’s work in these areas as helpful.
The PIC’s work promoting communication and developing tools incorporated several practices that our past work has identified as necessary for building collaborative working relationships. These include establishing means to operate across agency boundaries and identifying and addressing needs by leveraging resources. Most (17) PIOs we surveyed reported that they have been able to apply successful practices and other information and tools shared by the PIC. PIOs surveyed reported some examples of information shared by the PIC that they applied at their agencies, including information on performance management positions, goal-setting, and quarterly performance reviews. PIOs we surveyed and agency officials we interviewed reported that the PIC has been particularly helpful in facilitating the exchange of successful practices among agencies. An OMB staff member told us that facilitating this type of exchange was the PIC’s greatest strength, and that doing so also helped the group identify best practices. As shown in figure 6, just over half of PIOs reported that the PIC was very helpful in this area. Senior agency officials we interviewed and PIOs we surveyed provided examples of ways in which the PIC facilitated information exchange. For example, a PIO we surveyed reported that his agency’s approach to quarterly performance reviews was informed by examples shared by other agencies in the PIC’s Internal Agency Reviews working group. An OMB staff member told us that in addition to helping agencies, the PIC’s facilitation of information exchange has benefited OMB. One of the tasks GPRAMA charged the PIC with was submitting to OMB recommendations to streamline and improve performance management policies and requirements. PIOs we surveyed generally reported that this function was helpful to their agencies. An OMB staff member told us that the PIC helped OMB staff determine best practices and use that information to inform policy.
For example, they said that in response to feedback from PIC members, OMB added information on “other indicators” to its 2012 Circular A-11 guidance. Another area in which PIOs reported that the PIC was particularly helpful was in developing and providing tips, tools, training, and other capacity-building mechanisms. Half of PIOs reported that the PIC was very helpful in this area. PIOs we surveyed and officials we interviewed reported various ways in which their agencies have used PIC information. For example, five of the PIOs we surveyed reported that their agencies used PIC information on goal setting. The PIC holds two types of meetings—a “principals only” meeting open to PIOs only, and a broader meeting open to PIOs as well as other agency staff—both of which are well attended, according to PIOs we surveyed. OMB staff told us that these two types of meetings were generally held on alternating months. As shown in figure 7, most of the PIOs we surveyed told us that they regularly attended the “principals only” PIC meetings. OMB staff told us that in order to encourage senior-level attendance, PIOs were not permitted to send substitutes in their place. Most PIOs also reported that their deputies or other staff members regularly attended the broader PIC meetings. OMB staff estimated that two to three representatives from each agency typically attended these meetings. In our previous work, we identified regular participation in activities such as meetings as an important feature of effective collaboration. Agency participation in PIC working groups was also strong, and PIOs and other agency officials reported using information and products shared through working groups. The PIC established five working groups, three of which were actively meeting at the time of our review, to focus on specific topics (see table 1).
According to the PIC’s Executive Director, PIC working groups focused on issues related to implementation of GPRAMA and related guidance and provided a forum for staff from different agencies who were working on similar issues to connect with each other. He told us that the PIC identified the working group topics based on informal input from PIC members, though the council might in the future solicit more formal input through a survey. Agencies could also participate in separate OMB working groups that focused on informing policy and guidance. For example, at the time of our review, an OMB working group was focusing on informing guidance for strategic planning required under GPRAMA. In addition to the three working groups described in table 1, there were two PIC working groups—the Goal Setting and the Benefits Processing working groups—that no longer regularly met, according to OMB staff. OMB and PIC staff said that these groups could restart again in the future, if agency needs arise. The Goal Setting working group focused on helping agencies set priority goals for fiscal year 2013, and produced a draft guide to goal setting. OMB staff said that the group may start meeting again to focus on strategic goals and objectives or on the next round of priority goal setting. In addition, the Benefits Processing working group, which focused on promoting consistency in agencies’ benefits processing, was no longer regularly meeting because it had completed its tasks. Most (18) PIOs we surveyed reported that they or other staff members from their agencies participated in at least one working group, with some agencies participating in multiple groups and one agency participating in all five. These 18 agencies participated in an average of three working groups each, according to PIOs. PIOs reported that they generally did not participate personally in working groups. At HHS, the Deputy PIO said that both he and members of his team participate in working groups. 
PIOs we surveyed and agency officials we interviewed reported using working group products or information. For example, a PIO reported on the survey that she used the Goal Setting group’s guide on developing priority goals. According to representatives of small agencies, PIC and OMB staff effectively coordinated with them and were receptive to their feedback. GPRAMA directed the PIC to coordinate with nonmember agencies, which include most small agencies. According to a representative of the Small Agency Council (SAC), a management association of small agencies, PIC meetings generally addressed issues affecting the CFO Act agencies. She said that she understood the PIC’s focus, as larger agencies generally have more consistency in their implementation of GPRAMA requirements. However, she further stated that smaller agencies may require more assistance due to their more diverse missions and fewer resources. Although the broader PIC meetings were open to all agencies, the SAC representative told us that few small agencies found that the PIC met their needs. Instead, PIC and OMB staff communicated with small agencies through the SAC’s Performance Improvement Committee, which was established in March 2011. This committee is similar to the PIC in its attention to implementation of GPRAMA, but meeting objectives focus on issues facing small agencies. According to SAC representatives, the committee functions as a way for small agencies to give voice to their concerns, as well as a forum for OMB and the PIC to focus on the needs of small agencies, which may have unique issues in implementing GPRAMA and other requirements. SAC representatives told us that they have been satisfied with the support provided by OMB and the PIC in both of these areas. SAC Performance Improvement Committee meeting agendas from recent months included presentations from PIC and OMB staff. 
For example, a recent meeting included presentations from OMB staff on updated guidance contained in OMB’s Circular A-11 and on small agencies’ use of the performance.gov website. In addition to coordinating with nonmember agencies, GPRAMA directed the PIC to coordinate with other interagency management councils. While two PIOs we surveyed reported that this coordination was very helpful, most (16) rated this function as moderately helpful. GPRAMA and OMB guidance specify the PIC’s functions and roles, but the PIC has not regularly assessed how well it has been fulfilling these roles. GPRAMA directed the PIC to perform several functions, such as helping agencies share practices that have led to performance improvements. According to the PIC’s Executive Director, the PIC conducted a survey of PIOs prior to the enactment of GPRAMA. It also surveyed attendees of a January 16, 2013, PIC meeting, one of the broader meetings that is open to PIOs as well as other agency staff. The survey covered topics such as participants’ expectations for the meeting and assessments of the usefulness of the agenda items covered. However, the PIC has not done this on a regular basis or gathered member feedback about its overall performance. As we have previously reported, practices that help to sustain collaboration include having federal agencies create the means to monitor and evaluate their collaborative efforts to enable them to identify areas for improvement. Although our survey indicated that PIOs generally found the PIC’s work helpful in selected areas, without more comprehensive and regular assessment of member opinions, it will be difficult for the PIC to ensure that, going forward, it is meeting its members’ current and emerging needs. The PIC’s Executive Director, who started in this position in November 2012, told us that he was considering conducting a survey of PIC members as well as administering evaluation forms at the end of every meeting.
Regularly soliciting feedback allows organizations to monitor member input on an ongoing basis. For example, the SAC Performance Improvement Committee administers an evaluation form at each of its meetings. These forms allow members to rate the usefulness of the meeting and their satisfaction with particular aspects of it and to suggest topics for upcoming meetings. Without formal and regular member feedback, the PIC is missing opportunities to tap a resource for identifying topics for future working groups and PIC meetings. PIC staff told us that identification of working group topics and meeting agenda items was generally based on informal input. Our review of PIC meeting agendas from February 2009 through September 2012 showed that since the enactment of GPRAMA in January 2011, both the “principals only” and broader PIC meetings focused on issues related to GPRAMA implementation. Going forward, PIC members’ needs will naturally evolve as GPRAMA implementation deepens within agencies and new questions and issues arise. The PIC’s Executive Director told us that the topic areas covered by the PIC in the future will most likely include new issues related to GPRAMA implementation, and an increased emphasis on cross-agency connections. Additionally, the PIC has not updated its strategic plan since GPRAMA was enacted in January 2011. OMB staff provided us with a copy of the PIC’s Strategic Action Plan, which was implemented in January 2009 and covered fiscal years 2009 through 2013. This plan included four strategic goals, along with objectives and implementing strategies for each. OMB staff also provided us with information from a 2010 update of the strategic plan that focused on two new PIC goals. As we have previously reported, practices that help ensure effective collaboration include the use of strategic plans as tools to drive collaboration and establish goals and strategies for achieving results. 
Our prior work also identified several leading practices in federal strategic planning, among them that organizations involve stakeholders in strategic planning. Although the PIC has a strategic plan in place, the PIC last updated the plan prior to the enactment of GPRAMA. An up-to-date strategic plan that incorporates the input of its members and reflects the changes in federal performance management required by GPRAMA could help the PIC be reasonably assured that it has established a framework to effectively guide and assess its work. The PIC’s Executive Director told us that he intended to work with the PIC to update the strategic plan, which will be informed by PIC member feedback. Senior agency officials’ commitment to and accountability for improving performance are important factors in determining the success of performance and management improvement initiatives. Through our PIO survey, we found that officials with responsibilities under GPRAMA were greatly involved in central, key aspects of performance management. These officials were supported by performance management staff, and PIOs reported that they were generally satisfied with their staff skills. However, our survey results showed that PIOs believed that certain competencies could be strengthened. OPM planned to directly assess performance management competency gaps at agencies, but has not yet done so. An assessment directly focused on performance management competencies could provide information on any gaps and inform agencies’ efforts to address them. OPM could also use this information to ensure that its work with agencies to incorporate competencies into training for agency staff, as required by GPRAMA, is effective. In particular, this information could inform OPM’s coordination of the sharing of training resources among agencies, both through its Training and Development Policy Wiki website and the website planned for performance management professionals. 
Through these websites, OPM could target resources to areas in which it has identified competency gaps. Such sharing of agency training resources offers the opportunity to maximize efficiency, and OPM is well positioned to play a coordinating role in this area through its expertise and relationships with the PIC and CLO Council. OPM has worked with these councils on GPRAMA-related training in the past. Additionally, the councils include representatives from across government with expertise in performance management and training, so they provide OPM with the ability to efficiently obtain input and share resources on performance management training. The PIC plays a significant role in agency implementation of GPRAMA, and our survey results indicate that PIOs generally found the PIC helpful to their agencies. Both GPRAMA and related OMB guidance described the PIC’s functions, but the PIC has not regularly collected member feedback on its own performance, and has not updated its strategic plan since GPRAMA was enacted in January 2011. Functions such as creating working groups and developing meeting agendas have been based on informal feedback. Regularly collecting formal feedback from members, such as through a survey, would help the PIC identify areas in which it could improve to maintain its usefulness, as well as new areas on which member agencies would like it to focus its meetings and working groups. Obtaining such feedback would allow the PIC to monitor its performance and identify issues as they arise. This will be particularly important as agencies become more accustomed to GPRAMA processes and their needs change. In addition, the PIC lacks an up-to-date strategic plan. Its most recent update was in 2010, so it does not reflect any changes in goals or priorities that may have resulted from GPRAMA. An up-to-date plan could provide the PIC with a basis for directing and evaluating its performance in implementing GPRAMA.
A strategic plan that incorporates input from PIC members could also serve as a tool for encouraging collaboration and reinforcing accountability. To improve performance management staff capacity to support performance management in federal agencies, we recommend that the Director of OPM, in coordination with the PIC and the CLO Council, work with agencies to take the following three actions: Identify competency areas needing improvement within agencies. Identify agency training that focuses on needed performance management competencies. Share information about available agency training on competency areas needing improvement. To ensure that the PIC has a clear plan for accomplishing its goals and evaluating its progress, we recommend that the Director of OMB work with the PIC to take the following two actions: Collect formal feedback on the performance of the PIC from member agencies on an ongoing basis. Update its strategic plan and review the PIC’s goals, measures, and strategies for achieving performance, and revise them if appropriate. We provided a draft of this report to the Acting Director of OMB, Director of OPM, Secretary of HHS, and Director of NSF for review and comment. OMB staff agreed with our recommendation that it work with the PIC to collect regular feedback on the PIC’s performance, and update the PIC’s strategic plan and review the PIC’s goals, measures, and strategies for achieving performance. The staff also provided technical comments, which we incorporated as appropriate. OPM agreed with our recommendation that it identify competency areas needing improvement in agencies, and use this information to identify and share information about training that focuses on needed performance management competencies. OPM explained that it will work with agencies, and in particular with PIOs, to assess the competencies of the performance management workforce.
OPM also stated that it will support the use of the PIC’s performance learning website to facilitate the identification and sharing of training related to competencies in need of improvement. OPM’s written comments are reprinted in appendix V. HHS did not have comments. NSF provided technical comments, which we incorporated. We are sending copies of this report to the Acting Directors of OMB, OPM, and NSF, and the Secretary of HHS, as well as the appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The Government Performance and Results Act Modernization Act of 2010 (GPRAMA) requires GAO to review the act’s implementation, and this report is part of a series of reviews planned around the requirement. The objectives of this report are to (1) examine the status of federal agency implementation of the performance management leadership roles under GPRAMA; and (2) evaluate the role of the PIC in facilitating the exchange of best practices and improving program management and performance. To achieve our objectives, we focused our review on the 24 agencies covered by the Chief Financial Officers Act of 1990 (CFO Act). Several provisions of GPRAMA apply specifically to these agencies, including that the Performance Improvement Council (PIC) include them as members. We focused our examination of performance management leadership on the management roles that have specific responsibilities under GPRAMA and related OMB guidance, with the exception of agency head. 
These are the chief operating officer (COO), performance improvement officer (PIO), deputy PIO, priority goal leader, and deputy goal leader. In looking at the PIC, we evaluated its role in facilitating the exchange of best practices and improving agency program management and performance. GPRAMA and related OMB guidance charge the PIC with performing several additional functions, such as supporting OMB in implementing requirements related to federal government priority goals, also referred to as cross-agency priority goals. We did not include these other functions in our review. To address both objectives, we conducted a survey of PIOs at the 24 CFO Act agencies. Through our survey, we collected information regarding PIOs’ and other key officials’ characteristics, their involvement in performance management under GPRAMA, and PIO and agency participation in the PIC. Appendix IV presents the survey questions we asked, and summarizes the responses we received. We received responses from all 24 PIOs (a 100 percent response rate). Selected results from our survey were also reported in another GAO report that focused on quarterly performance reviews under GPRAMA. We administered the web-based survey from October 18, 2012, to December 14, 2012. Respondents were sent an e-mail invitation to complete the survey on a GAO web server using a unique username and password. During the data collection period, we sent reminder e-mails and made phone calls to nonresponding agencies. Because this was not a sample survey, it has no sampling errors. The practical difficulties of conducting any survey may also introduce nonsampling errors, such as difficulties interpreting a particular question, which can add unwanted variability to the survey results. We took steps to minimize nonsampling errors by pretesting the questionnaire in person with PIOs and deputy PIOs at three different agencies.
We conducted these pretests to make sure that the questions were clear and unbiased, the data and information were readily obtainable, and that the questionnaire did not place an undue burden on respondents. Additionally, a senior methodologist within our office independently reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the questionnaire after the pretests and independent review. All data analysis programs used to generate survey results were independently verified for accuracy. Additionally, in reviewing the answers from agencies, we confirmed that PIOs had correctly bypassed inapplicable questions (skip patterns). Based on our findings, we determined that the survey data were sufficiently reliable for the purposes of this report. In addition, in order to understand GPRAMA implementation in more detail and put survey results in context for both objectives, we conducted in-depth studies of two agencies’ implementation of performance management leadership roles under GPRAMA and participation in the PIC—the Department of Health and Human Services (HHS) and the National Science Foundation (NSF). We selected these two agencies because they have differing characteristics that may affect implementation, such as agency size and the career status of the official in the PIO role. HHS is a relatively large agency when ranked according to annual budget and number of staff, while NSF is a relatively small agency. Additionally, HHS’s PIO is a political appointee, while NSF’s PIO is a career civil servant. In making our selection, we excluded agencies with certain characteristics, including those with a PIO who was relatively new to the role at the time of our survey and agencies that had been the subject of recent case studies on performance management by us or other organizations. We conducted interviews with both selected agencies’ COOs, PIOs, and deputy PIOs.
In addition, in order to understand the priority goal leader role and its contributions to performance management, we selected three of HHS’s six priority goals and two of NSF’s three priority goals and interviewed the responsible goal leaders. We selected these goals on the basis of several characteristics that may affect their management. These include: (1) number of priority goal leaders—we selected some goals with one leader and some with multiple leaders; (2) number of agency components involved in the goal—we selected goals with varying numbers of components and other stakeholders involved; (3) type of goal—we selected some process goals and some outcome goals; and (4) relationship to cross-agency priority goals—we included one goal in our set that relates to a cross-agency priority goal. The three goals we selected for HHS were: (1) improve the quality of early childhood education; (2) improve patient safety; and (3) reduce cigarette smoking. The two goals we selected for NSF were: (1) develop a diverse and highly qualified science and technology workforce; and (2) increase opportunities for research and education through public access to high-value digital products of NSF-funded research. For priority goals to which two leaders were assigned, we interviewed one of the responsible leaders. In several cases, deputy/lieutenant goal leaders also attended the interviews. We also addressed our first objective by reviewing GPRAMA and OMB guidance related to the key management roles. We reviewed information provided to us by OMB on the officials in the roles, along with information on them that is publicly available through OMB’s performance.gov website and agency websites. We also interviewed OMB staff and officials at OPM about their work under GPRAMA, including their work with agencies in implementing GPRAMA.
To understand agencies' perspectives and experiences in implementing the performance management leadership roles under GPRAMA, we analyzed relevant results from our survey of PIOs and included related questions in our interviews with officials at HHS and NSF. We also obtained relevant documentation, such as meeting agendas, from these two agencies. To address our second objective, we reviewed GPRAMA and related OMB guidance on the PIC. We analyzed PIC meeting agendas from February 2009 through September 2012, and we observed part of the September 12, 2012, PIC meeting. We also reviewed documents related to the PIC and its working groups and website, and interviewed OMB and PIC staff. We also interviewed OPM officials about their work with the PIC. To understand agencies' participation in and use of the PIC, we analyzed relevant results from our survey of PIOs and included related questions in our interviews with officials at HHS and NSF. We also interviewed the PIC's Executive Director, officials from the Small Agency Council, and the chair of its Performance Improvement Committee, which interacts with the PIC. We conducted our work from May 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In a January 2012 memorandum, OPM's Director identified 15 competencies that are essential for performance management staff to have in order to perform their roles. The memorandum included the following definitions of each competency. Accountability - Holds self and others accountable for measurable, high-quality, timely, and cost-effective results. Determines objectives, sets priorities, and delegates work.
Accepts responsibility for mistakes. Complies with established control systems and rules. Attention to Detail - Is thorough when performing work and conscientious about attending to detail. Customer Service - Works with clients and customers (that is, any individuals who use or receive the services or products that your work unit produces, including the general public, individuals who work in the agency, other agencies, or organizations outside the Government) to assess their needs, provide information or assistance, resolve their problems, or satisfy their expectations; knows about available products and services; is committed to providing quality products and services. Influencing/Negotiating - Persuades others; builds consensus through give and take; gains cooperation from others to obtain information and accomplish goals. Information Management - Identifies a need for and knows where or how to gather information; organizes and maintains information or information management systems. Oral Communication - Expresses information (for example, ideas or facts) to individuals or groups effectively, taking into account the audience and nature of the information (for example, technical, sensitive, controversial); makes clear and convincing oral presentations; listens to others, attends to nonverbal cues, and responds appropriately. Organizational Awareness - Knows the organization’s mission and functions, and how its social, political, and technological systems work and operates effectively within them; this includes the programs, policies, procedures, rules, and regulations of the organization. Organizational Performance Analysis - Knowledge of the methods, techniques, and tools used to analyze program, organizational, and mission performance; includes methods that deliver key performance information (for example, comparative, trend, diagnostic, root cause, predictive) used to inform decisions, actions, communications, and accountability systems. 
Partnering - Develops networks and builds alliances; collaborates across boundaries to build strategic relationships and achieve common goals.
Performance Measurement - Knowledge of the principles and methods for evaluating program or organizational performance using financial and nonfinancial measures, including identification of evaluation factors (for example, workload, personnel requirements), metrics, and outcomes.
Planning and Evaluating - Organizes work, sets priorities, and determines resource requirements; determines short- or long-term goals and strategies to achieve them; coordinates with other organizations or parts of the organization to accomplish goals; monitors progress and evaluates outcomes.
Problem Solving - Identifies and analyzes problems; weighs relevance and accuracy of information; generates and evaluates alternative solutions; makes recommendations.
Reasoning - Identifies rules, principles, or relationships that explain facts, data, or other information; analyzes information and makes correct inferences or draws accurate conclusions.
Technical Competence - Uses knowledge that is acquired through formal training or extensive on-the-job experience to perform one's job; works with, understands, and evaluates technical information related to the job; advises others on technical issues.
Written Communication - Writes in a clear, concise, organized, and convincing manner for the intended audience.
Please note: Provide what you personally believe is the most correct answer to each question, even if it is different from the opinions of others at your agency.
1. Besides Performance Improvement Officer (PIO), what other title(s), if any, do you have (e.g., CFO)? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
2. When did you start serving in the position(s) you identified in Question 1?
3. Which of the following best describes your hiring status?
4. Who do you report to in the role(s) you identified in Question 1? (check all that apply) Number of respondents: 24
4a. If other, please specify: Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
5. When did you start serving as PIO at your agency?
6. Who do you report to in your role as PIO? (check all that apply) 1. Chief Operating Officer (i.e., Deputy Secretary or equivalent)
6a. If other, please specify: Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
7. How many PIOs has your agency had (including yourself) since GPRAMA was enacted in January 2011?
8. On average, how much time per month do you spend performing duties related to your role as PIO?
If other factors specified in question 10, what was each additional factor? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
11. To what extent, if at all, are PIO responsibilities considered in your annual performance expectations and appraisals? Not applicable - no annual performance expectations or appraisals: 1
12. What suggestions, if any, do you have for improving the effectiveness of the PIO role? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
Deputy Performance Improvement Officer(s)
13. Does your agency have a Deputy Performance Improvement Officer(s)? Yes - we have 2 Deputy Performance Improvement Officers: 2
14. When was the Deputy Performance Improvement Officer position created at your agency?
15. How many Deputy PIOs has your agency had (including the current person in that position) since GPRAMA was enacted in January 2011? Number of respondents: 22
16. What other title(s), if any, does each Deputy Performance Improvement Officer have? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
17. Who does each Deputy Performance Improvement Officer report to in the role(s) identified in question 16? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
18. On average, how much time per month does each Deputy PIO spend performing duties related to his/her role as PIO? Number of DPIOs listed by agencies: 24
19. When did the Deputy Performance Improvement Officer start in his/her position? Number of DPIOs listed by agencies: 24
20. To what extent, if at all, are Deputy PIO responsibilities specifically considered in his/her annual performance expectations and appraisals? Not applicable - no annual performance expectations or appraisals
21. What suggestions, if any, do you have for improving the effectiveness of the Deputy PIO role? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
See Appendix IV for explanations of competencies. Number of respondents: 24
23. What are your plans, if any, to strengthen staff competencies? Not applicable - competencies are not sufficiently available, but no action is planned: 1
23a. If other, please specify: Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
GPRAMA provides senior agency officials with specific duties and responsibilities related to performance management and achievement of performance goals. The key agency officials identified in the Act are the Agency Head; the Chief Operating Officer (COO), who is the Deputy Secretary or equivalent position; the Performance Improvement Officer (PIO); and the Goal Leader.
Later guidance from the Office of Management and Budget (OMB) directs agencies that have a political appointee serving as the PIO to appoint a career senior executive to serve as Deputy PIO. The following questions address these roles and their responsibilities at your agency.
24. How much involvement, if any, does each of the following officials have in strategic and performance planning and goal setting? No opinion: 2; Not applicable: 0. No opinion: 0; Not applicable: 0.
25. How much involvement, if any, does each of the following officials have in performance measurement and analysis? No opinion: 2; Not applicable: 0. No opinion: 0; Not applicable: 0.
26. How much involvement, if any, does each of the following officials have in communicating agency progress toward goals, both internally and externally?
Beginning in June 2011, GPRAMA required agencies to review progress toward their priority goals on at least a quarterly basis. The quarterly performance reviews are to involve key leadership and other relevant parties and should, at minimum, focus on the agency's priority goals. They are to include reviewing progress and trends, coordinating within and outside the agency, assessing the contributions of agency activities and policies to goals, categorizing goals by risk, and identifying strategies for improvement. When we refer to "quarterly performance reviews" in the following questions, we refer to all aspects of the regularly scheduled reviews required under GPRAMA, including preparation, review, and follow-up. Some agencies refer to these reviews as "stat" reviews. Additionally, although we refer to them as "quarterly performance reviews," agencies may conduct these reviews on a regularly occurring basis more frequently than quarterly.
27. Does your agency conduct GPRAMA-required quarterly performance reviews?
28. Did your agency conduct quarterly performance reviews (or similar reviews) before the GPRAMA requirement took effect in June 2011?
29. When did your agency begin conducting quarterly reviews? Two PIOs did not respond to this question.
30. How does your agency conduct its quarterly performance reviews?
31. How often does your agency conduct performance reviews (although GPRAMA requires quarterly reviews, some agencies have established other review cycles to meet their management needs)? Number of respondents: 24
31a. How often? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
32. How much involvement, if any, does each of the following officials have in your agency's quarterly performance reviews? No opinion: 4; Not applicable: 0.
33. For each of the following officials, has their involvement in agency performance management increased, remained about the same, or decreased as a result of your agency's quarterly performance reviews?
GPRAMA establishes in law the Performance Improvement Council (PIC). The PIC is an interagency council made up of agency PIOs that is charged with assisting OMB with topics related to GPRAMA and facilitating the exchange of useful practices among agencies.
36. How often do you attend the every-other-month Performance Improvement Council meetings that are for PIOs only (not deputies or staff)? Rarely or never attend meetings: 2
37. How often do you attend the every-other-month Performance Improvement Council meetings that are open to PIOs, Deputy PIOs, and staff? Rarely or never attend meetings: 12
38. How often does a Deputy PIO or another representative(s) from your agency attend the every-other-month Performance Improvement Council meetings that are open to PIOs, Deputy PIOs, and staff? Rarely or never attend meetings: 1
39. Which of the following working groups of the Performance Improvement Council do/did you actively participate in personally as the PIO? (check all that apply) Number of respondents: 24. 3. Internal Agency Reviews working group. 6. I do not personally participate in any working groups.
39a. If other working group, please specify: Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
40. In which of the following working groups of the Performance Improvement Council do/did other representatives from your agency actively participate? (check all that apply)
If other aspect(s) specified in question 41, what was each additional aspect? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
42. To what extent are you able to apply successful practices and other information and tools shared by the PIC to your agency's performance management?
42a. Please provide an example or examples of case(s) in which you have applied successful practices and other information and tools shared by the PIC in your agency. Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
43. What suggestions do you have for improving the effectiveness of the PIC, if any? Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
44. Please provide any comments that would expand upon your responses to any of the questions in the survey. Data for this question is intentionally not reported because it is difficult to summarize and/or could identify respondents.
In addition to the contact named above, Sarah Veale, Assistant Director, and Kathleen Padulchick, Analyst-in-Charge, supervised the development of this report. Virginia Chanley, Lois Hanshaw, Linda Kohn, Jill Lacey, Albert Sim, and Meredith Trauner made significant contributions to all aspects of this report.
The performance of federal agencies is central to delivering meaningful results to the American public. GPRAMA, along with related guidance, assigned responsibilities for managing performance to key officials. It also provided a statutory basis for the existing PIC, a council made up of agency PIOs that is tasked with assisting OMB with topics related to GPRAMA. GPRAMA directed GAO to report on the act's implementation. This report, one of a series under that mandate, (1) examines the status of federal agencies' implementation of the performance management leadership roles under GPRAMA and (2) evaluates the role of the PIC in facilitating the exchange of best practices and improving agency program management and performance. To address both objectives, GAO conducted a survey of PIOs at all 24 CFO Act federal agencies, as well as in-depth case studies of HHS and NSF, which were selected because they have differing characteristics such as size. GAO also interviewed and obtained documents from OMB staff and OPM officials. The designation of senior-level officials to key performance management roles with responsibilities under the Government Performance and Results Act Modernization Act of 2010 (GPRAMA) has helped elevate accountability for performance management within federal agencies and ensure high-level involvement, according to officials GAO interviewed. The 24 Chief Financial Officers (CFO) Act agencies have all assigned officials to the key management roles--chief operating officer, performance improvement officer (PIO), and goal leader--required under GPRAMA, according to the Office of Management and Budget (OMB) and the results of GAO's PIO survey. PIOs GAO surveyed reported that most key officials were greatly involved in central aspects of performance management, such as agency quarterly performance reviews.
PIOs GAO surveyed, and priority goal leaders GAO interviewed at the Department of Health and Human Services (HHS) and the National Science Foundation (NSF), reported they were supported in their responsibilities by their deputies and other staff. PIOs generally reported that their staff had competencies identified as relevant by the Office of Personnel Management (OPM), such as reasoning, to a large extent, although they reported that some competencies were less widespread among their staff than others. OPM has taken steps to work with agencies to incorporate performance management staff competencies into training. For example, OPM is working with the Performance Improvement Council (PIC) to develop a website that will include such training. However, at this time, it does not plan to assess competency gaps among agency performance management staff to inform its work. Without this information, it will be difficult for OPM, working with the PIC, to focus on the most-needed resources and facilitate their use by other agencies. PIOs generally found that sharing of best practices and development of tips and tools are the most helpful aspects of the PIC, and reported strong agency attendance at meetings and participation in working groups. However, the PIC has not regularly collected member feedback about its performance. Additionally, although the PIC has a strategic plan in place, it has not updated it since GPRAMA was enacted. Routine member feedback and an updated strategic plan that reflects changes required by GPRAMA could help increase the PIC's effectiveness. Without these assessment tools, the PIC lacks an important basis and means for directing and evaluating its performance. GAO recommends that the Director of OPM work with the PIC to identify competency gaps for agency performance management staff and use this information to identify and share relevant agency training.
GAO also recommends that the Director of OMB work with the PIC to gather regular feedback from members on its performance and update its strategic plan. OPM and OMB staff agreed with these recommendations.
Conducting the decennial census is a major undertaking involving many interrelated steps including identifying and correcting addresses for all known living quarters in the United States (known as “address canvassing”); sending questionnaires to housing units; following up with nonrespondents through personal interviews; identifying people with nontraditional living arrangements; managing a voluminous workforce responsible for follow-up activities; collecting census data by means of questionnaires, calls, and personal interviews; tabulating and summarizing census data; and disseminating census analytical results to the public. The Bureau estimates that it will spend about $3 billion on automation and IT for the 2010 Census, including four major systems acquisitions that are expected to play a critical role in improving coverage, accuracy, and efficiency. Figure 1 shows the key systems and interfaces supporting the 2010 Census, and highlights the four major IT systems we discuss today. As the figure shows, these four systems are to play important roles with regard to different aspects of the process. To establish where to count (as shown in the top section of fig. 1), the Bureau will depend heavily on a database that provides address lists, maps, and other geographic support services. The Bureau’s address list, known as the Master Address File (MAF), is associated with a geographic information system containing street maps known as the Topologically Integrated Geographic Encoding and Referencing (TIGER®) database. The MAF/TIGER database is the object of the first major IT acquisition— the MAF/TIGER Accuracy Improvement Project (MTAIP). To collect respondent information (a process depicted in the middle section of fig. 1), the Bureau is pursuing two initiatives. 
First, the Field Data Collection Automation (FDCA) program is expected to provide automation support for field data collection operations as well as reduce costs and improve data quality and operational efficiency. This acquisition includes the systems, equipment, and infrastructure that field staff will use to collect census data, such as handheld mobile computing devices. Second, the Decennial Response Integration System (DRIS) is to provide a system for collecting and integrating census responses from all sources, including forms, telephone interviews, and handheld mobile computing devices in the field. DRIS is expected to improve accuracy and timeliness by standardizing the response data and providing it to other Bureau systems for analysis and processing. To provide results (see the bottom section of fig. 1), the Data Access and Dissemination System II (DADS II) acquisition is to replace legacy systems for tabulating and publicly disseminating data. The DADS II program is expected to provide comprehensive support to DADS. Replacement of the legacy systems is expected to maximize the efficiency, timeliness, and accuracy of tabulation and dissemination products and services; minimize the cost of tabulation and dissemination; and increase user satisfaction with related services. Table 1 provides a brief overview of the four acquisitions. Responsibility for these acquisitions lies with the Bureau’s Decennial Management Division and the Geography Division. Each of the four acquisitions is managed by an individual project team staffed by Bureau personnel. Additional information on the contracts for these four systems is provided in appendix I of the report. In preparation for the 2010 Census, the Bureau plans a series of tests of its (new and existing) operations and systems in different environments, as well as to conduct what it refers to as the Dress Rehearsal. 
During the Dress Rehearsal period, which runs from February 2006 through June 2009, the Bureau plans to conduct development and testing of systems, run a mock Census Day, and prepare for Census 2010, which will include opening offices and hiring staff. These Dress Rehearsal activities are to provide an operational test of the available system functionality in a census-like environment, along with tests of other operational and procedural activities. As of October 2007, three key decennial systems acquisitions were in process and a fourth contract had recently been awarded. The three ongoing acquisitions (MTAIP, FDCA, and DRIS) showed mixed progress in providing deliverables while adhering to planned schedules and cost estimates. Two of the ongoing projects (FDCA and DRIS) had experienced schedule delays; the date for awarding the fourth contract (DADS II) was postponed several times. In addition, we estimated that one of the ongoing projects (FDCA) will incur about $18 million in cost overruns. In response to schedule delays as well as other factors, including cost, the Bureau made schedule adjustments and planned to delay certain system functionality. As a result, Dress Rehearsal operational testing will not address the full complement of systems and functionality that was originally planned, and the Bureau has not yet finalized its plans for further system tests. Delaying functionality increases the importance of operational testing after the Dress Rehearsal to ensure that the decennial systems work as intended. MTAIP is a project to improve the accuracy of the MAF/TIGER database, which contains information on street locations, housing units, rivers, railroads, and other geographic features. We reported that MTAIP was on schedule to complete improvements by the end of fiscal year 2008 and was meeting cost estimates. As of October 2007, the acquisition was in the second and final phase of its life cycle.
In Phase II, which began in January 2003 and is ongoing, the contractor is developing improved maps for all 3,037 counties in the United States. We reported that the contractor had delivered more than 75 percent of these maps, which are due by September 2008. Maintenance under the contract is to begin in fiscal year 2008, and contract closeout activities are scheduled for fiscal year 2009. FDCA is to provide the systems, equipment, and infrastructure that field staff will use to collect census data. At the peak of the 2010 Census, about 4,000 field operations supervisors, 40,000 crew leaders, 500,000 enumerators and address listers, and several thousand office employees are expected to use or access FDCA. As of October 2007, the contractor was in the process of developing and testing FDCA software for the Dress Rehearsal Census Day, and had delivered 1,388 handheld mobile computing devices to be used in address canvassing for the Dress Rehearsal. Also, key FDCA support infrastructure had been installed, including the Security Operation Center. In future contract phases, the project will continue development, deploy systems and hardware, support census operations, and perform operational and contract closeout activities. However, the Bureau revised FDCA's original schedule and delayed or eliminated some of its key functionality from the Dress Rehearsal, including the automated software distribution system. According to the Bureau, it revised the schedule because it realized it had underestimated the costs for the early stages of the contract, and that it could not meet the contractor's estimated level of first-year funding because the fiscal year 2006 budget was already in place. The Bureau stated that this initial underestimate led to schedule changes and overall cost increases, but that FDCA was meeting all planned milestones on the revised schedule.
For example, all sites for Regional Census Centers and Puerto Rico Area Offices had been identified. According to the Bureau, it is on schedule to open all these offices in January 2008. The project life-cycle costs had increased. At contract award in March 2006, the total cost of FDCA was estimated not to exceed $596 million; the estimate subsequently rose to $624 million following the schedule revision. In May 2007, the life-cycle cost rose by a further $23 million because of increasing system requirements, resulting in an estimated life-cycle cost of about $647 million. Table 2 shows the life-cycle cost estimates for FDCA as of October 2007. In addition, FDCA had already experienced $6 million in cost overruns, and both our analysis and the contractor's analysis expected FDCA to experience additional cost overruns. Based on our analysis of cost performance reports (from July 2006 to May 2007), we projected that the FDCA project would experience further cost overruns by December 2008. We estimated the FDCA cost overrun at between $15 million and $19 million, with the most likely overrun being about $18 million. The contractor, in contrast, estimated about a $6 million overrun by December 2008. According to the contractor, the major cause of projected cost overruns was the system requirements definition process. For example, in December 2006, the contractor noted a significant increase in the requirements for the Dress Rehearsal Paper Based Operations in Execution Period 1. According to the cost performance reports, this increase has meant that more work must be conducted and more staffing assigned to meet the Dress Rehearsal schedule. The Bureau agreed that cost increases occurred in some cases because of the addition of new requirements, most of which related to the security of IT systems, but added that in other cases, increases occurred from the process of the contractor converting high-level functional requirements into more detailed specific requirements.
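Projections of this kind are commonly derived from the earned value data in monthly cost performance reports. The sketch below is an illustrative, hypothetical example of a cost-performance-index-based estimate at completion, not the actual model underlying our $15 million to $19 million range; the figures are invented.

```python
# Illustrative earned value projection (hypothetical figures, in $ millions).
# CPI below 1.0 means the project is getting less value per dollar spent,
# so the remaining work is projected to cost proportionally more.

def cost_performance_index(earned_value, actual_cost):
    """CPI = EV / AC; values below 1.0 indicate a cost overrun to date."""
    return earned_value / actual_cost

def estimate_at_completion(budget_at_completion, earned_value, actual_cost):
    """CPI-based EAC: cost to date plus remaining work at the current efficiency."""
    cpi = cost_performance_index(earned_value, actual_cost)
    return actual_cost + (budget_at_completion - earned_value) / cpi

bac, ev, ac = 200.0, 80.0, 88.0   # hypothetical budget, earned value, actual cost
eac = estimate_at_completion(bac, ev, ac)
print(f"CPI: {cost_performance_index(ev, ac):.2f}")          # 0.91
print(f"Projected EAC: ${eac:.0f}M, overrun: ${eac - bac:.0f}M")  # $220M, $20M
```

In practice, analysts bound such projections with alternative efficiency assumptions (for example, a combined cost-schedule index), which is one way a range like $15 million to $19 million can arise.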
However, the process of developing detailed requirements from high-level functional requirements does not inevitably lead to cost increases if the functional requirements were initially well-defined. The FDCA schedule changes have increased the likelihood that the systems testing at the Dress Rehearsal will not be as comprehensive as planned. The inability to perform comprehensive operational testing of all interrelated systems increases the risk that further cost overruns will occur and that decennial systems will experience performance shortfalls. DRIS is to provide a system for collecting and integrating census responses, standardizing the response data, and providing it to other systems for analysis and processing. The DRIS functionality is critical for providing assistance to the public via telephone and for monitoring the quality and status of data capture operations. Although DRIS was currently on schedule to meet its December 2007 milestone, the Bureau revised the original DRIS schedule after the contract was awarded in October 2005. Under the revised schedule, the Bureau delayed or eliminated some functionality that was expected to be ready for the Dress Rehearsal mock Census Day. According to Bureau officials, they delayed the schedule and eliminated functionality for DRIS when they realized they had underestimated the fiscal years 2006 through 2008 costs for development. As shown in table 3, the government’s funding estimates for DRIS Phase I were significantly lower than the contractor’s. Originally, the DRIS solution was to include paper, telephone, Internet, and field data collection processing; selection of data capture sites; and preparation and processing of 2010 Census forms. However, the Bureau reduced the scope of the solution by eliminating the Internet functionality. In addition, the Bureau has stated that it will not have a robust telephone questionnaire assistance system in place for the Dress Rehearsal. 
As of October 2007, the Bureau was also delaying selecting sites for data capture centers, preparing data capture facilities, and recruiting and hiring data capture staff. Although Bureau officials told us that the revisions to the schedule should not affect meeting milestones for the 2010 Census, the delays mean that more systems development and testing will need to be accomplished later. Given the immovable deadline of the decennial census, the Bureau is at risk of reducing functionality or increasing costs to meet its schedule. The DRIS project was not experiencing cost overruns, and our analysis of cost performance reports from April 2006 to May 2007 projected no cost overruns by December 2008. As of May 2007, the DRIS contract value had not increased. The DADS II acquisition is to replace the legacy DADS systems, which tabulate and publicly disseminate data from the decennial census and other Bureau surveys. The DADS II contractor is also expected to provide comprehensive support to the Census 2000 legacy DADS systems. The DADS II contract award date had been delayed multiple times. The award date was originally planned for the fourth quarter of 2005, but the date changed to August 2006. On March 8, 2006, the Bureau estimated it would delay the award of the DADS II contract from August to October 2006 to gain a clearer sense of budget priorities before initiating the request for proposal process. The Bureau then delayed the contract award again by about another year. In January 2007, the Bureau released the DADS II request for proposal, and the contract was finally awarded in September 2007. Because of these delays, DADS II will not be developed in time for the Dress Rehearsal. Instead, the Bureau will use the legacy DADS system for tabulation during the Dress Rehearsal. Nonetheless, the Bureau plans to have the DADS II system available for the 2010 Census. Operational testing helps verify that systems function as intended in an operational environment. 
However, for operational system testing to be comprehensive, system functionality must be completed. Further, for multiple interrelated systems, end-to-end testing verifies that all interrelated systems, including any external systems with which they interface, function as intended in an operational environment. As described above, however, two of the projects had delayed planned functionality to later phases, and one project’s contract had been awarded only recently, in September 2007. As a result, the operational testing that is to occur during the Dress Rehearsal period around April 1, 2008, will not include tests of the full complement of decennial census systems and their functionality. As of October 2007, the Bureau had not yet finalized its plans for system tests. If further delays occur, the importance of these system tests will increase. Delaying functionality and not testing the full complement of systems increases the risk that costs will rise further, that decennial systems will not perform as expected, or both. The project teams varied in the extent to which they followed disciplined risk management practices. For example, three of the four project teams had developed strategies to identify the scope of the risk management effort. However, three project teams had weaknesses in identifying risks, establishing adequate mitigation plans, and reporting risk status to executive-level officials. These weaknesses in completing key risk management activities can be attributed in part to the absence of Bureau policies for managing major acquisitions, as we described in an earlier report. Without effective risk management practices, the likelihood of project success is decreased. According to the Software Engineering Institute (SEI), the purpose of risk management is to identify potential problems before they occur.
When problems are identified, risk-handling activities can be planned and invoked as needed across the life of a project in order to mitigate adverse impacts on objectives. Effective risk management involves early and aggressive risk identification through the collaboration and involvement of relevant stakeholders. Based on SEI’s Capability Maturity Model® Integration (CMMI®), risk management activities can be divided into four key areas: preparing for risk management, identifying and analyzing risks, mitigating risks, and executive oversight. The discipline of risk management is important to help ensure that projects are delivered on time, within budget, and with the promised functionality. It is especially important for the 2010 Census, given the immovable deadline. Risk preparation involves establishing and maintaining a strategy for identifying, analyzing, and mitigating risks. The risk management strategy addresses the specific actions and management approach used to perform and control the risk management program. It also includes identifying and involving relevant stakeholders in the risk management process. Table 4 shows the status of the four project teams’ implementation of key risk preparation activities as of October 2007. As the table shows, three project teams had established most of the risk management preparation activities. However, the MTAIP project team had implemented the fewest practices. The team did not adequately determine risk sources and categories or adequately develop a strategy for risk management. As a result, the project’s risk management strategy was not comprehensive and did not fully address the scope of the risk management effort, including discussing techniques for risk mitigation and defining adequate risk sources and categories. In addition, three project teams (MTAIP, FDCA, and DADS II) had weaknesses regarding stakeholder involvement.
The three teams did not provide sufficient evidence that the relevant stakeholders were involved in risk identification, analysis, and mitigation activities; in reviewing the risk management strategy and risk mitigation plans; or in communicating and reporting risk management status. These weaknesses can be attributed in part to the absence of Bureau policies for managing major acquisitions, as we described in our earlier reports. Without adequate preparation for risk management, including establishing an effective risk management strategy and identifying and involving relevant stakeholders, project teams cannot properly control the risk management process. Risks must be identified and described in an understandable way before they can be analyzed and managed properly. This includes identifying risks from both internal and external sources and evaluating each risk to determine its likelihood and consequences. Table 5 shows the status of the four project teams’ implementation of key risk identification and evaluation activities at the time of our October 2007 report. As of July 2007, the MTAIP and DRIS project teams were adequately identifying and documenting risks, including system interface risks. For example, the MTAIP project team identified significant risks regarding potential changes in funding and the turnover of contractor personnel as the program nears maturity, and the DRIS project team identified significant risks regarding new system security regulations, changes or increases to Phase II baseline requirements, and new interfaces after the Dress Rehearsal. In contrast, the FDCA project team had not identified or documented any significant risks related to the handheld computers that will be used in the 2010 Census, despite problems arising during the Dress Rehearsal.
The computers are designed to automate operations for field staff and eliminate the need to print millions of paper questionnaires and maps used by temporary field staff to conduct address canvassing and nonresponse follow-up. Automating operations may allow the Bureau to reduce the cost of operations; thus, it is critical that the risks surrounding the use of the handheld computers be closely monitored and effectively managed to ensure their success. However, the Bureau has not identified or documented risks associated with a variety of handheld computer performance problems that we identified through field work conducted at your request. Specifically, we found that during Dress Rehearsal activities between May 2007 and June 2007, as the Bureau tested a prototype of the handheld computers, field staff experienced multiple problems. For example, the field staff told us that they experienced slow and inconsistent data transmissions from the handheld computers to the central data processing center. The field staff reported the device was slow to process addresses that were part of a large assignment area. Bureau staff reported similar problems with the handheld computers in observation reports, help desk calls, and debriefing reports. In addition, our own analysis of Bureau documentation revealed problems with the handheld computers: Bureau observation reports revealed that the Bureau most frequently observed problems with slow processing of addresses, large assignment areas, and transmission. The help desk call log revealed that field staff most frequently reported issues with transmission, the device freezing, mapspotting, and assignment areas. Debriefing reports illustrated the impact of the handheld mobile computing problems on address canvassing. For example, one participant commented that the field staff struggled to find solutions to problems and wasted precious time in replacing the devices.
A time-and-motion study conducted by the Census Bureau indicated that field staff reported significant downtime in two test locations—about 23 percent in one location and about 27 percent in the other. The study, which is a draft that is subject to change, also described occurrences of failed transmissions and field staff attempts to resolve transmission problems. Collectively, the observation reports, help desk calls, debriefing reports, and time-and-motion study raised serious questions about the performance of the handheld computers during the address canvassing operation. According to the Bureau, the contractor has used these indicators to identify and address underlying problems during the Dress Rehearsal. Still, the magnitude of handheld computer performance issues throughout the Dress Rehearsal remains unclear. For example, the Bureau received analyses from the contractor on average transmission times. However, the contractor has not provided analyses that show the full range of transmission times or how these times may have changed over the course of the operation. In addition, the Bureau has not fully specified how it will measure performance of the handheld computers, even though the FDCA contract anticipates the Bureau’s need for data on the performance of the handheld computers. The FDCA contract outlines the type of data the contractor will provide the Bureau on the performance of the handheld computers. Specifically, sections of the FDCA contract require the handheld computers to have a transmission log recording what was transmitted, the date, time, user, destination, content/data type, and the outcome status. Another section of the Bureau’s FDCA contract states that the FDCA contractor shall provide near real time reporting and monitoring of performance metrics and a “control panel/dash board” application to visually report those metrics from any Internet-enabled PC.
However, the contractor and the Bureau are not using a dashboard for Dress Rehearsal activities. Rather, during the Dress Rehearsal, the Bureau plans to identify what data and performance measures it will need to track the performance of the handheld computers in 2010 operations. In order for the Bureau to ensure that the FDCA handheld computers are ready for full-scale operations, it will have to identify risks on a tight time frame. We recommended in a report on the Bureau’s earlier version of the handheld computers that the Bureau define specific, measurable performance requirements for the handheld computer and other census-taking activities that address such important measures as productivity, cost savings, reliability, and durability, and that the Bureau test the device’s ability to meet those requirements in 2006. We also recommended in a March 2006 testimony that the Bureau validate and approve FDCA baseline requirements. The Bureau is working within a compressed time frame. By law, the decennial census must occur on April 1, 2010, and the results must be submitted to the President in December 2010. These dates cannot be altered, even if preparations are delayed. Access to real-time performance metrics via a “control panel/dash board” would assist Bureau management in assessing the handheld computer’s performance and maximize the amount of time the Bureau and the contractor would have to remedy any problems identified during operations. Further, the Bureau’s tight 2010 Decennial Operations Schedule allows little time for fixing problems with the device, raising the importance of the Bureau’s access to these performance indicators. Such data would help fully inform stakeholders of the risks associated with the handheld computer and allow project teams to develop mitigation activities to help avoid, reduce, and control the probability of these risks occurring.
Finally, the FDCA and DADS II project teams did not provide evidence that specific system interface risks are being adequately identified to ensure that risk-handling activities will be invoked should the systems fail during the 2010 Census. For example, although DADS II will not be available for the Dress Rehearsal, the project team did not identify any significant interface risks associated with this system. One reason for these weaknesses, as mentioned earlier, is the lack of Bureau policies for managing major acquisitions. If risks are not adequately identified and analyzed, management may be unable to monitor and track risks and take appropriate mitigation actions, increasing the probability that the risks will materialize and magnifying the extent of damage incurred in such an event. Risk mitigation involves developing alternative courses of action, workarounds, and fallback positions, with a recommended course of action for the most important risks to the project. Mitigation includes techniques and methods used to avoid, reduce, and control the probability of occurrence of the risk; the extent of damage incurred should the risk occur; or both. Table 6 shows the status of the four project teams’ implementation of key risk mitigation activities. Three project teams (MTAIP, FDCA, and DADS II) had developed mitigation plans that were often untimely or included incomplete activities and milestones for addressing the risks. Some of these untimely and incomplete activities and milestones included the following: The FDCA project team had developed mitigation plans for the most significant risks, but the plans did not always identify milestones for implementing mitigation activities. Moreover, the plans did not identify any commitment of resources, several did not establish a period of performance, and the team did not always update the plans with the latest information on the status of the risk.
In addition, the FDCA project team did not provide evidence of developing mitigation plans to handle the other significant risks described in its risk mitigation strategy. (These risks included a lack of consistency in requirements definition and insufficient FDCA project office staffing levels.) The mitigation plans for DADS II were incomplete, with no associated future milestones and no evidence of continual progress toward mitigating a risk. In several instances, DADS II mitigation plans were listed as “To Be Determined.” With regard to the second practice in the table (periodically monitoring risk status and implementing mitigation plans), the MTAIP, FDCA, and DADS II project teams were not always implementing the mitigation plans as appropriate. For example, although the MTAIP project team has periodically monitored the status of risks, its mitigation plans do not include detailed action items with start dates and anticipated completion dates; thus, the plans do not ensure that mitigation activities are implemented appropriately and tracked to closure. The FDCA and DADS II project teams did not identify system interface risks or prepare adequate mitigation plans to ensure that systems will operate as intended. Because they did not develop complete mitigation plans, the MTAIP, FDCA, and DADS II project teams cannot ensure that for a given risk, techniques and methods will be invoked to avoid, reduce, and control the probability of occurrence. Reviews of the project teams’ risk management activities, status, and results should be held on a periodic and event-driven basis. The reviews should include appropriate levels of management, such as key Bureau executives, who can provide visibility into the potential for project risk exposure and appropriate corrective actions. Table 7 shows the status of the four project teams’ implementation of activities for senior-level risk oversight at the time of our prior report.
The project teams were inconsistent in reporting the status of risks to executive-level officials. DRIS and DADS II did regularly report risks; however, the FDCA and MTAIP projects did not provide sufficient evidence to document that these discussions occurred or what they covered. Failure to report a project’s risks to executive-level officials reduces the visibility of risks to executives who should be playing a role in mitigating them. To help ensure that the Bureau’s four key acquisitions for the 2010 Census operate as intended, we made several recommendations in our report. First, to ensure that the Bureau’s decennial systems are fully tested, we recommended that the Secretary of Commerce require the Director of the Census Bureau to direct the Decennial Management Division and Geography Division to plan for and perform end-to-end testing so that the full complement of systems is tested in a census-like environment. In written comments on a draft of our final report, the department disagreed with our finding that a full complement of systems would not be tested, stating that it plans to do so during the Dress Rehearsal or later. Nonetheless, the Bureau’s test plans have not been finalized, and it remains unclear whether testing will address all interrelated systems and functionality in a census-like environment, as would be provided by end-to-end testing. Consistent with our recommendation, following up with documented test plans for end-to-end testing will help ensure that decennial systems will work as intended. Further, we recommended that the Secretary direct the Director of the Census Bureau to ensure that project teams strengthen risk management activities associated with risk identification, mitigation, and oversight. The department agreed to examine additional ways to manage risks and is working on an action plan to strengthen risk management activities.
In summary, the IT acquisitions planned for the 2010 Census will require continued oversight to ensure that they are achieved on schedule and at planned cost levels. Although, as of October 2007, the MTAIP and DRIS acquisitions were meeting cost estimates, FDCA was not. In addition, while the Bureau was making progress developing systems for the Dress Rehearsal, it was deferring certain functionality, with the result that the Dress Rehearsal operational testing would address less than a full complement of systems. Delaying functionality increases the importance of later development and testing activities, which will have to occur closer to the census date. It also raises the risk of cost increases, given the immovable deadline for conducting the 2010 Census. Further, the Bureau’s project teams for each of the four acquisitions had implemented many practices associated with establishing sound and capable risk management processes, but they were not always consistent: the teams had not always identified risks, developed complete risk mitigation plans, or briefed senior-level officials on risks and mitigation plans. At this stage, we are particularly concerned about managing the risks associated with the handheld mobile computing devices, the numerous systems interfaces, and the remaining systems testing. Regarding the handheld mobile computing devices, it is critical that the performance of these devices be clearly specified and measured, and that performance deficiencies be effectively addressed. Until the project teams and the Decennial Management Division implement appropriate risk management activities, they face an increased probability that decennial systems will not be delivered on schedule and within budget or perform as expected. Mr. Chairman and members of the subcommittee, this concludes our statement. We would be happy to respond to any questions that you or members of the subcommittee may have at this time.
If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or Mathew Scirè at (202) 512-6806 or by e-mail at pownerd@gao.gov or sciremj@gao.gov. Other key contributors to this testimony include Mathew Bader, Thomas Beall, Jeffrey DeMarco, Richard Hung, Barbara Lancaster, Andrea Levine, Signora May, Cynthia Scott, Niti Tandon, Amos Tevelow, Jonathan Ticehurst, and Timothy Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | For Census 2010, automation and information technology (IT) are expected to play a critical role. The Census Bureau plans to spend about $3 billion on automation and technology that are to improve the accuracy and efficiency of census collection, processing, and dissemination. From February 2006 through June 2009, the Bureau is holding a “Dress Rehearsal” during which it plans to conduct operational testing that includes decennial systems acquisitions. In October 2007, GAO reported on its review of four key 2010 Census IT acquisitions to (1) determine the status and plans, including schedule and cost, and (2) assess whether the Bureau is adequately managing associated risks. This testimony summarizes GAO’s report on these key acquisitions and describes GAO’s preliminary observations on the performance of handheld mobile computing devices used during the Dress Rehearsal. As of October 2007, three key systems acquisitions for the 2010 Census were in process, and a fourth contract had recently been awarded. The ongoing acquisitions showed mixed progress in meeting schedule and cost estimates.
Two of the projects were not on schedule. The fourth contract, originally scheduled for award in 2005, was not awarded until September 2007. In addition, one project had incurred cost overruns and increases to its projected life-cycle cost. As a result of the schedule changes, the full complement of systems and functionality that was originally planned will not be available for upcoming Dress Rehearsal operational testing. This limitation increases the importance of further system testing to ensure that the decennial systems work as intended. The Bureau's project teams for each of the four IT acquisitions had performed many practices associated with establishing sound and capable risk management processes, but critical weaknesses remained. Three project teams had developed a risk management strategy that identified the scope of the risk management effort. However, not all project teams had identified risks, established mitigation plans, or reported risks to executive-level officials. For example, one project team did not adequately identify risks associated with performance issues experienced by handheld mobile computing devices, even though Census field staff reported slow and inconsistent data transmissions with the device during the spring Dress Rehearsal operations. The magnitude of these difficulties is not clear, and the Bureau has not fully specified how it plans to measure the performance of the devices. Until the project teams implement key risk management activities, they face an increased probability that decennial systems will not be delivered on schedule and within budget or perform as expected. |
Arlington is distinct among national cemeteries in several respects. First, although all national cemeteries honor the service of and sacrifices made by members of the armed forces, significant national events—such as the burials of Unknown Soldiers and of prominent public figures such as John F. Kennedy—have identified Arlington as a place of special recognition. Second, almost all other national cemeteries are administered by the Department of Veterans Affairs (VA), but Arlington is administered by the Department of the Army. In addition, eligibility requirements for burial in Arlington are much more restrictive than the requirements of other national cemeteries. Requirements for burial in Arlington were identical or similar to those of other national cemeteries until 1967, when the Army imposed stricter standards to ensure that burial space would remain available at Arlington for many more years. Individuals who are eligible for burial at Arlington include service members who have died while on active duty; retired service members meeting certain qualifications; and holders of the nation’s highest military decorations, such as the Medal of Honor, Distinguished Service Cross, Distinguished Service Medal, Silver Star, or Purple Heart. (App. I provides a more detailed list of requirements for burial at Arlington and at other national cemeteries.) Arlington is expected to be full by 2025, given the expected burial rates, unless the cemetery is expanded. Since 1980, Arlington has offered inurnment of cremated remains in its columbarium complex, which currently contains about 20,000 niches, with an additional 30,200 niches either planned or under construction. Any honorably discharged veteran, as well as his or her spouse and dependent children, may be inurned in the columbarium.
The columbarium was intended as an effort to deal with the problem of limited burial space at Arlington and as an alternative for those who wish to be buried in the cemetery but do not meet its stringent requirements. As of December 1997, the remains of about 22,000 individuals had been inurned in about 19,500 of the columbarium’s niches. The Secretary of the Army is responsible for the development, operation, maintenance, and administration of Arlington and for forming plans, policies, procedures, and regulations pertaining to the cemetery. The Secretary has delegated the functions of Arlington burial policy formulation and oversight, including the responsibility for making recommendations to the Secretary on requests for waivers, to the Assistant Secretary of the Army for Manpower and Reserve Affairs. The superintendent of Arlington is the primary caretaker of the cemetery. This individual is responsible for its day-to-day operations, including arranging, coordinating, and scheduling funerals; maintaining good relations with and supplying information to the public; and obtaining or verifying relevant documents or data. The superintendent also makes recommendations on waiver requests. Given the nature and circumstances of burial requests, Army officials emphasized to us the urgency involved in responding to those requesting interment in Arlington. Therefore, these officials attempt to respond to requests for burial within 24 to 48 hours. Our review of Army files indicated that since 1967, 196 waivers for burial in Arlington have been granted, while at least 144 documented waiver requests have been denied. The rate at which waivers have been granted has increased steadily since 1967: about 17 percent of the 196 waivers were granted during the first 15 years that waiver decisions were being made, while 83 percent of these waivers were granted during the past 15 years. 
About 63 percent of the 196 waivers granted involved burial of an individual in the same grave site as someone already interred or expected to be interred. Also, about 42 percent of the total waivers were for individuals with military service. About 18 percent of waivers granted for burial in a new grave site were for individuals who did not have military service. (App. II provides additional data on waiver decisions.) Over the past 30 years, changes have occurred in the extent to which Presidents have chosen to be involved in waiver decisions. Before 1980, all waiver approvals were made by the President, but since then, 72 percent of the approvals have been made by the Secretary. Although the Secretary did not grant waivers from 1967 to 1979, he did deny at least 64 requests during that time. The Army’s philosophy toward waiver decisions has also evolved since 1967. While precisely reconstructing the basis for this evolution is difficult, our review of documents from the late 1960s and the 1970s and our discussions with a former superintendent of Arlington indicate that the Army had been very reluctant to approve waivers as a matter of policy. This is reflected in a 1969 memorandum from the Army Special Assistant (Civil Functions) to the Secretary that stated, “Since the restrictive eligibility regulations for Arlington were promulgated . . . we have received many requests for exceptions . . . . These requests have been uniformly denied and the regulation rigidly enforced since, if an exception is authorized in one case, it is impossible to deny it in others.” A 1971 memorandum from the Under Secretary of the Army to the Secretary states that “Although decisions . . . 
are difficult to make, in the long run it is equitable to all involved and prevents an early closing of the Cemetery.” The memorandum goes on to say that many waivers have been denied since 1967 and that “To change the rules at this time would raise havoc.” The former superintendent explained to us that, sometime around 1980, the White House expressed a desire to be less involved with waiver decisions on a regular basis and to shift more of these decisions to the Army. At around the same time, the Army appears to have adopted a more lenient approach to granting waivers, in part, because of the number and types of cases that had been approved by the President in the past. Although the Secretary of the Army and the President do not have explicit legal authority to grant exceptions to the eligibility requirements now in effect for burial at Arlington, there is a legal basis for the Army’s long-standing assertion of that authority. In 1973, the Congress, in the National Cemeteries Act (P.L. 93-43), expressly preserved the existing functions, powers, and duties of the Secretary of the Army with respect to Arlington while, at the same time, repealing the prior law that specified who was eligible for burial at national cemeteries. This left no explicit legal restrictions on the Secretary’s authority over burials at Arlington; the Secretary could decide on criteria for admission as well as on waivers. The committees, in reporting on the bill, said that a provision giving VA explicit authority to grant waivers for the national cemeteries under its jurisdiction would be analogous to “similar authority” already residing with the Secretary of the Army regarding Arlington. Department of the Army officials have, on several occasions since 1967, examined the issue of the Secretary’s and the President’s legal authority for granting waivers and have acknowledged that no explicit authority exists. 
In 1976, the Army General Counsel stated that “it would be desirable to specifically recognize this authority” in legislation pertaining to Arlington. In 1983 and 1984, the Army General Counsel recommended that legislation be proposed to give the Secretary (and, by extension, the President) such authority. The General Counsel advised the Secretary that “Public recognition of your explicit authority to approve exceptions to burial eligibility policy represents sound administrative practice.” Concern was expressed about this provision, however, including mention of “possible problems of drawing the general public’s attention to exception authority.” Because of these concerns, the Secretary decided not to pursue a change in official Army policy, according to a memorandum from the military assistant in the Office of the Assistant Secretary of the Army. Army officials told us that, in February 1997, they submitted a legislative proposal that would have explicitly defined both the Secretary’s authority to grant waivers and some broad categories of individuals who could be considered for waivers. However, these officials explained that this was done as a technical drafting service and that they did not necessarily support such legislation. According to these officials, no action was taken by the Congress on this legislation. Most waiver requests have been handled through an internal Army review process involving officials responsible for the administration of Arlington. But this process has not been established through formal rule-making, and access to and knowledge of the process may vary widely among those inquiring about burial at Arlington. In addition, the Army waiver review process is not followed in all cases, particularly in those cases in which the President makes a waiver decision. Under the normal process, the case file, including all recommendations and records of concurrence or nonconcurrence, is sent to the Secretary of the Army, who makes the final decision to approve or deny the exception request.
All of these actions typically occur within 48 hours in order to respond quickly to surviving family members. According to officials involved in the process, this expedited schedule imposes certain limitations on the extent of information obtained and the ability to verify this information. For example, in cases in which an exception is requested to allow the burial of one family member with another, the superintendent indicated to us that he asks for information about family relationships but does not always verify the information he receives. Similarly, he does not always obtain the consent of other family members who may have a claim to burial in that same grave. In contrast with decisions issued by the Secretary of the Army, presidential decisions appear to involve little, if any, consultation with Department of the Army officials. In addition, the reasons for presidential waiver decisions are generally not explained. For most presidential waivers, the Army is simply informed of the President’s decision to grant a waiver. For example, in one case, the President authorized a waiver for a prominent public figure who was still alive. Army officials said they were not consulted on this matter. Army documents indicate that the Assistant Secretary did not favor such a waiver because the Army’s policy was not to approve waivers before the death of an individual and that doing so in this case would set a precedent for future waiver decisions. To the extent that decisions are made outside of the normal process, perceptions of inequitable and arbitrary treatment, such as those suggested in the media, may result. Although a waiver process exists, it has not been formally established in regulatory policy. Individuals inquiring about burial at Arlington are not necessarily provided the same information—or any information at all—regarding the possibility of obtaining a waiver.
The superintendent or his or her staff make a case-by-case judgment about the type of information to provide to those making inquiries about burial eligibility and the possibility of a waiver. Some individuals who inquire about burial at Arlington on behalf of another and are told that the person on whose behalf they are making the request is not eligible for burial at Arlington may not know that a waiver can be pursued. But others, who are aware of this possibility, may choose to pursue it. According to the superintendent, upon making an initial request for a burial waiver and being informed that such a request cannot be granted, some requesters abandon their attempt to obtain a waiver. But others persist in their efforts and may contact a high-level government official, such as a congressional or administration official, in order to pursue their request. Some Army officials believe that these factors can make a difference in the outcome of waiver requests and whether such requests are even made. In 1984, the Army General Counsel told the Secretary of the Army that “requests for exceptions mostly come from those people possessing information . . . not available to the general public.” The General Counsel added that “initial requests for exceptions made to Arlington . . . are not treated uniformly” and that “the prior knowledge and persistence of the individual often determines what information is provided.” According to the General Counsel, there is “a basic question of fairness raised by the operation of this type of ‘secret’ agency practice.” When a high-level government official (outside the Department of the Army) either makes the waiver request or expresses support for the request, the waiver process can be vulnerable to influence.
For example, in a case in which the Secretary of the Army approved a waiver despite the superintendent’s recommendation to deny it, Army officials recommended that the waiver request be approved because of congressional interest and to avoid possible White House action. The Secretary of the Army told us, however, that his decision was not influenced by these factors. Although there may be legitimate reasons for the involvement of high-level officials such as the Secretary of Defense, the selective involvement of such officials in such a sensitive process could result in inconsistencies and perceptions of unfairness in waiver decisions. Although these cases indicate that involvement of high-level officials may, in some cases, influence the waiver process, our review also identified many cases in which such involvement did not result in a waiver approval. In addition, we found no evidence in the records we reviewed to support recent media reports that political contributions have played a role in waiver decisions. Where the records show some involvement or interest in a particular case on the part of the President, executive branch officials, or Members of the Congress or their staffs, the documents indicate only such factors as a desire to help a constituent or a conviction that the merits of the person being considered warranted a waiver. In December 1997, the Department of the Army, in response to recent criticism, imposed new requirements for providing information to those who inquire about burial at Arlington in an effort to ensure consistent treatment of all individuals. The Army also required that the names of those who are granted waivers be published and that such information be communicated to the proper congressional committees. No written criteria exist for determining when a waiver should be granted or denied. As a result, waiver requests that appear to be based on similar circumstances sometimes result in different outcomes.
The officials we spoke with said that these decisions involve the exercise of much discretion and individual judgment. In other words, waivers, by their very nature, involve unique circumstances for which specific criteria cannot be developed to cover all cases, according to these officials. Officials may cite the presence of one or more generally understood factors as a reason to approve or deny a waiver request. But it is sometimes unclear how officials weigh each factor and make a final decision on the basis of the combination of these factors. As a result, the reasons cited for a waiver approval in some cases may be similar to circumstances present in other cases that resulted in a waiver denial. The problem of unclear waiver criteria is demonstrated by the seemingly contradictory decisions and recommendations made by Army officials on the same cases. Since 1993, there have been 12 cases in which the Secretary or Acting Secretary of the Army has approved a waiver request despite the superintendent’s or Assistant Secretary of the Army’s recommendation that he disapprove the request. In three of these cases, the Secretary reversed his own initial waiver decision, deciding to approve waiver requests that he had originally denied. Our review of the records for waiver cases decided during the tenure of the current superintendent showed that although the bases for waiver decisions were frequently cited by the superintendent and the Assistant Secretary of the Army, this was not always the case for decisions made by the Secretary of the Army and was rarely the case for presidential waiver decisions. In addition, the rationale for waiver decisions made in the years before the current superintendent’s tenure, whether by the Secretary or the President, was often undocumented. Given the recent controversy concerning waiver decisions, the maintenance of clear and complete records of waiver decisions by both the Army and the White House may help to reduce questions about waiver decisions.
Some Army officials explained that waiver decisions are inherently discretionary and, as such, will involve differences in opinion among officials. These officials do not believe that such differences necessarily indicate unfair or arbitrary treatment. Rather, they emphasize that they take these decisions very seriously and recognize their role in preserving the integrity of Arlington. Officials we spoke with did not believe that it would be helpful or even feasible to develop and formalize a specific list of criteria for making waiver decisions because this would be contrary to the very nature of the Secretary’s discretionary authority. The strong demand for burial at Arlington, in combination with the constraints of limited space, has caused the Army to impose strict eligibility requirements for burial at Arlington. These requirements have, in turn, resulted in the exclusion from Arlington of many individuals who served honorably in the military. Although the need to carefully scrutinize Arlington burial waiver decisions and ensure that such waivers are rare has been consistently acknowledged, the number of waivers allowed has grown steadily since they were first granted in 1967. In light of the diminishing capacity of the cemetery and the public attention to waivers, waiver decisions are likely to continue to be the focus of concern and criticism on the part of veterans’ groups and the American public. To the extent that the authority, process, and criteria for granting waivers are unclear, inconsistent, or unknown to the public, this criticism will likely continue. While there is a legal basis for the Secretary of the Army and the President to make waiver decisions and to adopt procedures for doing so, this authority is not explicit. This lack of explicit authority has been cited in the past by various Army officials as something that could raise questions about waiver decisions made by the Secretary.
Although Army officials have, in the past, proposed that legislation or regulations be enacted to make this authority explicit, they currently do not support such legislation or regulations. Another area of uncertainty relates to the process used to review waiver cases and make waiver decisions. The process has not been clearly and consistently communicated to all individuals who have inquired about eligibility for burial in Arlington and has not been made generally public. As a result, the ability to get access to the process can vary on the basis of the persistence and knowledge of the individual requester. In addition, the process differs according to whether the President or the Secretary of the Army is making the waiver decision and is vulnerable to influence or intervention from officials outside the normal process. Recent actions by the Secretary of the Army to improve the consistency with which the waiver process is applied will likely help in diminishing the suspicions and concerns regarding the fairness of the process. No action has been taken by the Army, however, to adopt regulations governing the waiver process or to improve the Army’s communication surrounding and involvement in presidential waiver decisions, although the Army may be constrained in its ability to influence this aspect. The absence of clear, written criteria to evaluate waiver requests has also served as a basis for perceptions of inequity and inconsistency in waiver decisions. Waiver decisions made by the Secretary of the Army appear in some cases to be inconsistent with criteria applied in other cases. This is particularly true in cases in which the Secretary’s decision does not follow the recommendations of other Army officials. Moreover, presidential decisions are typically made without explicit reference to criteria. 
Given the current controversy over waiver decisions, several options are available for addressing these problems, including the following: Revising the eligibility requirements for burial in Arlington to include certain categories of people who generally are approved for waivers, such as remarried spouses or other family members who request to be buried in the same grave as someone who is already buried in Arlington. Under such a change, these categories of individuals, which constituted about 63 percent of the waiver approvals we examined, would be automatically eligible and would not therefore go through the waiver process. Eliminating the Secretary’s and the President’s authority to grant waivers. This could, however, prevent the burial at Arlington of someone who is generally recognized as deserving of that honor but does not meet the cemetery’s strict burial standards. Preserving some discretion to grant waivers, but providing guidance in legislation for the officials who exercise the waiver authority. While we agree with Army officials that it is not possible to establish criteria to cover all circumstances, some general guidance would serve to ensure that the exercise of discretion by the Army is not unlimited. Expanding the acreage of Arlington to accommodate more grave sites, thereby easing concerns over limited space. The feasibility of this option would need to be examined in terms of the land available near Arlington for annexation and the cost of acquiring such land. These options could be adopted individually or in various combinations. Each has its own advantages and disadvantages and must be carefully considered in light of the basic purpose of Arlington. Regardless of which option is considered, we believe it is important that the use of waiver authority be sound and that the waiver process be publicly visible. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or other Members may have. 
Individuals eligible for burial at Arlington include the following: Any active duty member of the armed forces, except those members serving on active duty for training only. Any retired member of the armed forces, who has service on active duty (other than for training), is on a retired list, and is entitled to receive retirement pay. If, at the time of death, a retired member is not entitled to receive retirement pay, he or she will not be eligible for burial. Any former member of the armed forces separated for physical disability before October 1, 1949, who has served on active duty and who would have been eligible for retirement under 10 U.S.C. 1202 had the statute been in effect on the date of separation. Any honorably discharged member of the armed forces who has been awarded a Medal of Honor, Distinguished Service Cross, Distinguished Service Medal, Silver Star, or Purple Heart. People who have held the following positions, provided they were honorably discharged from the armed forces: an elective office of the U.S. government; Chief Justice of the United States or Associate Justice of the Supreme Court of the United States; an office listed in 5 U.S.C. 5312 or 5 U.S.C. 5313 (level I and II executives); and chief of a mission if he or she was at any time during his or her tenure classified in class I under the provisions of 60 Stat. 1002, as amended (22 U.S.C. 866, 1964 ed.). Any former prisoner of war who served honorably, whose military service terminated honorably, and who died on or after November 30, 1993. The spouse, widow, or widower; minor child; and, at the discretion of the Secretary of the Army, unmarried adult child of any of the people listed above. A surviving spouse who has remarried and whose remarriage is void, terminated by death, or dissolved by annulment or divorce by a court regains eligibility for burial in Arlington. 
An unmarried adult child may be interred in the same grave in which the parent has been or will be interred, provided that child was incapable of self-support up to the time of death because of physical or mental condition. Widows or widowers of service members who were reinterred in Arlington as part of a group burial may be interred in the same cemetery but not in the same grave. The surviving spouse; minor child; and, at the discretion of the Secretary of the Army, unmarried adult child of any person already buried at Arlington. The parents of a minor child or unmarried adult child whose remains are already buried at Arlington on the basis of the eligibility of a parent. Individuals eligible for burial at VA’s national cemeteries include the following: Any person who served on active duty in the armed forces of the United States (Army, Navy, Air Force, Marine Corps, or Coast Guard) who was discharged or released therefrom under conditions other than dishonorable. Any member of the armed forces of the United States who died while on active duty. Any member of the reserve components of the armed forces, the Army National Guard, or the Air National Guard whose death occurs under honorable conditions while hospitalized or undergoing treatment, at the expense of the United States, for injury or disease contracted or incurred under honorable conditions while performing active duty for training, inactive duty training, or undergoing that hospitalization or treatment at the expense of the United States.
Any member of the Reserve Officers’ Training Corps of the Army, Navy, or Air Force whose death occurs under honorable conditions while attending an authorized training camp or on an authorized practice cruise; performing authorized travel to or from that camp or cruise; or hospitalized or undergoing treatment, at the expense of the United States, for injury or disease contracted or incurred under honorable conditions while attending that camp or on that cruise, performing that travel, or undergoing that hospitalization or treatment at the expense of the United States. Any citizen of the United States who, during any war in which the United States is or has been engaged, served in the armed forces of any government allied with the United States during that war; whose last such service terminated honorably; and who was a citizen of the United States at the time of entry on such service and at the time of death. The spouse of any person listed above or any interred veteran’s unremarried surviving spouse. A veteran’s minor child (under 21 years of age or under 23 years of age if pursuing a course of instruction at an approved educational institution), or unmarried adult child who was physically or mentally disabled and incapable of self-support, in the same grave with the veteran or in an adjoining grave site if that grave was already reserved. Such other people or classes of people as may be designated by the Secretary of VA. The following tables provide data on waiver decisions made under the various administrations since 1967.
Pursuant to a congressional request, GAO discussed the waiver process for burials in Arlington National Cemetery, focusing on: (1) the trends in waiver decisions; (2) whether legal authority exists to grant waivers; (3) the process used in making waiver decisions; and (4) the criteria applied in the decisionmaking. GAO noted that: (1) since 1967, 196 waivers have been granted for burial at Arlington cemetery, and at least 144 documented requests have been denied; (2) of the granted waivers, about 63 percent involved burial of individuals in the same grave site as someone already interred, or expected to be interred; (3) although the Secretary of the Army has no explicit statutory or regulatory authority to grant waivers, it is legal for the Secretary to do so, in part, because of the general legal authority of the Secretary for administering Arlington; (4) GAO found that most waiver requests have been handled through an internal Army review process involving officials responsible for the administration of Arlington; (5) however, this process is not followed in all cases; (6) for example, in the case of presidential waiver decisions, the Army process is generally bypassed; (7) in addition, this process is not widely known or understood, which in some cases has appeared to provide advantages to those who were persistent enough to pursue a waiver request or who were able to obtain the assistance of high-level government officials; and (8) while those responsible for making waiver decisions appear to apply some generally understood criteria, these criteria, which are not formally established, are not always consistently applied or clearly documented.
EPA administers and oversees grants primarily through the Office of Grants and Debarment in the Office of Administration and Resources Management, 10 program offices in headquarters, and program offices and grants management offices in EPA’s 10 regional offices. Figure 1 shows the key EPA offices involved in grants activities for headquarters and regions, and figure 2 shows the states covered by the 10 regional offices. The Office of Grants and Debarment develops national grant policy and guidance. This office also carries out certain types of administrative and financial functions for the grants approved by program offices, such as awarding headquarters grants and overseeing the financial management of program office and regional grants. On the programmatic side, national program managers are responsible for establishing and implementing national policies for their grant programs, for setting funding priorities, and for identifying specific environmental results from grant programs. They are also responsible for technical and programmatic oversight of headquarters grants. Regional grants management offices provide administrative management for regional grants, while regional program offices provide technical and programmatic oversight. Both headquarters and regional program offices conduct grant competitions. EPA has designated officials—referred to as senior resource officials—who are typically deputy assistant administrators in program offices and assistant regional administrators. These senior resource officials are in charge of strengthening agencywide fiscal resource management while also ensuring compliance with laws and regulations and are responsible for effective grants management within their units. As of September 30, 2005, 119 grant specialists in the Office of Grants and Debarment and the regional grants management offices were largely responsible for administrative and financial grant functions.
Furthermore, 2,064 project officers were actively managing grants in headquarters and regional program offices. These project officers are responsible for the technical and programmatic management of grants. Unlike grant specialists, however, project officers also have nongrant responsibilities, such as using the scientific and technical expertise for which they were hired. In fiscal year 2005, EPA took 6,728 grant actions involving funds totaling about $4 billion. These awards were made to six main categories of recipients, as shown in figure 3. EPA offers three types of grants—discretionary, nondiscretionary, and continuing environmental grants: Discretionary grants fund a variety of activities, such as environmental research and training. EPA has the discretion to independently determine the recipients and funding levels for these grants. EPA has awarded these grants primarily to state and local governments, nonprofit organizations, universities, and Native American tribes. In fiscal year 2005, EPA awarded about $644 million in discretionary grants. Nondiscretionary grants are awarded primarily to state and local governments and support water infrastructure projects, such as the drinking water and clean water state revolving fund programs. For these grants, Congress directs awards to one or more classes of prospective recipients who meet specific eligibility criteria, or the grants are often awarded on the basis of formulas prescribed by law or agency regulation. In fiscal year 2005, EPA awarded about $2.4 billion in nondiscretionary grants. Continuing environmental program grants contain both nondiscretionary (formula) and discretionary features. 
These grants are nondiscretionary in the sense that (1) they are awarded noncompetitively to the same government units to support ongoing state, tribal, and local programs that do not change substantially over time, and (2) allotments of funds are initially made on the basis of factors contained in statute, regulation, or agency guidance. These grants are also discretionary in the sense that allotments are not entitlements, and EPA exercises judgment in determining what the final award amount should be. In fiscal year 2005, EPA awarded about $1 billion in grants for continuing environmental programs. In this report, we focused on two EPA programs under the Clean Water Act: Wetland Program Development Grants (Wetland grants). Wetlands are areas where water covers the soil, or is present either at or near the surface of the soil throughout the year or for various portions of the year, including during the growing season. Wetlands, such as bogs, swamps, and marshes, support a number of valuable functions—controlling floods, improving water quality, and providing wildlife habitat, among other things. The wetland grants provide applicants with an opportunity to carry out projects to develop and refine comprehensive wetland programs. The authority for the program is under section 104(b)(3) of the Clean Water Act. Grant funding must be used to improve the wetlands program by conducting or promoting the acceleration of research and studies relating to the causes, effects, and other aspects of water pollution. Wetland grants provide states, tribes, local governments, interstate agencies, intertribal consortia, nonprofits, and nongovernmental organizations an opportunity to carry out wetland projects and programs. Wetland grants are discretionary grants. Nonpoint Source Management Program (Nonpoint source grants).
Nonpoint source pollution is pollution that does not have a well-defined source but instead originates from a number of sources, such as acid mine drainage, agricultural runoff, and roads and highways. Under section 319(h) of the Clean Water Act, EPA makes grants to states, territories, and Indian tribes to support a wide variety of activities, including technical and financial assistance, education, training, technology transfer, demonstration projects, and monitoring. Nonpoint source grants are continuing environmental program grants. Grants from these two programs can be incorporated into the National Environmental Performance Partnership System, which was established in response to state needs for greater flexibility in using and managing their continuing grant funds. Under this system, states may enter into Performance Partnership Agreements with EPA and into Performance Partnership Grants. The agreements set out jointly developed priorities and protection strategies, including innovative solutions for addressing water, air, and waste problems. The partnership grants allow states to combine continuing environmental program grant funds to implement those solutions. States can also enter into Performance Partnership Grants without a Performance Partnership Agreement. Under traditional continuing environmental program grants, states received funds to implement a particular waste, air, water, or other program; such funding can only be spent on activities that fall within the statutory and regulatory parameters of that program. Under Performance Partnership Grants, states can combine up to 21 separate grant programs into one award and move funds from one medium, such as air, to another. Both the wetland and nonpoint source grants are under the auspices of EPA’s Office of Water in headquarters, but grants under these programs are carried out by water program staff in regional offices.
Moreover, in its national guidance, the Office of Water states that it is committed to accomplishing the goals that the Office of Grants and Debarment has identified in its Grants Management Plan. To do this, for example, the Office of Water provided regions with information on revised competition and environmental results policies. EPA has strengthened its award process by, among other things, (1) expanding the use of competition to select the most qualified applicants and (2) issuing new policies and guidance to improve the awarding of grants. However, EPA’s internal reviews of program and regional offices have found weaknesses in documenting the review of grantees’ cost proposals. We also found this weakness in one of the three regions we visited. This documentation weakness may hinder EPA’s ability to ensure the reasonableness of its grantees’ expenditure of federal funds. Because of the continuing problems with documenting cost reviews, EPA is reexamining its cost review policy for grants. To promote widespread competition for grants, in September 2002, EPA issued a policy that for the first time required competition for many discretionary grants. Before 2002, even though EPA had a competition policy, it did not compete grants extensively or provide widespread notification of upcoming grant opportunities. EPA’s 2002 policy was designed to promote competition in awarding grants to the “maximum extent practicable” and to ensure that the competitive process was “fair and open.” This policy represented a major cultural shift for EPA managers and staff, requiring EPA staff to take a more planned, rigorous approach to awarding grants. Specifically, the policy was binding on managers and staff throughout the agency; required EPA staff to determine evaluation criteria for grant solicitations and publish a grant announcement at least 60 days before the application deadline; and created the position of a senior-level competition advocate for grants. 
The advocate oversees the policy’s implementation and compliance and evaluates its effectiveness. According to EPA’s Inspector General, the 2002 policy was a “positive step” toward promoting competition, and competitions under the policy were generally fair and open. Specifically, for the 38 grants that the Inspector General reviewed, EPA had (1) published an announcement soliciting proposals, (2) written procedures to ensure an objective and unbiased process for reviewing and evaluating applications, and (3) selected recipients according to reviewers’ recommendations. In 2004, EPA’s grants competition advocate reviewed the policy, as required, and reported, among other things, that steps should be taken to improve justifications for not competing certain grants by (1) increasing review and approval requirements for exceptions to competition and (2) clarifying the language in the policy to ensure appropriate use of exceptions. The advocate also found that the threshold for requiring competition for grants of $75,000 or more in the 2002 policy was too high. In response, EPA issued a revised competition policy, effective January 2005. It enhanced competition by, among other things, increasing review and approval requirements for justifying exceptions and clarifying the language to ensure appropriate use of these justifications; reducing the threshold for competition from $75,000 to $15,000; and strengthening requirements for documenting the competition process and results. In addition, EPA added (1) conflict-of-interest provisions to increase awareness of situations that could arise for applicants, reviewers, and others involved in competition matters; and (2) dispute procedures. 
In commenting on both the 2002 and 2005 policies, the Inspector General stated that the policies did not fully promote competition and recommended that EPA could further expand competition in the 2005 policy by eliminating certain remaining exemptions and exceptions for which the Inspector General believed competition is practicable. EPA responded, however, that further expansion was not practicable for reasons of congressional intent, regulatory limitations, and program effectiveness. EPA’s Grants Management Plan lays out goals, objectives, milestones, and performance measures with targets for promoting competition. For one of these objectives, EPA planned to improve the accuracy and specificity of information available to the public on the agency’s grant opportunities in the federal government’s Catalog of Federal Domestic Assistance—a listing of available grants and other federal funding opportunities (available at www.CFDA.gov). However, as we reported in 2005, EPA was not consistently providing this information. Without such information, potential applicants might not apply, and EPA would not have the broadest applicant pool from which to select grantees. EPA officials were unaware of continuing problems with funding priorities and funding levels in the Catalog of Federal Domestic Assistance until we brought them to their attention during our review. In response to our recommendations, in April 2005, EPA implemented revised guidance for providing complete and accurate information in the Catalog of Federal Domestic Assistance. For example, EPA now strongly encourages its offices to provide information on the funding priorities on an ongoing basis, instead of annually, so that the public has up-to-date information in the Catalog of Federal Domestic Assistance. 
For the competition goal, the agency developed a performance measure for increasing the percentage of new grants subject to the competition policy that are actually competed and set increasing targets for achieving this measure. According to EPA, about $249 million of the approximately $3.1 billion it awarded in new grants in fiscal year 2005 were eligible for competition. EPA exempts certain grant categories from competition, including all nondiscretionary grants, certain discretionary grants, and most continuing environmental programs grants. EPA also established a separate measure for nonprofit grantees. The first performance measure is for all new eligible grants, including new grants to nonprofit recipients, and the second is only for new eligible grants to nonprofit recipients, as table 1 shows. EPA established a separate measure for competing grants to nonprofit organizations because it believes that selecting the most qualified nonprofit applicants through a competitive process could address concerns about the effectiveness of nonprofit grantees in managing grants. As the table shows, EPA reports it now competes a higher percentage of eligible grants, up from 27 percent in fiscal year 2002 to 93 percent in fiscal year 2005, exceeding its targets for fiscal years 2003 through 2005. The 7 percent of new grants that EPA reported it did not compete—which totaled about $10 million of the $249 million eligible for competition in fiscal year 2005—resulted from exceptions to the policy. EPA’s competition policy provides for exceptions that meet criteria specified in the policy, if supported by a written justification and approved by an appropriate official. Even after taking the exceptions into account, EPA exceeded the 85 percent target it set for new grants in 2005. It has also exceeded its target for new grants to nonprofit recipients in 2005. To improve the award of grants, EPA issued additional policies and guidance. 
Specifically:

- In January 2005, EPA issued a policy to improve the description of the grant in its grants database so that the description would be understandable to the public. EPA now presents this information on the Office of Grants and Debarment’s Web site—www.epa.gov/ogd—so that the public has improved access to grant information.

- In March 2005, EPA issued a policy establishing additional internal controls for awarding grants to nonprofit organizations. The policy addresses both the programmatic capability of a nonprofit applicant to carry out a project and its administrative capability to properly manage EPA grant funds—problems EPA and the Inspector General have identified. Under the policy, EPA assesses programmatic capability for both competitive and noncompetitive grants. The policy also requires the agency to conduct different types of administrative capability reviews based on the amount of the grant to the nonprofit organization. For grants of $200,000 or more, applicants must complete a questionnaire and provide documents to show that they have administrative and financial systems to manage grants. For grants below the $200,000 threshold, EPA staff must query the agency’s grants database for any findings of problems in the applicant’s administrative capability. If problems are identified in any of these reviews, the applicant must take corrective actions before receiving the grant. In 2005, EPA approved 75 of the 87 nonprofit organizations it reviewed; the remaining 12 nonprofit organizations are taking steps to address problems identified.

- Also in March 2005, EPA issued a memorandum clarifying the criteria that must be documented to justify the use of a grant or a contract as the award mechanism. EPA issued this guidance in response to a recommendation in our 2004 report to better document the justification for using grants rather than contracts.

- In April 2005, EPA issued a policy memorandum and interim guidance establishing a certification process that applies to certain discretionary grant programs (currently 58). The new policy and guidance instruct senior EPA officials—assistant administrators and regional administrators—to certify, among other things, that (1) certain grant awards and amendments identify environmental outcomes that further the goals and objectives in the agency’s strategic plan and (2) there is no questionable pattern of repeat awards to the same grantee. For competitive announcements, these officials must certify that the (1) expected outcomes from the awards under the proposed competitive announcement are appropriate and in support of program goals and (2) proposed competitive announcement is written in a manner to promote competition to the maximum extent practicable. The Office of Grants and Debarment has assigned a grant specialist to conduct random spot checks of these certifications and provide assistance to program offices in implementing this new policy.

While EPA has improved its award process, both EPA and we found weaknesses in the agency’s documentation of its cost reviews before awarding grants. EPA policy requires both the grants management and program offices to conduct a cost review for every grant before awarding it to ensure that the grantee’s proposed costs are necessary, reasonable, allowable, allocable, and adequately supported. These reviews are central to ensuring that EPA carries out its fiduciary responsibilities. However, in 2004 and 2005, in six of the seven program and regional offices it reviewed, the Office of Grants and Debarment either found no documentation of cost reviews or found that documentation was not sufficient. As a result of these continuing documentation problems, EPA is reexamining its cost review policy for grants. We also found problems with cost review documentation in one of the three regions we visited—Region 5.
This region has a checklist to ensure that staff members who are responsible for each aspect of the cost review have completed and documented their reviews before a grant is awarded. The checklist requires approval from both the grant specialist and the project officer on certain items and requires supervisors to review the checklist to ensure that any concerns raised by the project officer or grant specialist were addressed. While a project officer and grant specialist could initially disagree on some aspects of the checklist, the regional office expects them to resolve their differences and document the final resolution on the checklist. However, for most of the 12 approved award files we reviewed, we found instances in which the resolution of the issues was not documented. Specifically, in some files the grant specialist and the project officer had both neglected to answer the same two questions on the cost review checklist; in others, the grant specialist and the project officer did not agree on the answers to multiple questions on the checklist and did not document any resolution of their disagreements. According to regional staff, these problems occurred because of workload and errors. Nevertheless, the lack of documentation for awarded grants raises concerns about the appropriateness of the awards. More effective supervisory review might have resulted in a documented resolution of these differences. EPA has improved some aspects of monitoring, but long-standing problems in documentation and grant closeouts continue. EPA has made progress in using in-depth monitoring to identify grantee problems agencywide, but it does not always document whether corrective actions have been taken. Furthermore, for ongoing monitoring, the agency found, as we did in the regional offices, that in some cases agency staff do not consistently document their monitoring of grantees, which hinders accountability for effective grants management.
Finally, we found that grant closeouts were often delayed and sometimes improperly carried out, which diminishes EPA’s ability to ensure that grantees met the terms and conditions of their awards and that grant funds were spent appropriately. EPA has formed a work group to review its monitoring and closeout policies and plans to revise these policies in 2006. EPA has made progress in conducting in-depth monitoring since it issued a new monitoring policy in December 2002, which it revised in 2004 and 2005. Under its monitoring policy, grants management offices and program offices in headquarters and the regions conduct in-depth monitoring either (1) at the grantee’s location (on-site) or (2) at an EPA office or at another location (off-site)—referred to as desk reviews. EPA’s policy for these reviews requires the following, among other things:

- Grants management offices must conduct in-depth administrative reviews, on a minimum of 10 percent of grantees annually, to evaluate the grantee’s administrative and financial capacity. For on-site administrative reviews, EPA conducts “transaction testing”—that is, reviewing a grantee’s accounting ledgers and underlying documentation for unallowable costs, such as lobbying and entertainment expenditures.

- Program offices must conduct programmatic reviews on a minimum of 10 percent of grantees annually to assess the grantees’ activities in key areas, such as the progress the grantees are making in conducting the work and in meeting the grant’s terms and conditions.

In 2003, we reported that although the in-depth review is a useful tool for monitoring a grantee’s progress, the agency lacked a way to systematically identify grantee problems agencywide because the information from its in-depth monitoring was gathered in a form that could not be readily analyzed. We also found that the policy did not incorporate a statistical approach to selecting grantees for review.
Without a statistical approach, EPA could not evaluate whether 10 percent was appropriate, nor could it project the results of the reviews to all EPA grantees. We recommended that EPA take action to address these issues. EPA has since incorporated the data from its in-depth monitoring into a database, analyzed the information to identify key problems, and taken corrective actions to address systemic problems. By taking these actions, EPA has found, among other things, that grantees have not had documented policies and procedures for managing grants. Without these policies and procedures, grantees may not be able to operate their financial and administrative systems appropriately. As a result of this finding, EPA is conducting the preaward reviews discussed earlier to ensure that nonprofit grantees have required financial and administrative systems in place. EPA has also increased training for grantees. Since issuing its most recent revision to the monitoring policy in 2005, EPA has initiated several practices that should further strengthen in-depth monitoring. In 2006, it began incorporating a statistical approach for selecting grantees for administrative in-depth reviews. In 2007, EPA plans to use a statistical approach to select grants for programmatic in-depth reviews. When the statistical approach is fully implemented, it should significantly reduce the percentage of grantees reviewed, according to an agency official. Furthermore, the statistical approach will enable the agency to project results among various types of grantees. EPA also began incorporating transaction testing into administrative desk reviews in 2006 because it found that administrative desk reviews were not otherwise yielding adequate financial information about grantees. While EPA has improved its in-depth monitoring, the Office of Grants and Debarment has found that staff do not always take corrective actions, or document actions taken, to address findings identified during this monitoring.
The office found that corrective actions were documented for only 55 percent of the 269 problems identified through administrative and programmatic reviews. We reported similar results in August 2003. According to an Office of Grants and Debarment official, while some EPA staff took corrective actions, they did not document those actions in EPA’s grantee computer database. Until this problem is addressed, the Office of Grants and Debarment will not be able to fully assess the extent to which corrective actions have or have not been taken to address identified grantee problems. Without these assessments, EPA cannot be assured that grantees are in full compliance with the terms and conditions of their grants. Ongoing monitoring is critical because, in contrast to in-depth monitoring, it is conducted on every grant at least once a year throughout the life of the grant, and the results are used to determine whether the grantee is on track to meeting the terms and conditions of its grant agreement. EPA’s grant specialist and project officer manuals—used as training tools for EPA staff involved in grants—emphasize that staff should properly document grant monitoring activities to maintain an official agency record. Agency officials state that proper documentation of monitoring is necessary to ensure that third parties—such as other EPA staff who assume responsibility for the grant or a supervisor—can fully understand and review the actions that have occurred during the project period. Moreover, a lack of documentation raises questions about the adequacy of project officers’ and grant specialists’ ongoing monitoring of grantee performance. Despite the importance of documenting ongoing monitoring, the absence of documentation in grant files has been a long-standing problem that we reported on in 2003. 
To conduct ongoing monitoring, EPA policy requires the following:

- Grant specialists should ensure that administrative terms and conditions of the grants are met and review the financial status of the project. The grant specialist is to speak with the project officer and the grantee at least annually during the life of the grant.

- Project officers should ensure that programmatic award terms and conditions are being met, including ensuring that they have received progress reports from the grantee. Project officers are also to speak with the grant specialists and the grantee at least annually during the life of the grant.

According to the monitoring policy, the grant specialists and project officers must document the results of their ongoing monitoring in their grant files. Despite this policy, the lack of documented ongoing monitoring remains a problem. EPA’s recent internal reviews in program and regional offices demonstrate—as did our review in three regional offices—that EPA grant specialists and project officers still do not consistently document ongoing monitoring. In 2004 and 2005, the Office of Grants and Debarment found limited or incomplete documentation of ongoing monitoring in internal reviews it conducted in seven program and regional offices. In addition, self-assessments completed by 11 program and regional offices during this period identified the same lack of documentation. Our analysis of these reviews indicates that several offices experienced recurring problems in 2004 and 2005. For example, an August 2004 Office of Grants and Debarment internal review cited one regional office as having “very limited” documentation of ongoing monitoring; and in the following year, the regional office’s self-assessment found the same documentation problem with project officer files. Because of these documentation problems, two of the three regional offices we visited have committed to using checklists to document their ongoing monitoring.
Regions 1 and 9 had implemented such checklists at the time of our review. As table 2 shows, however, of the 40 project officer and grant specialist files we reviewed in Regions 1 and 9, more than half of the checklists were either missing, blank, or incomplete. The water program office in Region 5 also developed a checklist for documenting ongoing monitoring but had not yet implemented it at the time of our review. Consequently, in Region 5, we examined other documentation of ongoing monitoring in the grant files and found similar omissions. None of the six files requiring annual contact with the grantee—three grant specialist files and three project officer files—had documentation showing that this contact had occurred. In the three regions, we also found that project officers’ files did not always contain grantees’ progress reports, which can be required quarterly, semiannually, or annually, as defined by an individual grant’s terms and conditions. Thirteen of the 32 project officer grant files we reviewed in these regions were missing one or more progress reports required by the grant’s terms and conditions. According to EPA’s project officer manual, progress reports are the project officer’s primary mechanism for determining if the grantee is fulfilling its grant agreement obligations. In general, progress reports should compare grantee progress with the stated grant objectives, identify problems with meeting those objectives, and state the reasons for those problems. While the submission of progress reports is clearly the grantee’s responsibility, it is also the project officer’s responsibility to work with the grantees to ensure that they provide their progress reports in accordance with the terms of the grant. When EPA staff do not obtain progress reports, they cannot monitor effectively, which may hinder accountability.
In the three regions we visited, the lack of documentation for ongoing monitoring occurs because of weaknesses at the staff, supervisory, and management level. First, grant specialists and project officers do not consistently document key monitoring efforts. For example, several staff stated that they had not printed out their e-mail correspondence with grantees or recorded those contacts in the official grant files. Other staff cited their workload as a reason for not documenting monitoring. Lack of documentation also occurs because grant specialists and project officers rely on other staff with technical expertise, known as “technical contacts,” to assist with ongoing monitoring, and these technical contacts may not provide the documented results of their monitoring for inclusion in the grant file. We found this situation had occurred in two of the three regions we visited. For example, one administrative project officer—a project officer who maintains files but is not necessarily knowledgeable about the technical aspects of the project—had asked for key monitoring documentation from a technical contact, who did not provide it. The technical contact had the monitoring documents in his work area and said he would routinely provide them to the project officer in the future. Second, the lack of ongoing monitoring documentation may occur, in part, because supervisors do not always effectively review grant files for compliance with grant policies. According to staff we interviewed in the three regions, to their knowledge, their supervisors had not reviewed their files to assess compliance with the agency’s monitoring policies, which could contribute to the lack of documentation. A regional project officer told us that he would have completed the ongoing monitoring checklist if his regional program supervisor had made it a priority. In another region, officials told us that some supervisors do review some files, but they do not have enough time to review every file. 
In contrast, supervisory review can contribute to complete documentation of ongoing monitoring. For example, Region 5 was cited as having “excellent” documentation for ongoing monitoring in an Office of Grants and Debarment 2003 internal review. According to the EPA supervisor in Region 5 at the time of the 2003 review, she had notified staff that she would review their grant files to assess compliance with EPA policy for ongoing monitoring, among other things. She believes that her review contributed to the region’s excellent rating. Third, senior EPA managers in the regions do not always ensure that their commitments to improve monitoring documentation are being met. For example, the post-award monitoring plans that two of the EPA regions we visited submitted to the Office of Grants and Debarment stated that the regions would place a checklist in the grant specialist and project officer files documenting ongoing monitoring activities. Although the two regions developed the checklists, more than half of the checklists we reviewed were missing, blank, or incomplete. This occurred, in part, because senior managers did not ensure that the commitments made in their post-award monitoring plans were met. Despite the importance of ongoing monitoring, EPA has not created a performance measure for documenting ongoing monitoring that would underscore its importance to managers and staff. Furthermore, EPA’s Integrated Grants Management System has a field for recording information about ongoing monitoring that could enable the agency to systematically identify whether this monitoring is documented agencywide, but recording this information is optional. Establishing a performance measure and/or requiring the entry of information could enhance accountability for implementing the monitoring policy. As part of its grant reforms, EPA incorporated grant closeout into its monitoring policy and its Grants Management Plan.
During closeout, EPA ensures that the grant recipient has met all financial requirements and provided final technical reports, and that any unexpended balances are “deobligated” and returned to the agency. Delays in closing out the grant can unnecessarily tie up obligated but unexpended funds that could be used for other purposes. Furthermore, according to EPA’s closeout policy, closeout becomes more difficult with the passage of time because persons responsible for managing various aspects of the project may resign, retire, or transfer, and memories of events become less clear. The monitoring policy states that the agency is committed to closing out grants within 180 days after the end of the grant’s project period. Under its monitoring policy, EPA provides 180 days for closeout because (1) grantees—by regulation and policy—have up to 90 days after the grant project period to provide all financial and technical reports and (2) by policy, agency staff—grant specialists and project officers—have 90 days to review grantee information and certify that financial and technical requirements have been met. Following certification, the grant specialist closes out the grant with a letter notifying the grantee that the grant has been closed. EPA’s Grants Management Plan identifies measures, with targets, for assessing EPA’s closeout performance. In reviewing EPA’s management of grant closeouts, we found that EPA (1) has effectively reduced its historic backlog of grants due for closeout; (2) does not always close out grants in a timely way—within 180 days after the project period ends, as required by agency policy; and (3) based on the regional files we reviewed, does not always close out grants properly. In the past, EPA had a substantial backlog of grants that it had not closed out. EPA reported that by 1995, the agency had amassed a backlog of over 18,000 completed grants from the previous 2 decades that had not been closed out.
In fact, EPA had identified closeout, among other things, as a material weakness—an accounting and internal control weakness that the EPA Administrator must report to the President and Congress. As we reported in 2003, however, EPA improved its closeout of backlogged grants, eliminating the backlog as a material weakness. Specifically, for fiscal year 2005, using its historic closeout performance measure, EPA reported that it had closed 97.8 percent of the 23,162 grants with project end dates between the beginning of fiscal year 1999 and the end of fiscal year 2003, coming close to its 99-percent target for closing out this backlog. EPA developed a second closeout performance measure—which we call the current closeout performance measure—to calculate the percentage of grants with project end dates in the prior fiscal year that were closed out by the end of the current fiscal year (September 30). For example, as table 3 shows, EPA closed out 79 percent of the grants with project end dates in fiscal year 2004 by the end of reporting fiscal year 2005 (September 30, 2005) but did not meet its performance target of 90 percent. EPA’s current closeout performance measure does not calculate whether EPA closed the grant within 180 days. Rather, this measure reports only whether EPA closed the grant by the end of the following fiscal year (the fiscal year in which it reports on closeouts—the reporting year). In fact, the measure can allow a much more generous closeout time, from 183 days to as much as 547 days (18 months) beyond the 180-day standard, because EPA does not report the performance measure until September 30, the end of the current fiscal year, as the hypothetical examples in table 4 show. EPA’s current performance measure for closing out grants is a valuable tool for determining if grants were ultimately closed out.
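The width of this reporting window can be checked with simple date arithmetic. The sketch below is our illustration, not EPA's calculation: the example project end dates (the first and last days of fiscal year 2004) are assumptions, and the exact day counts it produces differ by a few days from the rounded 183-day and 547-day figures, which come from the hypothetical examples in table 4.

```python
from datetime import date, timedelta

def days_beyond_deadline(project_end, reporting_date, standard_days=180):
    """Days between a grant's closeout deadline (project end date plus
    180 days) and the date the performance measure is reported."""
    deadline = project_end + timedelta(days=standard_days)
    return (reporting_date - deadline).days

# Grants with project end dates in fiscal year 2004 are reported on at the
# end of fiscal year 2005 (September 30, 2005).
report_date = date(2005, 9, 30)

# A grant ending on the last day of fiscal year 2004: roughly 6 months
# of slack beyond the 180-day standard.
print(days_beyond_deadline(date(2004, 9, 30), report_date))  # prints 185

# A grant ending on the first day of fiscal year 2004: roughly 18 months
# of slack beyond the 180-day standard.
print(days_beyond_deadline(date(2003, 10, 1), report_date))  # prints 550
```

Because any grant closed after its deadline but before the September 30 reporting date still counts as closed under the measure, the measure alone cannot reveal how late those closeouts were.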
However, we believe that this performance measure—taken alone—is not a sufficient way to measure closeout because it does not reflect the 180-day standard specified in EPA policy. To determine the percentage of grants that were closed within 180 days, we examined EPA’s analysis of closeout time frames for regional offices, headquarters offices, and agencywide. As table 5 shows, EPA is having significant difficulty in meeting the 180-day standard: agencywide, only 37 percent of grants with project end dates in fiscal year 2004 were closed out within 180 days, 25 percent were significantly late—at least 3 months beyond the 180-day standard—and 19 percent were not closed at all. Table 6 shows that EPA’s current performance measure masks this difficulty in closing out grants within 180 days. In guidance on preparing the annual post-award monitoring plans, the Office of Grants and Debarment has indicated that agency offices should use the agency’s current closeout performance measure—90 percent of the grants with project end dates in the prior fiscal year—as the closeout goal. In effect, as a regional grants management office manager stated, the performance measure, not the 180-day standard, is the target EPA is working toward for closing out grants. At the regional level, our analysis of closeout data for the wetland and nonpoint source grant programs indicates that grants were closed out late because of (1) grantee delays and/or (2) internal delays within the agency. We reviewed 34 closed grants in three regions. First, as table 7 shows, grantees often submit their final financial and technical reports after the 90 days that they are allowed. According to regional staff, different types of grantees may submit their reports late for different reasons. Specifically:

- States do not always provide their final technical and financial closeout reports on time. The states may be late because, for example, they (1) are understaffed; (2) are awaiting the completion of work conducted by sub-grantees or subcontractors—which can be legitimately delayed because of weather conditions that affect the project’s progress; or (3) consider closeout a lower priority than applying for new grants. Furthermore, states do not believe there will be any consequences if they submit final reports late because their grants are for continuing environmental programs.

- Tribes may submit their final reports late because of high turnover among tribal staff and limited organizational capacity, and because tribal councils, which meet intermittently, must approve the reports.

While grantees are responsible for providing final reports within 90 days, EPA staff are responsible for working with the grantees to ensure that the reports are received on time. Under EPA’s 1992 closeout policy, grant specialists must notify grantees 90 days before the project end date that final reports will be due 90 days thereafter. However, we found that regional staff do not always send out these letters. For example, Region 5 adopted the practice of reminding grantees 45 days before the project end date because the grants management office believes that 90 days is too far in advance to be effective. However, several Region 5 grant specialists stated that their workload is also preventing them from sending out the 45-day letter to grantees. According to EPA’s closeout policy, if a grantee is late with the final financial or technical report, the region should send reminder letters that escalate in tone as time progresses.
In Region 1, for example, if the grantee does not submit its materials within 90 days, the grants management office sends a letter asking that the grantee contact its grant specialist to discuss the reasons for the overdue reports; if the region does not receive the reports within 120 days after the project has ended, the grants management office sends a certified letter that more strongly calls for the submission of these required reports. Regional staff and managers told us that when these letters do not result in grantee compliance, they have no realistic option for taking strong action against states that are late—such as withholding money—because these grantees have continuing grants for environmental programs. Second, late closeouts result from a variety of internal agency delays. As table 8 shows, of those files containing the dates on which reports were submitted, regional staff closed out about half within the 90 days provided for in EPA guidance. For 9 of the 30 files that had this report information, it took the project officers or the grant specialists over 180 days to close out the grant after receiving the final reports from the grantees. More generally, regional staff often cited workload as a factor contributing to delays in agency closeout. Delays also occurred because of peak workload periods during the year, such as the fourth quarter of the fiscal year, when regions generally give priority to awarding new grants. Officials in the three regions we visited also told us that they have transferred or will transfer the administrative and financial functions of grants closeout to EPA’s Las Vegas Finance Center, which should reduce the grant specialists’ workload, allowing them to focus on other aspects of grants management. Regional practices also may have contributed to delays in two of the three regions we reviewed. Region 5 had two practices that contributed to delays in closing out grants.
First, the region uses technical contacts to assist with monitoring the wetland and nonpoint source grant programs, including closeouts. The project officer first reviews the grantee’s final reports to ensure they are complete and then asks the technical contact to comment on specific points and certify in writing that technical requirements have been met. The project officer then certifies in writing to the grant specialist that the grantee has met programmatic terms and conditions and that, from that perspective, the grant can be closed out. Regional staff stated that in certain cases the added step of getting signoff by the technical contact resulted in closeout delays because the technical contact did not always review the grantee’s final reports in a timely way. Second, to address its closeout problem, the region’s grants management office attempted an administrative change to expedite closeout—having a single grant specialist manage closeout. When this approach did not prove effective, the region returned to its practice of making the original grant specialists responsible for closing out grants. According to regional staff, the transition to and from this process exacerbated delays in grant closeouts. The original grant specialists had other grant work and waited until that work was completed before closing out the grants that were returned to them. Region 5 had the lowest percentage of grants closed out within 180 days for all its programs among EPA’s 10 regions (16 percent for fiscal year 2005, as shown in table 5). Region 9 had delayed closures for continuing nondiscretionary grants, in part, because of a practice, discontinued in November 2004, of routinely carrying over unspent funds from these grants. That is, the region would not close out a grant until it had awarded a new grant. The unspent funds from the old grant would then be processed as an amendment to the new grant, allowing grantees to keep those funds.
For example, one state nonpoint source grant was closed 278 days beyond the 180 days because the project officer had asked the grant specialist to carry over $426,000 in unspent funds to the following year’s grant. Overall, a combination of grantee lateness and internal inefficiencies contributed to late closeouts. For example:

- In Region 5, it took 795 days—615 days beyond the 180-day standard—to close out a 2-year wetland grant for $56,778. The grantee submitted the final financial status report 114 days late because a key grant contact had died. However, it took the region an additional 591 days after the grantee provided the final reports to close out the grant. According to the grant specialist, closeout was delayed, in part, because of internal administrative delays and because the grant was “lost” under a stack of other closeout files.

- In Region 1, closure of a nonpoint source grant that provided $796,532 over 10 years was delayed primarily because of a lack of documentation. According to the project officer who inherited the file from a retiring employee, the file had unusually poor documentation, with no assurance that the grant’s terms and conditions had been met. Moreover, the state employee who assumed responsibility for the grant could not locate all the reports detailing how the grant money had been used. Consequently, it took the project officer nearly 5 months beyond the allotted 180 days to review available information, ascertain that grant activities had been completed, and close out the grant.

According to some of the project officers and grant specialists with whom we spoke, the 180 days allowed for closeout in EPA’s policy is a reasonable amount of time. Moreover, some staff said that if more days were allowed, EPA might take longer. As noted in the closeout policy, as more time passes and the original grant specialists and project officers move on, it becomes more difficult to close out a grant.
Finally, one regional official pointed out that if the deadline for closeout were extended, then unexpended funds would go unused for longer periods of time, which would tie up funds that could have been used for other purposes. We note, however, that EPA still has a 1992 closeout policy that is not consistent with its current monitoring policy. Specifically, although both the 1992 closeout policy and the monitoring policy state that closeout should occur within 180 days after the end of the project period, the 1992 policy also states that closeout should occur within 180 days after receipt of all required reports and other deliverables. This aspect of the 1992 policy could be construed to mean that EPA has up to 270 days to close out grants, since grantees have up to 90 days to submit their reports. Office of Grants and Debarment officials stated that EPA has formed a work group to review its monitoring and closeout policies. As part of its review, the office plans to examine this inconsistency and the reasonableness of the 180-day closeout requirement. It expects to revise these policies in 2006. Adding to the agency’s closeout problems, 8 of the 34 closed grants we reviewed in the regions were not closed out properly. Specifically: Region 1 grant specialists had not adequately reviewed the indirect cost rate grantees submitted as part of their final financial status report, which, in turn, led to improper closeout in 5 of the 10 files we reviewed. Reviewing the files’ final financial report checklist, we found instances in which the question on the checklist that addresses indirect cost rates had been left blank or had been answered incorrectly. This problem occurred, in part, because the grant specialists did not adequately review the work of student interns who initially reviewed the financial status reports and completed the checklists. 
These “noncore” employees were used to help reduce the grant specialists’ workload and the grant specialists were expected to review their work before they signed off on the checklist. In Region 5, one grant specialist’s file was missing the final financial status report, which is a key report that describes how the grantee spent the grant funds and whether any unspent funds remain that need to be deobligated. In Region 9, Lobbying and Litigation Certification Forms—whose purpose is to ensure that federal dollars are not spent for lobbying or litigation activities—were missing from two grant files. After waiting some time, the grant specialist decided to close out the grants without the forms. The grant specialist manual states that grant specialists are responsible for notifying the grants management office if the grantee has not complied with this certification requirement. EPA’s guidance states that inadequate file documentation, among other things, (1) violates the file management requirement that all significant actions must be documented, (2) provides an incomplete historical record of a grant project, (3) prevents staff from substantiating facts if a dispute arises, and (4) creates the appearance of poor grant administration and oversight. Furthermore, the guidance specifically states that the file should include evidence of closeout, including the final report or product. In Region 1, we also identified an accountability concern when grants were closed out by administrative project officers. An administrative project officer for a Performance Partnership Grant had not always received written approval from the technical contacts, who evaluated grantee documents before the administrative project officer certified that the grantee had met all the terms and conditions of the grant. According to a regional official, technical contacts at times tell the project officer that they have reviewed technical documents but do not provide written approval. 
Although the administrative project officer certified that grantees met their programmatic obligations, the administrative project officer was “uncomfortable” doing so without written confirmation from the technical contacts that they had reviewed the final documents. As with monitoring, without effective supervisory review of the grant and project officer files, grants may be improperly closed out; more effective supervision would make proper closeout more likely. EPA has taken steps to obtain environmental results from its grants, but its efforts are not complete. First, EPA included a performance measure in its Grants Management Plan for identifying expected environmental results in grant workplans. In 2004, EPA was far from meeting its performance target. Although EPA does not yet have final data for 2005, EPA officials told us that their preliminary data indicate they are closer to meeting this performance target. Second, EPA issued an environmental results policy, effective in January 2005, that for the first time requires EPA staff to ensure that grants specify well-defined environmental outcomes. However, EPA’s current performance measure does not take into account the new criteria for identifying and measuring results from grants established by the policy. EPA acknowledges that it has not yet fully identified better ways to integrate the agency systems for reporting on the results of grants. While EPA has taken these positive steps, OMB’s evaluations of EPA grant programs in 2006 indicate that EPA must continue its concerted efforts to achieve results from its grants. The Grants Management Plan established a performance measure for identifying environmental outcomes from grants: the percent of grant workplans that discuss how grantees plan to measure and report on environmental outcomes. In 2004, EPA was far from meeting its performance target. 
Although EPA does not yet have final data for 2005, an EPA official told us that preliminary data indicate that the agency is closer to meeting this performance target of 80 percent for 2005. EPA also issued an environmental results policy in 2004, which was effective in January 2005, or about 2 years later than proposed in the Grants Management Plan. The policy is promising in that—for the first time—it requires EPA staff to ensure that grant workplans specify well-defined environmental outputs (activities) and environmental outcomes (results), which enables EPA to hold grantees accountable for achieving them. However, planning for grants to achieve environmental results, and measuring results, is a difficult, complex challenge. As we have reported, while it is important to measure the results of environmental activities rather than just the activities themselves, agencies face difficulties in doing this. Environmental outputs are inherently easier to develop and report on than environmental outcomes. The policy is also promising because, among other things, it (1) is binding on managers and staff throughout the agency; (2) emphasizes environmental results throughout the grant life cycle—awards, monitoring, and reporting; and (3) requires that grants be aligned with the agency’s strategic goals and linked to environmental results. To align grants with the agency’s strategic goals and link the grants to results, the policy requires for the first time that EPA program offices ensure that (1) each grant funding package includes a description of the EPA strategic goals and objectives the grant is intended to address and (2) the offices provide assurance that the grant workplan contains well-defined outputs and, to the “maximum extent practicable,” well-defined outcome measures. Outcomes may be environmental, behavioral, health-related, or programmatic in nature, and must be quantitative. 
EPA included the provision to “the maximum extent practicable” in the policy because it recognized that some types of grants do not directly result in environmental outcomes. For example, EPA might fund a research grant to improve the science of pollution control, but the grant would not directly result in an environmental or public health benefit. In June 2005, the EPA Inspector General found that the agency’s results policy was generally consistent with the practices of leading nongovernmental organizations that fund environmental projects and that emphasize grants performance measurements. EPA’s performance measure and the new results policy are positive steps, but the agency’s efforts to address results are not yet complete. Although EPA has issued an environmental results policy, its current performance measure does not take into account the new criteria for identifying and measuring results. EPA has identified the following seven criteria that grant agreements should meet and is using these seven criteria as the basis for assessing the implementation of the policy. That is, the agreements should (1) include a description of how the grant is linked to EPA’s Strategic Plan; (2) specify at least one EPA goal and its related objective that the project addresses; (3) identify the appropriate program results code—a code applied to new grant awards that aligns the grant with EPA’s strategic goals and objectives; (4) include an assurance that the program office has reviewed the workplan and that the workplan includes well-defined outputs and outcomes; (5) include a requirement for performance reports from recipients; (6) include well-defined outputs in the workplans; and (7) include well-defined outcomes in the workplans. According to an Office of Grants and Debarment official, the results policy calls for outcomes that are not only well defined but that also include quantitative measures. 
However, recognizing the difficulty and complexity posed by applying such measures, the official said, for the purposes of assessment, EPA modified its criteria to include well-defined outcomes with or without quantitative measures. Since EPA has adopted new criteria for assessing environmental results from grants based on its environmental results policy, its current performance measure—the percentage of grant workplans that discuss how grantees plan to measure and report on environmental outcomes—may not be sufficient to assess the implementation of the policy. EPA’s current performance measure does not take into account the new criteria for identifying and measuring results from grants established by the policy. Establishing a new performance measure and target to reflect the new policy would enhance EPA’s ability to assess the agency’s effectiveness in implementing the policy. In addition, EPA continues to face difficulties in ensuring that its grants are achieving public health and environmental results. Specifically, EPA acknowledges that it has not yet fully identified better ways to integrate the agency systems for reporting on the results of grants. EPA does not have a systematic way of collecting information about the results of its grants agencywide. As stated in the results policy, the Office of Grants and Debarment convened a workgroup to (1) examine existing EPA systems for collecting results from grant programs, (2) identify better ways to integrate these systems, and (3) potentially amend the policy to reflect its findings. The workgroup has begun an inventory of existing EPA systems. Until recently, EPA recognized—but had not addressed in its results policy—the known complexities of measuring environmental outcomes: (1) demonstrating outcomes when there is a long lag time before results become apparent and (2) linking program activities with environmental results because of multiple conditions that influence environmental results. 
In April 2006, the Office of Grants and Debarment provided an online training course for project officers on environmental results with guidance on how to address these measurement complexities. Furthermore, OMB has found that EPA has problems in demonstrating results from its grants. Using its Program Assessment Rating Tool (PART), OMB annually evaluates federal programs in four critical areas of performance: program purpose and design, planning, management, and results, each scored from 0 to 100. OMB combines these scores to create an overall rating: effective, moderately effective, adequate, and ineffective. In addition, programs that do not have acceptable performance measures or have not yet collected performance data generally receive a rating of “results not demonstrated.” As table 10 shows, the PART ratings have found that some of EPA’s programs are “ineffective” or “results not demonstrated,” although there has been some improvement from 2004 through 2006. Despite this progress, a closer examination of the ratings for 2006 indicated that, with one exception, the scores for the results component were lower than the scores given to other components. (See table 11). While EPA has taken positive steps, OMB’s 2006 assessment indicates that EPA must continue its concerted efforts to achieve results from its grants. EPA has taken steps to manage grants staff and resources more effectively in four key areas: (1) analyzing workload; (2) providing training on grant policies; (3) assessing the reliability of the agency’s grants management computer database—the Integrated Grants Management System; and (4) holding managers and staff accountable for successfully fulfilling their grant responsibilities. Because much remains to be accomplished, management attention to these issues is still needed. 
As we reported in 2003 and found again in this review, regional grants managers and staff are concerned that staff do not have sufficient time to devote to effective grants management. They pointed out that the new policies increased the time needed to implement each step of the grants process, such as the more planned, rigorous approach now required for competing grants. However, one regional official pointed out that this increased workload has not been offset with an increase in resources or the elimination of other activities. Fulfilling an objective identified in the Grants Management Plan, an EPA contractor completed a workload analysis of project officers and grant specialists in April 2005. The analysis showed that EPA had an overall shortage of project officers and grant specialists, expressed in full-time equivalents. However, the contractor recommended that before EPA adds staff, it take steps to improve the effectiveness and efficiency of its grants management operations. For example, the contractor recommended that EPA review its grant activities and assign “noncore” activities where possible to auxiliary federal or nonfederal staff to improve operations, freeing EPA staff to conduct their core work. It defined noncore activities as typically including those related to grant closeouts. The Office of Grants and Debarment asked the grant offices to prepare project officer workforce plans—due in 2006—that incorporate the workload analysis to promote “accountable” grants management. As outlined in the Grants Management Plan, EPA has developed a long-term grants management training plan. Under the plan, EPA continues to certify project officers for grant activities by requiring them to take a 3-day project officer course before they are allowed to conduct grant management activities, and thereafter take a refresher course to maintain their certification. To address the grant reforms, the agency provided additional training. 
For example, EPA held a grants management conference in 2004, attended by 465 EPA staff, which included workshops on new policies. In 2005, the Office of Grants and Debarment conducted agencywide training on the new competition policy. It also conducted training on the environmental results policy. However, according to EPA staff, the amount of training has not been sufficient to keep pace with the issuance of new grant policies. For example:

A 2006 self-assessment conducted by one program office found that project officers and managers expressed frustration that both the pace and complexity of new policy requirements left project officers vulnerable because they were not properly trained in the policies.

A 2005 Region 9 self-assessment found that the region’s project officers did not believe that they had received sufficient guidance from their programs in headquarters.

A Region 1 official stated that the rapid pace of new policies and brief lead time between issuance and the effective date made it too difficult for the regions to adequately train staff on all the new policies related to grants management. Nevertheless, Region 1 developed a training course for its project officers on the award process to address new grant reform policies issued in 2005. However, only about 25 of the region’s 200 project officers attended the optional 90-minute course, although there were three opportunities to do so.

Regional officials also noted that the grant reforms are changing the skill mix required of both project officers and grant specialists. According to a Region 5 official, the grant specialist was once a clerical position, but additional responsibilities required under the new grants policies indicate that a business degree or financial background would be helpful. Region 9 officials told us that traditionally project officers had technical and scientific skills. 
However, the grant reforms had increased the need for interaction with grantees, which required more skills in oral communications, organization, and analysis. An Office of Grants and Debarment official explained the agency is weighing what should be considered as the right skill mix for agency staff involved in grant activities. In 1997, EPA began developing the Integrated Grants Management System to better manage its grants, and EPA now also uses this database to inform the public and the Congress about its $4 billion investment in grants. Data quality problems in this database could impair the agency’s ability to effectively manage grants and provide accurate information. In 2005, we recommended that EPA conduct a comprehensive data quality review of its Integrated Grants Management System. EPA undertook a review, which it expects to be completed in 2006. EPA’s Grants Management Plan included an objective of establishing clear lines of accountability for grants management, including performance standards that address grants management responsibilities for project officers. As we reported in 2003, project officers did not have uniform performance standards; instead, each supervisor set standards for each project officer, and these standards may or may not have included grants management responsibilities. Later in 2003, EPA’s Assistant Administrator for the Office of Administration and Resources Management asked all senior resource officials to review the current performance standards of all employees below the senior executive service who had grants management responsibilities. This review was to ensure that the complexity and extent of these employees’ grants management duties were reflected in their performance standards and position descriptions. The Assistant Administrator asked senior resource officials to ensure that such standards were in place. 
The Office of Grants and Debarment is assessing the extent to which the guidance was implemented; the assessment is to be completed in May 2006. As we reported in 2003, the Office of Grants and Debarment faces some difficulties in holding managers and staff accountable for effective grants management. The office does not directly oversee many of the managers and staff who perform grants management duties, particularly the approximately 2,100 project officers in headquarters and regional program offices. This division of responsibilities makes it more difficult to hold these staff accountable for grants management. In 2005, EPA’s Inspector General reported that EPA was not holding supervisors and project officers accountable for grants management. Specifically:

EPA does not have a process to measure an individual project officer’s performance in carrying out grants management duties. In practice, supervisors relied on project officers to inform them of grants management weaknesses.

EPA managers and supervisors are not discussing project officer grants management responsibilities during end-of-year evaluations. Managers were not discussing project officers’ grants management responsibilities during year-end evaluations; and, if grant issues were addressed, the discussion focused on the grant recipient’s performance, rather than on the project officer’s performance. Supervisors provided various reasons for rating project officers without discussing grants management responsibilities, stating, for example, that the year-end evaluation should focus on problems or issues with grantee performance, and project officers’ responsibilities should be discussed at staff meetings or at other times through the year.

EPA managers had not conveyed weaknesses from the agency’s internal reviews and self-assessments to project officers. 
EPA managers did not communicate weaknesses identified in internal reviews, such as a lack of documentation of cost reviews and ongoing monitoring, and supervisors were not aware of these identified weaknesses. Our review is consistent with the Inspector General’s findings. As previously discussed, EPA grants staff told us that their supervisors were not reviewing their grant files to determine compliance with grant monitoring policies. It is possible that the awarding, monitoring, and closeout problems we found would have been mitigated by effective supervisory review. In response to the Inspector General’s concerns, EPA issued a plan in January 2006 to ensure that the agency’s new performance appraisal system—Performance Appraisal and Recognition System—addresses grants management responsibilities. The new system requires that (1) appraisals of project officers and supervisors/managers include a discussion of grants management performance; (2) performance agreements and associated mid-year and end-of-year performance discussions focus on key areas of preaward reviews of nonprofit grantees, competition, post-award monitoring, and environmental results; and (3) performance discussions take into account the results of internal reviews, such as those conducted by the Office of Grants and Debarment, self-assessments, and performance measure reviews. For the 2007 performance appraisal process, EPA plans to establish a workgroup to develop final performance measures to assess the grants management performance of project officers and supervisors and plans to incorporate these measures into 2007 performance agreements. More broadly, to address the growing demands of the grant reforms and enhance accountability, the Office of Grants and Debarment formed a senior-level grants management council that cuts across the organization by including representatives from program offices, such as the Office of Water, and regional offices. 
The council is to help develop and implement new policies agencywide. Similarly, the regional offices we visited have formed grants management councils to coordinate and implement grant reforms with the region’s grants management office and various program offices. Despite the efforts of these various national- and regional-level offices, managers, and councils to identify problems and undertake corrective actions, some grants management problems still persist. For example, although some of the regions we visited had implemented checklists as internal controls to ensure the documentation of ongoing monitoring, the regions did not ensure they were actually completed. Closeout problems that were identified by EPA’s current performance measure have not been effectively addressed. About 3 years into its Grants Management Plan, 2003-2008, EPA has made important strides in achieving its grant reforms, particularly in competing a higher percentage of grants and trying to identify results from its grants. However, EPA has not resolved its long-standing problems in documenting ongoing monitoring and closing out grants. As it revises its management plan, EPA has an opportunity to tackle these continuing problems. Without adequate documentation of ongoing monitoring, EPA cannot be fully assured that grantees are on track to fulfilling the terms and conditions of their grants. Furthermore, the agency’s lack of documentation indicates weaknesses at all levels: staff do not always document their monitoring; supervisors do not always effectively review grant files; and managers are not always meeting their commitments to address known problems with lack of documentation. Despite the importance of ongoing monitoring, EPA has not created a performance measure and target for documenting monitoring, which should elevate the importance of ongoing monitoring to the agency. 
EPA is also not taking full advantage of its grants management database, which has a data field for documenting ongoing monitoring. That is, EPA does not require project officers and grant specialists to enter monitoring documentation information into its database. With the information in the database, EPA would be better able to determine that staff are meeting documentation requirements. Use of performance measures and targets for ongoing monitoring as well as the database could enhance accountability. EPA’s current performance measure and target for closing out grants is a valuable tool for determining if grants were ultimately closed out, but it is not a tool for determining whether grants were closed out within the 180 days now specified in EPA’s monitoring policy. EPA needs an accompanying measure that accurately reports whether grants are closed out within the 180 days or other standard EPA may establish. EPA’s current measure and target mask the fact that, agencywide, almost two-thirds of grants are not closed out within 180 days. A specific performance measure and target will enable EPA to oversee and manage the timeliness of grant closeout. We recognize that grantees’ late submission of required reports is a common problem that contributes to the lack of timeliness in closing out grants. Because fixing this problem on a case-by-case basis is difficult, the agency needs an overarching strategy to address it. Furthermore, in the three regions we visited, we found instances in which appropriate documentation was missing from the closeout files. While we do not know the extent of this problem agencywide, these problems could indicate a major weakness that EPA may need to address. In responding to a draft of this report, EPA acknowledged that it does have a problem in closing out grants properly. Finally, we note that inconsistencies in EPA’s monitoring and closeout policies may hinder EPA’s ability to close out grants in a timely fashion. 
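A timeliness measure of the kind described here reduces to a simple proportion; a minimal sketch with hypothetical closeout durations (Python used purely for illustration):

```python
# Hypothetical closeout durations, in days, for a set of closed grants.
closeout_days = [120, 795, 458, 170, 300, 90]

STANDARD = 180  # days allowed for closeout under EPA's monitoring policy
timely = sum(1 for d in closeout_days if d <= STANDARD)
pct_timely = 100 * timely / len(closeout_days)
print(f"{pct_timely:.0f}% of grants closed within {STANDARD} days")  # 50%
```

The current measure would count all six grants as eventually closed; the proposed measure would report that only half met the 180-day standard.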
While EPA has made strides in trying to identify and obtain results from its grants by issuing an environmental results policy, it has not yet established a performance measure and target that reflect the policy’s direction. Finally, the lack of effective supervision may have contributed to the problems we identified. EPA issued a plan in January 2006 to ensure that the agency’s new performance system addresses grants management responsibilities. It is too early to tell whether this plan will effectively hold managers and staff accountable for grants management. As EPA revises its Grants Management Plan, the agency has an opportunity to strengthen the management of its grants. We recommend that the Administrator of EPA direct the Office of Grants and Debarment to take action in the following three areas:

Ongoing monitoring. Develop a performance measure and a performance target for ongoing monitoring. Consider requiring project officers and grant specialists to document ongoing monitoring in the agency’s grants database so that managers can monitor compliance agencywide.

Grant closeout. Establish a standard for the timely closeout of grants and ensure that EPA’s monitoring and other policies are consistent with that standard. Develop a performance measure and target for the grant closeout standard. Develop a strategy for addressing grantees’ late submission of required final documentation. Issue revised policies and procedures to ensure proper closeout of grants.

Environmental results. Develop a performance measure and target that better reflects the new environmental results policy.

We provided a draft of this report to EPA for review and comment. We received oral comments from EPA officials, including the Director of the Office of Grants and Debarment. 
Overall, EPA officials agreed with our recommendations, and they stated that the agency has begun to take steps to implement them and will incorporate them into the agency’s Grants Management Plan, policies, and procedures. In addition, EPA officials provided some clarifying language for our recommendations, which we incorporated as appropriate. Furthermore, EPA officials acknowledged that proper closeout of grants is an agencywide problem that needs to be addressed. Based on this acknowledgement, we strengthened our recommendation to state that EPA needs to issue revised policies and procedures to better ensure the proper closeout of grants, rather than determine the extent of improper closeouts at the agency. EPA agreed. Finally, EPA provided additional information about the agency’s efforts to address the complexities of measuring environmental results and other clarifying comments, which we incorporated into this report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 6 days from the report date. At that time, we will send copies of this report to the congressional committees with jurisdiction over EPA and its activities; the Administrator, EPA; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix II. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. This appendix details the methods we used to assess the progress the Environmental Protection Agency (EPA) has made in implementing its grant reforms. To assess EPA’s progress, we reviewed information from both headquarters and the regions. 
At headquarters, we reviewed EPA’s Grants Management Plan, 2003-2008 and EPA policies that address awarding, monitoring, and obtaining results from grants. We also reviewed reports on EPA’s grants management, including prior GAO reports; EPA’s Inspector General reports; the Office of Management and Budget’s Program Assessment Rating Tool; EPA’s internal management reviews, including comprehensive grants management reviews, post-award monitoring plans, grants management self-assessments from 2003 to 2005; an April 2005 workload analysis conducted by LMI, a government consultant; and a closeout analysis prepared for GAO by the Office of Grants and Debarment. We also interviewed officials in the Office of Grants and Debarment and at the Office of Water. In addition, we reviewed EPA performance metric information. These metrics are based on data from the agency’s Integrated Grants Management System, which is currently undergoing a data quality review, and the Grant Information and Control System, which has not undergone a data quality review. Given EPA’s ongoing data quality review of the Integrated Grants Management System—and because we present EPA’s performance metric data as documentary evidence and do not use it as the sole support for findings, conclusions, or recommendations—we did a limited reliability review of the two systems. Our assessment included (1) information from GAO’s prior data reliability assessment work on the two systems and (2) interviews with an Office of Grants and Debarment official about the data systems and data elements. We determined that the performance information we used is sufficiently reliable for our purposes. To assess the progress and problems EPA has experienced from a regional perspective, we selected EPA Office of Water programs under the Clean Water Act, at the Subcommittee’s request. 
We selected Wetland Program Development Grants (wetland grants) because it is a discretionary grant program—that is, EPA decides who receives the award and its amount, and the program is subject to competition. We selected Nonpoint Source Management Program grants (nonpoint source grants) because it is a type of formula-based grant program—grants that are often awarded on the basis of formulas prescribed by law or agency regulation. We reviewed EPA’s progress at the regional level by selecting grants in 3 of EPA’s 10 regional offices: Region 1 (Boston), Region 5 (Chicago), and Region 9 (San Francisco). We selected these regions because, collectively, they represent a significant share of grant funding for the two programs we reviewed, provide geographic dispersion, and account for a significant share of Performance Partnership Grants among the regional offices. To ensure coverage of the grant life cycle—from awarding to closing out grants—we conducted a case study review of a nonprobability sample of these two programs. Specifically, we asked EPA Regions 1, 5, and 9 to provide lists of wetland and nonpoint source grants awarded between January 1, 2004, and June 30, 2005, and those grants officially closed during this period. We targeted recently awarded and recently closed grants because the grant reforms began in 2002. To complete the case study, we reviewed two files per grant—the project officer file and the grant specialist file—using a detailed data collection instrument. All data entered in the data collection instrument were verified by a second party to ensure the accuracy and validity of each entry. Additionally, we conducted semistructured interviews with project officers and the grant specialists in order to understand the files. Overall, we reviewed the files for 32 active grants and 34 closed grants, and we interviewed administrative project officers, project officers, technical contacts, and grant specialists about those files. 
We also interviewed senior resource officials and grants management office managers in the three regions we visited. We were limited in the number of grants we could review because the case study approach required multiple, detailed file reviews and interviews for each grant. Consequently, we selected a nonprobability sample of active and closed grants in the wetland and nonpoint source programs to review in each of the three regions. For active grants in our nonprobability sample, we sorted the grants by recipient type, project officer, and grant specialist to provide a distribution, and then randomly selected grants for review. For closed grants, we sorted the grants similarly, but we also selected grants based on the length of time it took to close out the grant. Because the case study design is nonprobabilistic, the findings are not generalizable to all grants in all regions. However, the case study design provides insights into regional grant activities for the two Clean Water Act programs in three regions, and it offers an in-depth perspective on some of the successes and continuing problems EPA faces in implementing its grants management reforms. Table 12 shows the population of wetland and nonpoint program grants in Regions 1, 5, and 9, and the number of those grants we reviewed. We experienced some limitations in conducting our review. For example, we were not able to assess the implementation of some EPA policies at the regional level because they had been issued too recently to assess during our file review time frame. We also found evidence in two of the three regions we visited that staff had added materials to their files after we had requested the files and before our review, despite the fact that we had taken precautions to avoid this situation. That is, we had asked the regions to inform staff not to add documents to files once they were requested, and we limited the time frame between our request for specific files and our review of the files. 
When we determined that these additions had occurred, we took mitigating steps to “restore” the grant files to their original state. Specifically, we checked the dates of documents to detect any widespread updating of files, asked all project officers and grant specialists we interviewed who were assigned to the grants in our sample whether they added anything to the file in preparation for the GAO visit, asked managers to tell their staff to point out materials added to the file, and, in one region, shortened the time between file request and our visit. To adjust for the alterations, we used a special code in our data collection instrument to denote “additions,” and later subtracted the information in our analysis. The file alteration mitigation strategies and the analysis adjustments afford us confidence in the accuracy and validity of our file review results. We conducted our work between February 2005 and April 2006, in accordance with generally accepted government auditing standards. In addition to the contact named above, Andrea Wamstad Brown, Assistant Director; Bruce Skud, Analyst-in-charge; Rebecca Shea; Lisa Vojta; Carol Herrnstadt Shulman; Omari Norman; David Bobruff; Matthew J. Saradjian; and Jessica Nierenberg made key contributions to this report. Grants Management: EPA Needs to Strengthen Efforts to Provide the Public with Complete and Accurate Information on Grant Opportunities. GAO-05-149R. Washington, D.C.: February 3, 2005. Grants Management: EPA Continues to Have Problems Linking Grants to Environmental Results. GAO-04-983T. Washington, D.C.: July 20, 2004. Grants Management: EPA Needs to Better Document Its Decisions for Choosing between Grants and Contracts. GAO-04-459. Washington, D.C.: March 31, 2004. Grants Management: EPA Needs to Strengthen Efforts to Address Management Challenges. GAO-04-510T. Washington, D.C.: March 3, 2004. Grants Management: EPA Actions Taken against Nonprofit Grant Recipients in 2002. GAO-04-383R. 
Washington, D.C.: January 30, 2004. Grants Management: EPA Needs to Strengthen Oversight and Enhance Accountability to Address Persistent Challenges. GAO-04-122T. Washington, D.C.: October 1, 2003. Grants Management: EPA Needs to Strengthen Efforts to Address Persistent Challenges. GAO-03-846. Washington, D.C.: August 29, 2003. Environmental Protection Agency: Problems Persist in Effectively Managing Grants. GAO-03-628T. Washington, D.C.: June 11, 2003. Federal Assistance: Grant System Continues to Be Highly Fragmented. GAO-03-718T. Washington, D.C.: April 29, 2003. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Major Management Challenges and Risks: Environmental Protection Agency. GAO-03-112. Washington, D.C.: January 2003. Results-Oriented Cultures: Using Balanced Expectations to Manage Senior Executive Performance. GAO-02-966. Washington, D.C.: September 27, 2002. Environmental Protection: Grants Awarded for Continuing Environmental Programs and Projects. GAO-01-860R. Washington, D.C.: June 29, 2001. Environmental Protection: EPA’s Oversight of Nonprofit Grantees’ Costs Is Limited. GAO-01-366. Washington, D.C.: April 6, 2001. Environmental Protection: Information on EPA Project Grants and Use of Waiver Authority. GAO-01-359. Washington, D.C.: March 9, 2001. Environmental Research: STAR Grants Focus on Agency Priorities, but Management Enhancements Are Possible. GAO/RCED-00-170. Washington, D.C.: September 11, 2000. Environmental Protection: Grants for International Activities and Smart Growth. GAO/RCED-00-145R. Washington, D.C.: May 31, 2000. Environmental Protection: Collaborative EPA-State Effort Needed to Improve Performance Partnership System. GAO/T-RCED-00-163. Washington, D.C.: May 2, 2000. Managing for Results: EPA Faces Challenges in Developing Results-Oriented Performance Goals and Measures. GAO/RCED-00-77. Washington, D.C.: April 28, 2000. 
Environmental Protection: Factors Contributing to Lengthy Award Times for EPA Grants. GAO/RCED-99-204. Washington, D.C.: July 14, 1999. Environmental Protection: Collaborative EPA-State Effort Needed to Improve New Performance Partnership System. GAO/RCED-99-171. Washington, D.C.: June 21, 1999. Environmental Protection: EPA’s Progress in Closing Completed Grants and Contracts. GAO/RCED-99-27. Washington, D.C.: November 20, 1998. Dollar Amounts of EPA’s Grants and Agreements. GAO/RCED-96-178R. Washington, D.C.: May 29, 1996.

The Environmental Protection Agency (EPA) has faced challenges for many years in managing its grants, which constitute over one-half of the agency's budget, or about $4 billion annually. EPA awards grants through 93 programs to such recipients as state and local governments, tribes, universities, and nonprofit organizations. In response to concerns about its ability to manage grants effectively, EPA issued its 5-year Grants Management Plan in 2003, with performance measures and targets. GAO was asked to assess EPA's progress in implementing its grant reforms in four key areas: (1) awarding grants, (2) monitoring grantees, (3) obtaining results from grants, and (4) managing grant staff and resources. To conduct this work, GAO, among other things, examined the implementation of the reforms at the regional level for two Clean Water Act programs in 3 of EPA's 10 regional offices. EPA has made important strides in achieving the grant reforms laid out in its 2003 Grants Management Plan, but weaknesses in implementation and accountability continue to hamper effective grants management in four areas. First, EPA has strengthened its award process by, among other things, (1) expanding the use of competition to select the most qualified applicants and (2) issuing new policies and guidance to improve the awarding of grants. 
Despite this progress, EPA's reviews found that staff do not always fully document their assessments of grantees' cost proposals; GAO also identified this problem in one region. Lack of documentation may hinder EPA's ability to be accountable for the reasonableness of the grantee's proposed costs. EPA is reexamining its cost review policy to address this problem. Second, EPA has made progress in reviewing its in-depth monitoring results to identify systemic problems, but long-standing issues remain in documenting ongoing monitoring and closing out grants. EPA and GAO found that staff do not always document ongoing monitoring, which is critical for determining if a grantee is on track in meeting its agreement. Without documentation, questions arise about the adequacy of EPA's monitoring of grantee performance. This lack of documentation occurred, in part, because managers have not fulfilled their commitment to improve monitoring documentation. In addition, grant closeouts are needed to ensure that grantees have met all financial requirements, provided their final reports, and returned any unexpended balances. For fiscal year 2005, EPA closed out only 37 percent of grants within 180 days after the grant project ended, as required by its policy. EPA also did not always close out grants properly in the regional files GAO reviewed. Third, EPA has initiated actions to obtain environmental results from its grants, but these efforts are not complete. For example, EPA's 2005 environmental results policy establishes criteria grants should meet to obtain results. However, EPA has not established a performance measure that addresses these criteria. Furthermore, EPA has not yet identified better ways to integrate its grant reporting systems. Finally, the Office of Management and Budget's 2006 assessment indicates that EPA needs to continue its concerted efforts to achieve results from grants. 
Finally, EPA has taken steps to manage grant staff and resources more effectively by analyzing workload, providing training, assessing the reliability of its grants management computer database, and holding managers and staff accountable for successfully fulfilling their grant responsibilities. Management attention is still needed because, among other things, EPA has just begun to implement its performance appraisal system for holding managers and staff accountable for grants management.
According to a 2014 United Nations Environment Programme report, the illegal trade in wildlife has been estimated by different sources to be worth between $7 billion and $23 billion annually. The report also indicates that poached African elephant ivory, just one of many wildlife products, may represent an end-user street value in Asia of an estimated $165 million to $188 million per year. According to a 2012 joint report by the World Wildlife Fund, a conservation organization, and Dalberg, a strategic consulting firm, the price of rhino horn had reached approximately $27,000 per pound—which, at that time, was twice the value of gold and platinum and more valuable on the black market than diamonds and cocaine. Wildlife trafficking threatens iconic species, including elephants and rhinos in Africa. According to the U.S. Fish and Wildlife Service (FWS), before 1900, black rhinos lived throughout most of sub-Saharan Africa, but from 1970 to 1992, rhino populations declined 96 percent. More recently, from 2007 to 2015 in South Africa, poachers killed 5,061 rhinos, according to the government of South Africa (see fig. 1). Currently, an estimated 25,000 rhinos remain on the continent, according to FWS. According to the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), the number of elephants killed each year has reached levels deemed unsustainable. The illicit ivory trade has grown by more than three times since 1998, and elephants are being killed faster than they can reproduce. From 2002 to 2011, the total population of forest elephants decreased by an estimated 62 percent across central Africa. According to FWS, from 2010 to 2012, an estimated 100,000 elephants were killed for their ivory, an average of approximately 1 every 15 minutes. The agency also reported that poaching continues at an alarming rate and is at its highest level in decades. 
Specifically, it reported that the current rate of decline is unsustainable and puts the African elephant at risk of extinction. Elephants are under threat even in areas that were once thought to be safe havens. During our fieldwork in Africa, we observed an elephant that had been shot and died in a protected area of northern Kenya, illustrating the nature of the challenge. According to the Strategy, while the United States is among the world’s major end markets for wildlife trafficking in general, Africa has become one of the largest sources of animal and plant species to supply criminal networks trading to Asia. As one nongovernmental organization (NGO) reported, wildlife products illegally leave the African continent by air or by sea through increasingly sophisticated routes and concealment methods. In Asia, increased demand for ivory and rhino horn stems from a rapidly expanding wealthy class that views these commodities as luxury goods that enhance social status, as reported in the Strategy. The flow of ivory illustrates the Africa-Asia nexus. According to a report by an NGO that works with U.S. government agencies on CWT, the primary axis for the illicit ivory trade is from Africa to East Asia, through the international container shipping system. The majority of shipments exit Kenya and Tanzania bound for China, Thailand, and Vietnam. Significant ivory seizures occur in Malaysia and Singapore due to their role as transshipment hubs. A 2016 United Nations Office on Drugs and Crime (UNODC) report indicates that over 70 percent of the ivory seized from 2009 to 2013 was found in large shipments of raw ivory. Figure 3 shows sources and destinations of ivory seizures, based on seizure data from the UNODC, with arrows representing repeated indications of a source-destination pairing. In 2013, President Obama issued an executive order that established the Task Force and charged it with developing a strategy to guide U.S. efforts on CWT. 
Figure 4 outlines selected U.S. government CWT actions from 2013 to 2016. In February 2014, the White House released the Strategy, which lays out guiding principles and strategic priorities for U.S. efforts to stem illegal trade in wildlife. In February 2015, the Task Force released the Implementation Plan for the Strategy, which identifies a range of objectives and agency roles. For each objective, the Implementation Plan designates one or more lead and participating agencies, departments, or offices. One objective, for example, is to achieve a near-total U.S. ban on trade in elephant ivory and rhino horn. The lead agency designated for this objective is DOI, working through FWS; the participating agencies are DOJ, the Department of Commerce through its National Oceanic and Atmospheric Administration, the Department of Homeland Security (DHS), and the Office of the U.S. Trade Representative. The Implementation Plan identifies a total of 24 objectives categorized under three strategic priorities: Strengthen Enforcement; Reduce Demand for Illegally Traded Wildlife; and Build International Cooperation, Commitment, and Public-Private Partnerships. In June 2015, FWS, in coordination with wildlife and conservation partners from government, NGOs, and the private sector, hosted its second major ivory crush event to educate consumers and to send a message to ivory traffickers that the United States will not tolerate this illegal trade. One ton of ivory seized during an FWS undercover operation, plus other ivory from the New York State Department of Environmental Conservation and the Association of Zoos and Aquariums, was crushed in Times Square, New York City. In June 2016, FWS finalized a rule that, according to FWS, established a near-total ban on the domestic commercial trade of African elephant ivory. The rule prohibits the import and export of African elephant ivory, with limited exceptions. 
In March 2016, the Task Force issued an Annual Progress Assessment (APA), which describes accomplishments related to the Implementation Plan. In addition, Congress is considering multiple CWT legislative proposals. For example, in November 2015 and September 2016, the House and Senate, respectively, passed legislation which aims, among other things, to support global antipoaching efforts, strengthen the capacity of partner countries to counter wildlife trafficking, and designate major wildlife-trafficking countries. From fiscal year 2014 to 2016, Congress directed that not less than $180 million be made available to combat wildlife trafficking (see fig. 5). While annual appropriations acts directed that a minimum amount be made available to combat the transnational threat of wildlife poaching and trafficking in each fiscal year from 2014 to 2016, determining how much agencies have obligated to CWT efforts is challenging. According to agency officials, this is due in part to the inherently interdisciplinary nature of CWT, which involves development conservation, domestic conservation, local law enforcement, combating transnational crime, and demand reduction. According to USAID, extrapolating the CWT component of obligations is not possible in accounting terms, but programs that include CWT goals and funding are managed to achieve intended CWT objectives. However, some agencies have attempted to make informed estimates of CWT funding, based on a specific USAID-State definition, and have identified CWT as a key issue in their budget justifications. Agency officials also told us that they use different methodologies to identify CWT funding because CWT activities often are part of programs that have multiple goals, and funding stems from different authorizations. 
While criminal elements of all kinds, including some terrorist entities and rogue security personnel, are involved in poaching and transporting ivory and rhino horn across Africa, transnational organized criminals and networks are the driving force behind wildlife trafficking, according to reports we reviewed and agency officials we spoke with in the United States and Africa. A 2016 UNODC report states that wildlife trafficking is increasingly recognized as a specialized area for transnational organized criminals and a significant threat to many animal species. A report by an NGO that works with U.S. government agencies on CWT analyzed the flow of ivory and found that a relatively narrow logistics and distribution chain suggests collusion of transnational criminal organizations. In addition, a representative of the U.S. Intelligence Community and other agency officials in Washington, D.C., indicated that organized criminal groups that have the scale and sophistication to conduct illicit trade internationally are the main actors responsible for moving large volumes of wildlife products across the world. Agency officials we spoke with in Africa also told us that because such criminal organizations have global links and the desire to earn money by any means, they play the major role in wildlife trafficking. As of July 2016, State’s Transnational Organized Crime Rewards Program, which authorizes rewards for certain information regarding members of significant transnational criminal organizations, identified the Xaysavang Network as an international wildlife-trafficking syndicate that facilitates the killing of elephants, rhinos, and other protected species for products such as ivory and rhino horn. Agency officials we spoke with expressed varying perspectives on the degree of terrorist group involvement in wildlife trafficking. Reporting from NGOs is also mixed. 
Agency officials told us that the differences in views may be due to a range of reasons, including a lack of a common definition for and usage of the term “terrorist group,” lack of reliable evidence, and the tendency for different types of criminal activities to blend together. State applies a specific definition to designate Foreign Terrorist Organizations (FTOs), criteria for which include that the organization must be a foreign organization, must engage in terrorist activity or retain the capability and intent to engage in terrorist activity or terrorism, and must threaten the security of U.S. nationals or the national security of the United States. State has designated the following, among others, as FTOs: al Qaeda, al Qaeda in the Islamic Maghreb, al-Shabaab, and Boko Haram. A senior State official publicly testified that al-Shabaab is directly or indirectly (through taxation of goods moving through areas they control) involved in wildlife trafficking. NGOs also reported that al-Shabaab has been actively buying and selling ivory as a means of funding its militant operations. However, another report from an NGO found flaws with the al-Shabaab–ivory nexus. In addition, some State and other agency officials we spoke with suggested that evidence linking FTOs to wildlife trafficking is generally inconclusive, due in part to lack of specific, reliable evidence. According to State, Janjaweed and the Lord’s Resistance Army are not FTOs, but agency officials told us these organizations are commonly referred to as terrorist groups. The Lord’s Resistance Army, an armed group that operates in several countries of central Africa, uses proceeds from elephant poaching to support its illicit activities, according to agency officials. Various reports from U.S. agencies and NGOs also implicate Janjaweed, a group of Sudanese Arab militias, as active in wildlife trafficking. 
The 2016 UNODC report suggests that, in general, it is difficult to see how African terrorist groups are making large sums of money by poaching elephants for ivory in areas they control. Agency officials told us that terrorist groups would engage in wildlife trafficking if it presented a practical opportunity to generate revenue, and they said that some are so engaged, but exactly where and to what extent is difficult to determine. Moreover, activities of criminal elements blend together, a condition referred to as “convergence” in law enforcement. Criminal organizations can exploit the same weaknesses—corrupt institutions, porous borders, unstable regions—to overlap and blur linkages to their illicit undertakings, including wildlife trafficking. This, combined with the fact that illegal activity by nature is clandestine, makes it difficult to determine the extent to which terrorist groups are involved in wildlife trafficking, according to agency officials. Wildlife trafficking contributes to instability and violence, with corruption playing a major role, according to reports we reviewed and agency officials we spoke with in the United States and Africa. A 2013 report from the Office of the Director of National Intelligence (ODNI) found that systemic corruption enables the illicit ivory and rhino horn trade and that the trade exacerbates corruption by making high-value illegal products available to influential individuals along the supply chain, from rangers to customs officers, police, and the military. By inducing widespread movement of armed poachers and traffickers, ivory and rhino horn trade also exacerbates border insecurity, particularly across porous borders, according to the ODNI report. As an example of this kind of condition, U.S. 
agency and South African officials in Kruger National Park, a key rhino-poaching ground, told us that poachers from Mozambique cross the South African border to hunt for wildlife and that deadly gun battles occur on an ongoing basis. In addition, according to international organization and NGO reports, an estimated 1,000 rangers were killed worldwide over the decade from 2004 to 2014, which on average means that 1 ranger died every 4 days during that period. On our visit to northern Kenya, we met with an antipoaching ranger patrol unit (see fig. 6). The commander told us that a team member was recently killed in the line of duty, demonstrating the risk that rangers face on the job every day. In addition, a high-level official in South Africa told us that a shoot-out involving poachers recently occurred in downtown Pretoria, the capital, indicating that wildlife trafficking-related violence can affect urban areas as well as remote parks. Wildlife trafficking also has adverse national and local-level economic impacts. The 2013 ODNI report found that the illicit trade in ivory and rhino horn arguably weakens macroeconomic and fiscal stability, deters investment, contributes to income inequality, and hinders growth at all levels of an economy. Tourism revenues are particularly threatened by unchecked poaching, according to the report. Agency officials in Africa told us that wildlife tourism provides a significant source of income. Local communities in particular suffer when poaching occurs, because, among other things, it reduces already limited economic opportunities. For example, some of the villagers we spoke with in northern Kenya told us they had at one time been poachers but then became antipoaching rangers, because they saw first-hand that poaching produced a range of adverse impacts on their communities, reducing revenue from tourism while upsetting the delicate ecosystem balance and risking violent conflict with authorities or other poachers. 
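The per-interval averages cited in this report (approximately 1 elephant every 15 minutes, and 1 ranger every 4 days) follow directly from the reported totals and time spans. A minimal arithmetic sketch, assuming simple division of the time span by the reported totals (the figures come from the report; the calculation below is our own illustration):

```python
# Sanity check of the average-rate figures cited in the report.
# Reported totals: ~100,000 elephants killed from 2010 to 2012 (3 years),
# ~1,000 rangers killed over the decade from 2004 to 2014 (10 years).

MINUTES_PER_YEAR = 365.25 * 24 * 60  # includes leap-year fraction

# Minutes elapsed per elephant killed over the 3-year period
minutes_per_elephant = (3 * MINUTES_PER_YEAR) / 100_000

# Days elapsed per ranger killed over the 10-year period
days_per_ranger = (10 * 365.25) / 1_000

print(round(minutes_per_elephant, 1))  # ~15.8, i.e., about 1 every 15 minutes
print(round(days_per_ranger, 1))       # ~3.7, i.e., about 1 every 4 days
```

Both results are consistent with the rounded rates stated in the report text.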
The Task Force is co-chaired by DOI, DOJ, and State, and the Implementation Plan designates approximately 16 agencies, departments, or offices to play a role in taking action to combat wildlife trafficking. For the purposes of this report, we focused on the co-chair agencies and USAID, which is one of the agencies most heavily involved in CWT. Within DOI, FWS is responsible for conservation and management of biological resources, and it acts as the implementing organization for CWT efforts. State, responsible for managing U.S. foreign affairs, contributes to CWT-related diplomacy and law enforcement capacity-building activities. DOJ’s role in CWT involves prosecuting criminals and conducting judicial and prosecutorial training with partner nations. Under its development mission, USAID works with local communities and at the national level to conserve wildlife in Africa and around the world. Through grants and other means, FWS provides law enforcement assistance and supports conservation efforts that contribute to addressing wildlife trafficking. The Implementation Plan gives FWS a lead or participating role in each of the 24 objectives, making it the co-chair agency with the most responsibility for conducting CWT work. According to FWS officials, in fiscal year 2015, FWS awarded funding to 141 wildlife trafficking-related projects through its International Affairs Program, obligating approximately $20 million worldwide. This included $9.6 million of USAID funds for the Central Africa Regional Program for the Environment program, implemented by FWS and their partners; $7.2 million from FWS regional and species funds; and $2.6 million to counter wildlife trafficking in Southeast Asia, implemented by FWS through an interagency agreement with State. In addition, in the summer of 2015, FWS placed two law enforcement attachés in Africa to help build capacity in 16 partner nations. 
One attaché is based in Botswana and another in Tanzania, but their responsibilities are regional. The attaché in Botswana covers southern Africa and is responsible for nine countries: Angola, Botswana, Lesotho, Mozambique, Namibia, South Africa, Swaziland, Zambia, and Zimbabwe. The attaché in Tanzania is responsible for seven countries: the Democratic Republic of the Congo, Kenya, Madagascar, Malawi, Rwanda, Tanzania, and Uganda. The attachés provide countertrafficking expertise to embassy staff and work with host government officials to build law enforcement capacity. Their role may involve training foreign counterparts in conducting wildlife-focused criminal investigations, providing support for digital evidence collection and technical investigative equipment, or contributing directly to casework or criminal investigations of wildlife traffickers. FWS officials we spoke with in the field stated that the placement of personnel in-country has made a positive impact. Previously, for example, FWS would have had to rely on nonagency sources of information or send officials abroad temporarily to conduct work, and it could not be as active or involved in CWT efforts in Africa. Now, FWS is able to conduct or facilitate investigations on its own and can establish personal relationships with counterparts, significantly enhancing cooperation. FWS also manages species-specific conservation grant programs for elephants and rhinos and Africa regional grants that include CWT efforts. For example, a fiscal year 2015 grant aims to generate local support for the protection of elephants in and around Ruaha National Park, Tanzania, by conducting education and outreach programs in villages and operating a program for local residents to meet with park officials (see fig. 7). FWS also plays a role in the agency’s CWT efforts through DOI’s International Technical Assistance Program (ITAP). 
In East Africa, USAID supports ITAP’s work in priority areas, including wildlife law enforcement, CITES implementation, and information sharing. ITAP also collaborates with other organizations such as the U.S. Geological Survey, National Park Service, and Bureau of Land Management in conducting fieldwork on an as-needed and temporary basis to supplement efforts of the FWS attachés. For example, in 2016, after conducting assessments with funding from USAID, ITAP produced a Five Year Strategic Plan and Year One Work Plans for Tanzania and Uganda that will guide ITAP’s work in the region. State contributes to CWT-related diplomacy through public outreach and support of international organizations and also contributes to law enforcement capacity building by, for example, providing training and equipment to park rangers. According to State officials, State supports organizations that address wildlife trafficking, such as the International Consortium to Combat Wildlife Crime, which includes the CITES Secretariat, the International Criminal Police Organization (INTERPOL), UNODC, the World Bank, and the World Customs Organization. The consortium aims to strengthen wildlife law enforcement effectiveness through intelligence-led interdiction and advanced investigative methods. State also has conducted public outreach efforts in countries such as Kenya and Tanzania to raise awareness. For example, in Kenya, the U.S. ambassador participated in a gathering with Kenyan officials on World Wildlife Day in March 2016 and announced more than $4.1 million in new U.S. government assistance to support wildlife conservation and community development. As another example, in January 2015 in Tanzania, the U.S. ambassador met with game scouts—local villagers who help rangers and communities mitigate poaching by conducting patrols and alerting authorities—during a USAID-supported media tour in the Selous Game Reserve. 
With regard to law enforcement capacity building, since 2013 State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) has awarded more than $6 million in bilateral CWT grants for work in Kenya and South Africa in addition to funding for regional programs. For example, in Kenya, INL awarded a grant to the Northern Rangelands Trust (NRT), a U.S. government-supported community conservation organization, to build a more effective ranger force that includes advanced classroom and field-based training and equipment. Through specialized and refresher training of mobile community policing teams, the grant aims to increase coverage of elephant range areas and strengthen ranger capacity in arresting poachers and dealers, recovering firearms and trophies, and ensuring the rangers’ own safety. In South Africa, INL provided approximately $1.3 million in grants during 2014 and 2015 to support the Endangered Wildlife Trust, which operates around Kruger National Park, a threatened rhino habitat (see fig. 8). Among other things, the trust’s INL-funded activities support local law enforcement capacity building, including an all-female antipoaching unit called the “Black Mambas” (see fig. 9). According to a trust representative, this group conducts antipoaching patrols, aided by smartphone technology that enables them to identify animals or potential issues in real time. While they carry no weapons, the Black Mambas patrol range areas, alerting authorities if they find poachers’ camps or anything suspicious. Black Mamba members told us that they patrol the perimeter, looking for the tracks of poachers and disturbances in fencing such as cut wires or other indications of possible intrusion. In addition, they said they trek through the interior to look for signs of poacher camps, poached animals, or snares used to catch bushmeat. 
According to a representative of the Endangered Wildlife Trust, these patrols have been effective: bushmeat poaching is down 78 percent since 2013.

DOJ’s role in CWT, which is coordinated and led by the Environment and Natural Resources Division, involves prosecuting criminals and conducting prosecutor and foreign judicial training. For example, Operation Crash—a rhino horn- and ivory-smuggling investigation led by FWS and prosecuted by DOJ—resulted in charges being brought in U.S. courts against nearly 40 individuals or businesses. As of September 2016, Operation Crash had led to at least 30 convictions, prison terms as long as 70 months, and forfeitures and restitutions as high as $4.5 million. According to DOJ officials, DOJ also regularly prosecutes individuals and businesses involved in the illegal importation of ivory into the United States, including, in recent years, prosecutions involving the smuggling of raw ivory, worked ivory carvings, and ivory-inlaid items such as pool cues. In addition, with funding from State and assistance from USAID, DOJ is implementing a series of regional capacity-building workshops on wildlife trafficking for judges and prosecutors in Africa. The first workshop of the series took place in Livingstone, Zambia, in October 2015 for 32 judges and prosecutors from six southern African nations (Angola, Botswana, Malawi, Mozambique, Namibia, and Zambia). The second took place in Accra, Ghana, in June 2016 for 36 judges and prosecutors from five western African nations (Gabon, Ghana, Nigeria, Republic of the Congo, and Togo). The workshops provided training by subject matter experts from UNODC, antitrafficking NGOs, and other U.S. government agencies. Topics covered by the training included evidentiary and prosecutorial issues unique to wildlife-trafficking cases, as well as sessions on money laundering, asset tracing, and corruption issues.
USAID combats wildlife trafficking by working with communities to help them conserve wildlife, particularly in Africa. USAID also works at the national level and with rangers and law enforcement personnel throughout the supply chain to strengthen capacity. In June 2015, USAID committed to starting more than 35 new CWT projects in 15 countries. Initiatives that address the supply side of wildlife trafficking include, among others, support for the Spatial Monitoring and Reporting Tool (SMART), a free software tool that enables village scouts and rangers to instantly capture GPS and observational data in the field, enhancing conservation efforts. According to USAID, for many years, rangers have collected monitoring data on paper that had to be sifted through to find relevant information, limiting its usefulness for planning and analysis. With SMART, rangers can digitally record and analyze information on poaching encounters, areas patrolled, and wildlife sightings to make protected area management more effective and efficient. For example, USAID has equipped and trained more than 400 rangers in the Democratic Republic of the Congo and Republic of the Congo on the use of GPS units, handheld computers for data collection, and SMART. As a result, according to the Task Force’s 2015 APA, SMART patrols are now providing credible, actionable data on wildlife presence and threats, which park managers use to deploy ranger teams to high-intensity poaching areas. According to this assessment, as of March 2016, more strategic patrols and other measures across eight landscapes in the two countries resulted in the destruction of nearly 1,800 snares and traps, the confiscation of 2,800 firearms, the arrest of 416 poachers, and numerous seizures of elephant tusks and other wildlife products.
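The SMART workflow described above (field capture of georeferenced observations, followed by analysis that steers patrols toward high-intensity poaching areas) can be illustrated with a minimal sketch in Python. This is not SMART’s actual data model or API; the record fields, threat categories, and grid-based hotspot logic below are illustrative assumptions only.

```python
from collections import Counter
from typing import NamedTuple

class Observation(NamedTuple):
    lat: float   # GPS latitude recorded in the field
    lon: float   # GPS longitude
    kind: str    # e.g. "snare", "poacher_camp", "elephant_sighting"

def hotspot_cells(observations, cell_size=0.1, threat_kinds=("snare", "poacher_camp")):
    """Bucket threat observations into a coarse lat/lon grid and
    return cells ranked by threat count, most threatened first."""
    counts = Counter()
    for obs in observations:
        if obs.kind in threat_kinds:
            cell = (round(obs.lat // cell_size * cell_size, 4),
                    round(obs.lon // cell_size * cell_size, 4))
            counts[cell] += 1
    return counts.most_common()

# Hypothetical patrol data: two snares and a camp cluster in one cell,
# plus a wildlife sighting elsewhere (not a threat, so not counted)
data = [
    Observation(-7.91, 37.42, "snare"),
    Observation(-7.93, 37.44, "snare"),
    Observation(-7.95, 37.41, "poacher_camp"),
    Observation(-7.20, 37.80, "elephant_sighting"),
]
print(hotspot_cells(data))
```

Bucketing observations into coarse grid cells is one simple way to turn raw GPS points into deployment guidance; a real system would also weight recency and patrol coverage.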
In the countries we visited, USAID officials told us that USAID takes a holistic approach to CWT at the national and community level, with efforts aimed at improving livelihoods, governance, and security. Project sites we visited faced common issues, including poverty and conflict, which provide fertile conditions for poaching. USAID aims to address these root causes in parallel with CWT efforts by focusing on improving equity, transparency, and livelihoods in local communities. For example, to incentivize villagers who live in and around range areas to protect wildlife, USAID works to establish programs that increase revenue generated from wildlife tourism and safaris. The community is then to share the income and use it to meet public demand for health care, education, and other critical needs. The following are illustrative examples from our fieldwork. In Kenya, USAID has provided assistance to the NRT for years, according to a USAID official, and in September 2015 signed an agreement worth approximately $20 million over 5 years that aims to reduce wildlife trafficking as one of five key goals. According to USAID officials and NRT representatives, the trust has contributed to bringing peace, stability, and reduced poaching in regions of northern Kenya. For example, according to NRT representatives, the rate of illegal killing of elephants declined from 81 percent in 2012 to 46 percent in 2014. USAID and NRT representatives told us that while the area protected by the trust is vast and poaching remains an ongoing concern, the overall success of its CWT effort has been driven by a combination of supportive donors, strong security capabilities, and governance mechanisms that communities perceive to be fair, equitable, and transparent. As a result, they said that communities realize the benefits of tourism and receive revenue from wildlife and therefore are motivated to protect it. 
In South Africa, USAID supports the Resilience in the Limpopo Basin Program, a $14.5 million effort started in 2012 aimed at improving the transboundary management of an area that spreads over parts of South Africa, Botswana, Mozambique, and Zimbabwe. Almost half of the area is within South Africa, which relies heavily on the basin to support agriculture, industry, and tourism. Protected areas across the basin, including Kruger National Park, exhibit a unique biodiversity and are home to several vulnerable species for which poaching is a key threat. One of the program’s three primary objectives is to conserve biodiversity and sustainably manage high-priority ecosystems. According to USAID and implementing partner officials, one activity with that aim involves improving the livelihoods of villagers who live around Kruger National Park by increasing income opportunities from wildlife, thus strengthening the incentive for them to protect wildlife. For example, they said that USAID helped villagers work with park authorities to clarify agreements on how to share revenue generated from wildlife in the area, ultimately resulting in more transparency and increased income for the villagers.

In Tanzania, USAID supports Promoting Tanzania’s Environment, Conservation, and Tourism, a $14.5 million 5-year project that aims to enhance the country’s capacity to combat wildlife poaching and trafficking as one of its focal areas. Started in April 2015, the project aims to improve the abilities of park, customs, and judiciary authorities by, for example, training customs officials on the detection of wildlife products. In addition, in September 2015, USAID launched the Endangered Ecosystem of Northern Tanzania (EENT) Project, which, according to USAID, plans to provide $12.4 million over 5 years to support and secure the long-term conservation and resilience of more than 6 million acres of wildlife habitat.
One of EENT’s four strategic goals is to improve wildlife protection and land and habitat management. EENT implementing partners emphasized the importance of an integrated and holistic approach and told us that they are working with communities to improve livelihoods, develop conservation incentives, and build capacity. According to representatives of these implementing partners, one activity that has made a positive impact is strengthening security, particularly through the use of canine patrols, which significantly enhance detection and tracking capabilities (see fig. 10). We observed an exercise demonstrating that properly trained dogs can follow a scent in the field and quickly and accurately lead rangers to the source. According to the implementing partner representatives, the addition of canine units has enabled rangers to find, identify, and arrest poachers who otherwise would have escaped detection.

In addition to the co-chair agencies and USAID, more than a dozen other federal agencies, departments, or offices contribute to CWT efforts. Department of Defense (DOD) officials, for example, told us that DOD plays a role in CWT through its capacity-building efforts. For example, DOD has provided training to partner nations’ enforcement agencies in various skills and tactics, including those involved in countering public corruption, running basic criminal investigations, and conducting border patrols to counter illicit trafficking. Specifically, in March 2015, DOD personnel, with State Africa Bureau funding, delivered antipoaching training in weapons-handling procedures, combat marksmanship, patrolling, offensive tactics, land navigation, and mounted operations for more than 40 Tanzanian rangers in the Selous Game Reserve. In addition, in Gabon, DOD trained Gabonese park rangers in infantry tactics to enhance their capacity to thwart trafficking in ivory and other wildlife products.
DOD officials told us that they are continuing to explore ways in which DOD can help address wildlife trafficking, particularly in Africa.

DHS officials told us that, like DOD, DHS contributes to CWT through capacity building, providing training to partner nations, and working alongside foreign counterparts to support CWT investigations and enforcement initiatives. For example, in March 2016, DHS’s Customs and Border Protection provided, with funding from USAID and DOD, elephant ivory- and narcotics-sniffing canine units and trained handlers for the air and sea ports in Dar es Salaam, Tanzania. We met with Tanzanian port officials who indicated that having dogs on site would improve their ability to detect smuggled wildlife products in shipping containers. Also, in 2015, according to U.S. Immigration and Customs Enforcement’s Homeland Security Investigations officials, the attaché in Pretoria supported the South African Police Service on covert operations involving a proposed sale of a drug that is often used to immobilize elephants, rhinos, and other large mammals. They said that the operations led to the arrest of five wildlife poaching conspirators, some with links to transnational organized criminals, and the seizure of items used in the proposed killing of a rhino.

For its part in CWT, officials of the Department of the Treasury (Treasury) told us that the department analyzes available information and, if applicable, exercises U.S. sanctions authorities against individuals and entities that engage in wildlife trafficking. For example, in March 2016, Treasury designated the Lord’s Resistance Army as subject to Executive Order 13667, which blocks any and all transactions involving the U.S. property of persons contributing to the conflict in the Central African Republic. In making this designation, Treasury noted that the Lord’s Resistance Army had engaged in illicit diamonds trade, elephant poaching, and ivory trafficking for revenue.
In addition, Treasury represents the United States as an observer to the Eastern and Southern Africa Anti-Money Laundering Group and as a member of the Asia/Pacific Group on Money Laundering, two of nine Financial Action Task Force-style regional bodies that uphold international standards on anti-money laundering and countering the financing of terrorism. Treasury officials told us that due to mission priorities, limited staff resources are dedicated to CWT issues, although the Implementation Plan designates Treasury as the lead or a participating agency in 8 of the 24 objectives. However, if a significant amount of relevant information on wildlife trafficking emerges, Treasury officials said they could take immediate action.

We found that State and USAID generally follow selected elements of widely accepted monitoring standards for CWT-related programs in the countries we visited—Kenya, South Africa, and Tanzania. As shown in table 1, we reviewed documentation for one State program and one USAID program in Kenya, a State program in South Africa, and a USAID program in Tanzania. For the State programs in Kenya and South Africa and the USAID programs in Kenya and Tanzania, we assessed the agencies’ documentation related to monitoring against selected key elements of widely accepted monitoring standards that we determined can be applied to foreign assistance programs. We identified the widely accepted monitoring standards through a review of Standards for Internal Control in the Federal Government; the Government Performance and Results Act (GPRA) Modernization Act of 2010; and Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards. Table 2 summarizes our overall assessment of the select CWT programs’ monitoring activities. Appendix I provides greater detail on our selection and analysis of these programs.
For the four programs we reviewed, we found that State and USAID generally follow the nine key elements of widely accepted monitoring standards listed in table 2, with some exceptions. Both agencies’ programs fully implement procedures to periodically collect and analyze data on performance indicators, define roles and responsibilities of personnel responsible for monitoring, and submit periodic and annual reports. Fully adhering to these three elements of widely accepted monitoring standards helps enable the agencies to promote data-driven analysis and regular reporting of results, which could help identify any needed course corrections in a timely manner. We found that the USAID program in Kenya partially implements the element of ensuring appropriate qualifications for staff conducting monitoring, while the State programs in Kenya and South Africa and the USAID program in Tanzania fully implement this element. Implementing this element helps to ensure that staff have the expertise to exercise sound judgment in overseeing the programs. We also found that all four programs at least partially implement the element of creating a monitoring plan and at least partially implement the element of identifying funding sources for monitoring; USAID’s program in Tanzania was the only one of the four that fully implemented both these elements of the monitoring standards. Three of the four programs partially created a monitoring plan or partially identified funding sources for monitoring, based on documentation we received. For example, documentation for the two programs that partially created monitoring plans provided some detail on how performance would be monitored through activities such as establishing time lines and tracking performance indicators; however, the documentation contained little or no information on how responsible parties within the agencies, such as a grants officer or grants officer representative, would ensure a systematic review of monitoring efforts. 
In addition, we found that both of State’s programs only partially identify funding sources, because program documentation lacked funding information specific to monitoring activities. As table 2 (above) shows, the results of our analysis for the remaining three elements of monitoring standards were mixed. Only USAID’s program in Tanzania fully implemented all three by (1) implementing data quality assurance procedures on performance indicators, (2) validating the implementing partner’s performance through site visits and other activities, and (3) considering monitoring information in making management decisions. Conducting data quality procedures helps provide assurance that the likelihood of significant errors or incompleteness is minimal and that the data can be used for their intended purposes. Site visits, along with other methods of verification, can help address or avoid problems that programs sometimes experience, such as delays in program start-up, untimely submission of progress or financial reports, or allegations of misuse of funds. In Kenya, each agency’s program only partially implemented the validation element. For example, State’s program in that country provided reports of telephone audits but no additional documentation, such as photographs or other evidence, to support validation of program implementation. Considering performance information in making management decisions facilitates program improvement by providing data-based evidence for making adjustments.

USAID’s Toolkit for Periodically Collecting and Analyzing Data on Performance Indicators

USAID created an impact measurement toolkit in 2015 to be a primary resource to improve action and accountability in USAID’s efforts to combat wildlife crime at the programmatic level.
According to a USAID official, the Promoting Tanzania’s Environment, Conservation, and Tourism (PROTECT) program was the first to apply the toolkit in designing its monitoring and evaluation plan, which includes baseline values and targets for each indicator and a time line for specific monitoring and evaluation activities. A representative of PROTECT told us that the monitoring and evaluation plan will be revised each year to preserve flexibility. For example, one revision already made was to reduce the number of performance indicators originally proposed in the program’s monitoring and evaluation plan to improve program results.

Agencies are taking steps to measure progress. One example is the CWT toolkit, which USAID created in 2015 (see sidebar). USAID officials stated that they hope the toolkit serves as a resource for other Task Force agencies as well. According to State officials, through efforts such as the toolkit, Task Force agencies will continue to strengthen the monitoring and evaluation sections of their programs as the agencies continue to improve CWT efforts and apply lessons learned from ongoing programs.

While State and USAID monitor CWT programs to some degree in the countries we selected for this review, the agencies have not yet conducted CWT-specific evaluations in those countries. Both State and USAID officials told us that it is too early to conduct such evaluations, given that appropriations for CWT-specific activities only began in fiscal year 2014 and no CWT-specific programs in the three countries we focused on had been completed yet. Officials also indicated that they plan to conduct evaluations of major CWT-specific activities when these are completed. At our request, however, USAID identified a total of six programs it had recently supported in Kenya, South Africa, and Tanzania that had some element of CWT-related activity as well as a final evaluation report available to assess (see table 3).
State was not able to identify any CWT-related programs with an available evaluation report. We found that none of the six evaluation reports we reviewed included CWT-related efforts as a primary goal or objective of the evaluation. Generally, the six reports evaluated the broader conservation goals of each program without focusing on specific CWT-related activities. However, some of the evaluation reports provide limited CWT information. For example, the evaluation report for the Scaling-Up Conservation and Livelihoods Efforts in Northern Tanzania Project indicated that the project had accomplished its goal to continue building capacity for an antipoaching unit in two of its wildlife management areas. This specific output was tied to the program’s broader goal to deliver transformational conservation and economic impact. Another example is the Africa Biodiversity Collaborative Group, a coalition of U.S.-based international conservation organizations that operate field programs in Africa. The evaluation report stated that the group had been highly effective in creating new conservation partnerships, some of which led to faith leaders uniting against illegal wildlife trade. The Spatial Monitoring and Reporting Tool for law enforcement was cited as an example showing how one of the program’s innovative conservation practices had been more widely adopted in Africa as a result of the group’s collaborative work, which helped to ensure that wildlife patrols were carried out.

While the interagency Task Force, co-chaired by State, DOI, and DOJ, provides some information about progress, it lacks performance targets, making effectiveness difficult to determine at the strategic level. The Implementation Plan and the 2015 Annual Progress Assessment (APA) describe objectives, metrics, and accomplishments. Under three strategic priorities, the Implementation Plan identifies 24 objectives and ways to measure progress for each.
For example, one objective is to develop and broadly disseminate cost-effective analytical tools and technological solutions to support wildlife trafficking investigations and prosecutions. The plan outlines two ways to measure progress for this objective: new inspection and interdiction technologies developed and applied, and forensic tools, capacity, and networks developed. In reporting on progress related to this objective, the APA states that USAID launched the Wildlife Crime Tech Challenge to generate new science and technology solutions for detecting transit routes, strengthening forensic evidence, reducing consumer demand, and tackling corruption along the supply chain. According to the announcement of winners, one awardee in South Africa developed a product that enables the tracing of rhino horn through individualized DNA profiling, thus providing a means of linking a sample of trafficked product back to a specific crime.

Such information describes accomplishments that relate to objectives, but the Task Force does not provide targets, in the APA or elsewhere, that would enable comparison of actual performance against planned results. As we have previously reported, a fundamental element in an organization’s efforts to manage for results is its ability to set performance goals with specific targets and time frames that reflect strategic goals and to measure progress toward its performance goals as part of its strategic planning efforts. Such performance measurement allows organizations to track progress in achieving their goals and gives managers crucial information to identify gaps in program performance and to plan any needed improvements. In addition, according to Standards for Internal Control in the Federal Government, managers need to compare actual performance against planned or expected results and to analyze significant differences.
Furthermore, internal control helps managers achieve desired results through effective stewardship of public resources. Having targets would allow the Task Force to more fully demonstrate the commitment articulated in its Implementation Plan: to continually evaluate progress, both by assessing the extent to which the Task Force is able to achieve the specific objectives identified in the plan and by looking more broadly at the effectiveness of those objectives toward achieving strategic priorities and the ultimate goal of ending wildlife trafficking.

The Task Force identified a range of reasons why it does not have targets, including the following: results cannot be attributed solely to U.S. government actions and are dependent on continued combined global effort; results often require years to document accurately; many potential indicators are metrics with limited or uneven availability of data from key developing countries; and reporting against metrics could downplay the contributions of other stakeholders, divert resources, and either risk oversimplification or confuse audiences with complicated explanations of the limitations of quantitative targets.

We have highlighted strategies in our past work that agencies can use when faced with the challenge of having limited control over external factors that can affect a program’s outcomes. These strategies include selecting a mix of outcome goals over which the agency has varying levels of control; using data on external factors to statistically adjust for their effect on the desired outcome; and disaggregating goals for distinct target populations for which the agency has different expectations. Additionally, to help interpret the results of performance measures, we have also emphasized in our past work the importance of communicating adequate contextual information, such as factors inside or outside the agency’s control that might affect performance.
In addition, Task Force agencies have provided performance targets for efforts facing similar challenges to measuring and reporting results. For example, the performance and accountability reports of State, USAID, DOI, and DOJ all provide targets for diplomatic, development, legal, and conservation-related activities that are complex and difficult to measure. Despite challenges associated with measuring progress against climate change, State and USAID provide quantitative targets for measuring results in their FY 2015 Joint Summary of Performance and Financial Information. In its 2016/17 Annual Performance Plan & 2015 Report, DOI identifies a target for status of international species. DOJ’s FY 2015 Annual Performance Report and FY 2017 Annual Performance Plan provides a target for protecting Americans from terrorism and other threats to national security—a complex, global challenge. In addition, a separate presidential task force, responsible for addressing species conservation of pollinators, identified a target that encompasses, among other things, international partners, long time periods, and factors outside the control of the U.S. government.

Developing targets for CWT may not require significant resources or complicated analysis. For example, regarding the aforementioned objective to disseminate cost-effective analytical tools and technological solutions, targets may include the following:

Develop and apply x number of new inspection and interdiction technologies by z year.

Develop w number of forensic tools, x level of capacity, and y networks by z year.

Providing some basis for comparison would enable the Task Force to better understand the extent to which its accomplishments are meeting expectations.

Wildlife trafficking, worth at least $7 billion annually, continues to push some protected and endangered animal species to the brink of extinction.
Furthermore, wildlife trafficking can fuel corruption and criminal activity, lead to the loss of both human and animal lives, and destabilize communities that depend on wildlife for biodiversity and ecotourism revenue. Task Force agencies are helping combat wildlife trafficking through a variety of efforts; however, at the strategic level, the Task Force has not identified performance targets. Without such targets, it is unclear whether the Task Force’s accomplishments are meeting expectations, making it difficult to gauge progress and to ensure effective stewardship of public resources. For example, do those accomplishments represent a satisfactory level of performance, given the level of investment and expected results, or should resources be adjusted? Without targets, agencies risk reporting their progress merely as an annual description of successes and accomplishments. While important, these accomplishments alone do not provide accountability because they do not link back to targets, and there is no basis for comparison between actual and intended results. In addition, over time, such descriptions may lack continuity: it would be difficult to compare progress from year to year if the Task Force reports different types of successes and accomplishments each cycle. To maximize resources available to address this problem, it is critical that the agencies involved continually assess the efficiency and effectiveness of their efforts so as to ensure that the most effective Task Force efforts are supported. By establishing targets, the Task Force would be able to generate and communicate more meaningful performance information that would help it identify performance shortfalls and the best options for making improvements in its efforts against wildlife trafficking.
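The target-versus-actual comparison discussed above amounts to simple arithmetic once targets exist, as a minimal sketch can show. The objective names and figures below are hypothetical illustrations, not drawn from the Implementation Plan.

```python
# Minimal sketch of comparing actual CWT results against performance
# targets. All objective names and numbers are hypothetical.

def shortfalls(targets: dict, actuals: dict) -> dict:
    """Return each objective's remaining gap (target minus actual, floored at 0)."""
    return {obj: max(goal - actuals.get(obj, 0), 0) for obj, goal in targets.items()}

# Hypothetical targets of the form "develop x new technologies by year z"
targets = {"inspection_technologies_deployed": 5, "forensic_tools_developed": 3}
actuals = {"inspection_technologies_deployed": 4, "forensic_tools_developed": 3}

gaps = shortfalls(targets, actuals)
unmet = [obj for obj, gap in gaps.items() if gap > 0]
print(unmet)  # objectives still short of their targets
```

Even this trivial subtraction provides the basis for comparison the report says is missing; without the targets dictionary there is nothing to measure progress against.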
To provide a basis for comparing actual results with intended results that can generate more meaningful performance information, we recommend that the Secretaries of the Interior and State and the Attorney General of the United States jointly work with the Task Force to develop performance targets related to the National Strategy for Combating Wildlife Trafficking Implementation Plan.

We provided a draft of this report for review and comment to the Departments of Defense, Homeland Security, the Interior, Justice, State, and the Treasury, and to USAID. The Departments of the Interior, Justice, and State, and USAID agreed with our recommendation. Written responses from the Department of the Interior, the Department of State, and USAID are reproduced in appendixes II, III, and IV, respectively. All agencies provided us with technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretaries of Defense, Homeland Security, the Interior, State, and the Treasury; the Attorney General of the United States; the Administrator of USAID; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

This report focuses on the efforts of the Presidential Task Force on Wildlife Trafficking (Task Force) to combat wildlife trafficking of large animals in Africa and supply side activities, which include poaching, transport, and export of wildlife and wildlife parts.
The report examines (1) what is known about the security implications of wildlife trafficking and its consequences; (2) actions Task Force agencies are taking to combat wildlife trafficking; (3) Department of State (State) and U.S. Agency for International Development (USAID) monitoring and evaluation efforts in select countries; and (4) the extent to which the Task Force assesses its progress. To obtain information for background and context, we reviewed information related to rhinoceros and elephant products and poaching from both U.S. and foreign government sources as well as from international organizations. We also examined data on the flow of illegal ivory and on seizures. We did not assess the reliability of these data. To address our objectives, we met with Task Force agency officials and with nongovernmental wildlife trafficking experts, including some recommended by agency officials, in Washington, D.C., and conducted fieldwork in Kenya, South Africa, and Tanzania. We selected these countries using a combination of criteria: (1) since fiscal year 2013, each country has received at least $1 million annually in U.S. government funding for efforts related to combating wildlife trafficking (CWT); (2) CWT activities are underway in each country and are expected to make a significant impact; and (3) at least two U.S. government agencies conduct CWT work in each country. This sample is not generalizable to all the countries in which the United States has CWT-related programs. While in each country in Africa, we interviewed officials who served on each embassy’s CWT working group, which generally included officials from State, USAID, and the Departments of Defense, Homeland Security, the Interior, and Justice. The Department of the Treasury did not have an attaché in any of the three countries we visited. 
We also interviewed representatives from host governments responsible for the management of natural resources and parks; nongovernmental organizations involved in implementing U.S. government programs related to conservation, law enforcement, and other CWT objectives; and community members who live in or around protected areas and are directly affected by wildlife trafficking. To examine what is known about wildlife trafficking and its consequences, particularly security implications, we reviewed more than 15 relevant reports and other information from U.S. agencies and international and nongovernmental organizations. We selected these reports and information from organizations that had produced wildlife trafficking analysis, that had worked with the U.S. government on CWT activities, or that had been recommended to us by officials or experts. We also interviewed representatives from these organizations in the United States and in Africa. To address actions Task Force agencies are taking to combat wildlife trafficking, we reviewed relevant documentation and information, including agency and implementing partner documentation of CWT-related projects, programs, and grants. We also interviewed agency officials in Washington, D.C., and in Africa. During our fieldwork, we visited project sites and met with host government officials, implementing partner representatives, park authorities, security units, and community members. To address CWT monitoring and evaluation efforts, we selected State and USAID programs in the three countries we visited that were at or near completion or that were started in fiscal year 2013. Our analysis is not generalizable and applies only to the selected programs in selected countries. To examine monitoring efforts in these countries, we worked with State and USAID to identify one program in each country based on the criterion that the program must have CWT-related activities. 
State officials reported that State had no CWT-funded programs in Tanzania, and USAID officials reported that USAID had no CWT-funded programs in South Africa. As a result, we assessed monitoring documentation for a total of four programs, which included award agreements and modifications, performance management plans, monitoring and evaluation plans, quarterly monitoring reports, and annual funding data. We identified widely accepted monitoring principles, determined commonalities among the principles, and considered the life cycle of a project from planning to the utilization of monitoring information. Using these criteria, we identified nine elements and asked agencies for documentation that demonstrated that their monitoring practices reflected these elements. We reviewed the documentation the agencies provided for each program to determine whether it addressed each element—generally, partially, or not at all. For each program, an analyst was instructed to (1) record whether an element was addressed by entering “yes,” “partial,” or “no” and (2) summarize or cite relevant information or a source from the monitoring documents. A methodologist then reviewed the information and determined whether there was sufficient support to rate an element as “generally” met (that is, “yes”), “partially” met, or “no.” In those instances where the analyst and methodologist interpreted the information differently, they met to discuss their differences and reach consensus. In instances when the initial documentation provided did not indicate that the agencies generally or partially met an element for a program, we informed the agencies and asked for any additional documentation that might be available. We cannot generalize from this sample of programs in these selected countries to the universe of all CWT programs in all countries. To examine evaluation efforts, we identified CWT-related programs in the three selected countries that had available evaluation reports. 
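The two-reviewer rating step described above can be expressed as a simple reconciliation routine. The following is an illustrative sketch only, not GAO's actual process or tooling; the element names and ratings shown are hypothetical.

```python
# Illustrative sketch -- not GAO's actual tooling. An analyst and a
# methodologist each rate monitoring elements as "yes," "partial," or
# "no"; disagreements are flagged for a consensus discussion, mirroring
# the two-reviewer step described above.
RATINGS = {"yes", "partial", "no"}

def reconcile(analyst: dict, methodologist: dict) -> dict:
    """Return agreed ratings; disagreements map to 'discuss'."""
    result = {}
    for element, a in analyst.items():
        m = methodologist[element]
        assert a in RATINGS and m in RATINGS, "invalid rating"
        result[element] = a if a == m else "discuss"
    return result

# Hypothetical element names and ratings, for illustration only.
analyst_view = {"baseline data": "yes", "indicators": "partial", "targets": "no"}
methodologist_view = {"baseline data": "yes", "indicators": "no", "targets": "no"}
print(reconcile(analyst_view, methodologist_view))
# {'baseline data': 'yes', 'indicators': 'discuss', 'targets': 'no'}
```

In this sketch, only elements where the two reviewers disagree are routed to discussion, consistent with the consensus process the methodology describes.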
In total, we reviewed six programs, all of which are USAID supported. State was not able to identify any CWT-related programs with an available evaluation report in any of the three countries. We reviewed the evaluation reports available for each identified USAID program. To assess the degree to which these evaluations were conducted in adherence to select evaluation standards, we used criteria identified in prior GAO work. We then identified the goals and objectives of each evaluation report to determine the extent to which the evaluations addressed CWT goals. To address the extent to which the Task Force assesses its progress, we analyzed relevant documentation and information, including the National Strategy for Combating Wildlife Trafficking Implementation Plan and the 2015 Annual Progress Assessment. In addition, we reviewed documentation on results management and spoke with Task Force officials. Using prior GAO work, we established that a fundamental element in an organization’s efforts to manage for results is its ability to set performance goals with specific targets and time frames that reflect strategic goals and to measure progress toward its performance goals as part of its strategic planning efforts. In addition, according to Standards for Internal Control in the Federal Government, managers need to compare actual performance against planned or expected results and to analyze significant differences. Using these criteria, we analyzed the extent to which the Task Force assessed its progress. We conducted this performance audit from August 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Kimberly M. Gianopoulos, (202) 512-8612, or gianopoulosk@gao.gov. In addition to the individual named above, Judith Williams (Assistant Director), Marc Castellano, David Dayton, Martin De Alteriis, Mark Dowling, Shakira O’Neil, and Oziel Trevino made key contributions to this report. | Illegal trade in wildlife—wildlife trafficking—continues to push some protected and endangered animal species to the brink of extinction, according to the Department of State. Wildlife trafficking undermines conservation efforts, can fuel corruption, and destabilizes local communities that depend on wildlife for biodiversity and ecotourism revenues. This trade is estimated to be worth $7 billion to $23 billion annually. In 2013, President Obama issued an executive order that established the interagency Task Force charged with developing a strategy to guide U.S. efforts on this issue. GAO was asked to review U.S. government efforts to combat wildlife trafficking. This report focuses on wildlife trafficking in Africa, particularly of large animals, and examines, among other things, (1) what is known about the security implications of wildlife trafficking and its consequences, (2) actions Task Force agencies are taking to combat wildlife trafficking, and (3) the extent to which the Task Force assesses its progress. GAO analyzed agency documents and met with U.S. and host country officials in Washington, D.C.; Kenya; South Africa; and Tanzania. While criminal elements of all kinds, including some terrorist entities and rogue security personnel, engage in poaching and transporting ivory and rhino horn across Africa, transnational organized criminals are the driving force behind wildlife trafficking, according to reports GAO reviewed and agency officials GAO spoke with in the United States and Africa. Wildlife trafficking can contribute to instability and violence and harm people as well as animals. According to reports, about 1,000 rangers were killed from 2004 to 2014. 
Wildlife trafficking in Africa particularly affects large animals, with populations of elephants and rhinos diminishing at a rate that puts them at risk of extinction. Agencies of the interagency Task Force leading U.S. efforts to combat wildlife trafficking are taking a range of conservation and capacity-building actions. The Department of the Interior's Fish and Wildlife Service, for example, provides law enforcement assistance and supports global conservation efforts. The Department of State contributes to law enforcement capacity building and diplomatic efforts, while the Department of Justice prosecutes criminals and conducts legal training to improve partner-country capacity. Further, the U.S. Agency for International Development works to build community- and national-level enforcement capacity and supports various approaches to combat wildlife trafficking. Several other agencies also contribute expertise or resources to support various activities outlined in the Task Force's National Strategy for Combating Wildlife Trafficking Implementation Plan. The Task Force provides some information about progress, but it lacks performance targets, making effectiveness difficult to determine at the strategic level. A fundamental element in an organization's efforts to manage for results is its ability to set specific targets that reflect strategic goals. Task Force officials identified a range of reasons why they do not have targets, including dependence on global partners, the long time periods needed to document results, and limited data availability. However, Task Force agencies have provided performance targets for other efforts that face similar challenges. Without targets, it is unclear whether the Task Force's performance is meeting expectations, making it difficult to gauge progress and to ensure that resources are being utilized most effectively in its efforts against wildlife trafficking. 
GAO recommends that the Secretaries of State and the Interior and the Attorney General of the United States, as co-chairs, jointly work with the Task Force to develop performance targets related to the National Strategy for Combating Wildlife Trafficking Implementation Plan. Agencies agreed with GAO's recommendation. |
VHA oversees VA’s health care system, which includes 153 medical facilities organized into 21 VISNs. VISNs are charged with the day-to-day management of the medical facilities within their network; however, VHA Central Office maintains responsibility for monitoring and overseeing both VISN and medical facility operations. These oversight functions are housed within several offices within VHA, including the Office of the Deputy Under Secretary for Health for Operations and Management and the Office of the Principal Deputy Under Secretary for Health. The 237 residential programs in place in 104 VA medical facilities provide residential rehabilitative and clinical care to veterans with a range of mental health conditions. VA operates three types of residential programs in selected medical facilities throughout its health care system: Residential rehabilitation treatment programs (RRTP). These programs provide intensive rehabilitation and treatment services for a range of mental health conditions in a 24 hours per day, 7 days a week structured residential environment at a VA medical facility. There are several types of RRTPs throughout VA’s health care system that specialize in offering programs for the treatment and management of certain mental health conditions—such as post-traumatic stress disorder (PTSD) and substance abuse. Domiciliary programs. In its domiciliaries, VA provides 24 hours per day, 7 days a week structured and supportive residential environments, housing, and clinical treatment to veterans. Domiciliary programs may also contain specialized treatment programs for certain mental health conditions. Compensated work therapy/transitional residence (CWT/TR) programs. These programs are the least intensive residential programs and provide veterans with community based housing and therapeutic work-based rehabilitation services designed to facilitate successful community reintegration. 
Security measures that must be in place at all three types of residential programs are governed by VHA’s Mental Health RRTP Handbook. Among the security precautions that must be in place for residential programs are secure accommodations for women veterans and periodic assessments of facility safety and security features. Most (111) of VA’s 153 medical facilities have at least one inpatient mental health unit that provides intensive treatment for patients with acute mental health needs. These units are generally a locked unit or floor within each medical facility, though the size of these units varies throughout VA. Care on these units is provided 24 hours per day, 7 days a week, and is intensive psychiatric treatment designed to stabilize veterans and transition them to less intensive levels of care, such as RRTPs and domiciliary programs. Inpatient mental health units are required to comply with VHA’s Mental Health Environment of Care Checklist that specifies several safety requirements for these units, including several security precautions, such as the use of panic alarm systems and the security of nursing stations within these units. The admissions processes for both VA residential programs and inpatient mental health units require several assessments that are conducted by an interdisciplinary team—including nursing staff, social workers, and psychologists. One of the commonly used assessments is a comprehensive biopsychosocial assessment. In residential programs, these assessments are required to be completed within 5 days of admission and include the collection of veterans’ medical, psychiatric, social, developmental, legal, and abuse histories along with other key information. These biopsychosocial assessments aid in the development of individualized treatment plans based on each veteran’s individual needs. 
For inpatient mental health units, initial screening of veterans, including the initial biopsychosocial assessment, often takes place outside the unit in another area of the medical facility where the veteran first presents for treatment, such as the emergency room or a mental health outpatient clinic. Veterans admitted to inpatient mental health units are typically reassessed more frequently than veterans admitted to residential programs due to their instability at the time of admission. VA’s OSLE is the department-level office within VA Central Office responsible for developing policies and procedures for VA’s law enforcement programs at local VA medical facilities. Most VA medical facilities have a cadre of VA police officers, federal law enforcement officers who report to the medical facility’s director. These officers are charged with protecting the medical facility by responding to and investigating potentially criminal activities reported by staff, patients, and others within the medical facility and completing police reports about these investigations. VA medical facility police often notify and coordinate with other law enforcement entities, including local area police departments and the VA OIG, when criminal activities or potential security threats occur. The VA OIG has investigators throughout the nation who also conduct investigations of criminal activities affecting VA operations, including reported cases of sexual assault. By regulation, all potential felonies, including rape allegations, must be reported to VA OIG investigators. Once a case is reported, VA OIG investigators can either serve as the lead agency on the case or offer to serve as advisors to local VA police or other law enforcement agencies conducting an investigation of the issue. In April 2010, VA established an Integrated Operations Center (IOC) that serves as the department’s centralized location for integrated planning and data analysis on serious incidents. 
The VA IOC requires that incidents—including sexual assaults—likely to result in media or congressional attention be reported to the IOC within 2 hours of the incident. The IOC then presents information on serious incidents to VA senior leadership officials, including the Secretary in some cases. VA has two concurrent reporting streams—a management stream and a law enforcement stream—for communicating sexual assaults and other safety incidents to senior leadership officials. The management stream identifies and documents incidents for leadership’s attention. The law enforcement stream documents incidents that may involve criminal acts for investigation and prosecution, when appropriate. We found that there were nearly 300 sexual assault incidents reported through the law enforcement stream to the VA police from January 2007 through July 2010—including alleged incidents that involved rape, inappropriate touching, forceful medical examinations, forced or inappropriate oral sex, and other types of sexual assault incidents. Finally, we could not systematically analyze sexual assault incident reports received through VA’s management stream due to the lack of a centralized VA management reporting system. Policies and processes are in place for documenting and communicating sexual assaults and other safety incidents to VHA management and VA law enforcement officials. VHA policies outline what information staff must report and define some mechanisms for this reporting, but medical facilities have the flexibility to customize and design their own site-specific reporting systems and policies that fit within the broad context of these requirements. VA’s structure for reporting sexual assaults and other safety incidents involves two concurrent reporting streams—the management stream and the law enforcement stream. 
This dual reporting process is intended to ensure that both relevant medical facility leadership and law enforcement officials are informed of incidents and can perform their own separate investigations. (See fig. 1 for an illustration of the reporting structure for sexual assaults and other safety incidents.) The reporting processes described below may vary slightly throughout VA medical facilities due to local medical facility policies and procedures. Management reporting stream. This stream—which includes reporting responsibilities at the local medical facility, VISN, and VHA Central Office levels—is intended to help ensure that incidents are identified and documented for leadership’s attention. Local VA medical facilities. Local incident reporting is the first step in communicating safety issues, including sexual assault incidents, to VISN and VHA Central Office officials and was handled through a variety of electronic facility based systems at the medical facilities we visited. The processes were similar in all five medical facilities we visited and were initiated by the first staff member who observed or was notified of an incident completing an incident report in the medical facility’s electronic reporting system. The medical facility’s quality manager then reviewed the electronic report, while the staff member was responsible for communicating the incident through his or her immediate supervisor or unit manager. VA medical facility leadership at the locations we visited reported that they are informed of incidents at morning meetings or through immediate communications, depending on the severity of the incident. Medical facility leadership officials are responsible for reporting serious incidents to the VISN. VISNs. Officials in network offices we reviewed told us that their medical facilities primarily report serious incidents to their offices through two mechanisms—issue briefs and “heads up” messages. 
Issue briefs document specific factual information and are forwarded from the medical facility to the VISN. Heads up messages are early notifications designed to allow medical facility and VISN leadership to provide a brief synopsis of the issue while facts are being gathered for documentation in an issue brief. VISN offices are typically responsible for direct reporting to the VHA Central Office. VHA Central Office. An official in the VHA Office of the Deputy Under Secretary for Health for Operations and Management said that VISNs typically report all serious incidents to this office. This office then communicates relevant incidents to other VHA offices, including the Office of the Principal Deputy Under Secretary for Health, through an e-mail distribution list. Law enforcement reporting stream. The purpose of this stream is to document incidents that may involve criminal acts so they can be investigated and prosecuted, if appropriate. The law enforcement reporting stream involves local VA police, VA’s OSLE, VA’s IOC, and the VA OIG. Local VA police. At the medical facilities we visited, local policies require medical facility staff to notify the medical facility’s VA police of incidents that may involve criminal acts, such as sexual assaults. According to VA officials, when VA police officers observe or are notified of an incident they are required to document the allegation in VA’s centralized police reporting system. VA’s OSLE. This office receives reports of incidents at VA medical facilities through its centralized police reporting system. Additionally, local VA police are required to immediately notify VA OSLE of serious incidents, including reports of rape and aggravated assaults. VA’s IOC. Serious incidents on VA property—those that result in serious bodily injury, including sexual assaults—are reported to the IOC either by local VA police or the VHA Office of the Deputy Under Secretary for Health for Operations and Management. 
Incidents reported to the IOC are communicated to the Secretary of VA through serious incident reports and to other senior staff through daily reports. VA OIG. Federal regulation requires that all potential felonies, including rape allegations, be reported to VA OIG investigators. In addition, VHA policy reiterates this requirement by specifying that the OIG must be notified of sexual assault incidents when the crime occurs on VA premises or is committed by VA employees. At the VA medical facilities we visited, officials told us that either the medical facility’s leadership team or VA police are responsible for reporting all incidents that are potential felonies to the VA OIG. The VA OIG may also learn of incidents from staff, patients, congressional communications, or the VA OIG hotline for reporting fraud, waste, and abuse. When the VA OIG is notified of a potential felony, their investigators document both their contact with medical facility officials or other sources and the initial case information they receive. We analyzed VA’s national police files from January 2007 through July 2010 and identified 284 sexual assault incidents reported to VA police during that period. These cases included incidents alleging rape, inappropriate touching, forceful medical examinations, oral sex, and other types of sexual assaults (see table 1). However, it is important to note that not all sexual assault incidents reported to VA police are substantiated. A case may remain unsubstantiated because an assault did not actually take place, the victim chose not to pursue the case, or there was insufficient evidence to substantiate the case. Due to our review of both open and closed VA police sexual assault incident investigations, we could not determine the final disposition of these incidents. In analyzing these 284 cases, we observed the following (see app. 
II for additional analysis of VA police reports): Overall, the sexual assault incidents described above included several types of alleged perpetrators, including employees, patients, visitors, outsiders not affiliated with VA, and persons of unknown affiliation. In the reports we analyzed, there were allegations of 89 patient-on-patient sexual assaults, 85 patient-on-employee sexual assaults, 46 employee-on-patient sexual assaults, 28 unknown affiliation-on-patient sexual assaults, and 15 employee-on-employee sexual assaults. Regarding gender of alleged perpetrators, we also observed that of the 89 patient-on-patient sexual assault incidents, 46 involved allegations of male perpetrators assaulting female patients, 42 involved allegations of male perpetrators assaulting male patients, and 1 involved an allegation of a female perpetrator assaulting a male patient. Of the 85 patient-on-employee sexual assault incidents, 83 involved allegations of male perpetrators assaulting female employees and 2 involved allegations of male perpetrators assaulting male employees. We could not systematically analyze sexual assault incidents reported through VA’s management stream due to the lack of a centralized VA management reporting system for tracking sexual assaults and other safety incidents. Despite the VA police receiving reports of nearly 300 sexual assault incidents since 2007, sexual assault incidents are underreported to officials within the management reporting stream and the VA OIG. Factors that may contribute to the underreporting of sexual assault incidents include the lack of both a clear definition of sexual assault and expectations on what incidents should be reported, as well as deficient VHA Central Office oversight of sexual assault incidents. Sexual assault incidents are underreported to both VHA officials at the VISN and VHA Central Office levels and the VA OIG. 
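As an arithmetic check, the pairing counts reported above can be tallied directly. This sketch uses only figures stated in this section; the five pairings listed sum to 263 of the 284 incidents, with the remainder involving other alleged-perpetrator categories (such as visitors and outsiders).

```python
from collections import Counter

# Tally of the alleged perpetrator-on-victim pairings reported above
# (counts are from the text; the pairing labels are ours).
pairings = Counter({
    ("patient", "patient"): 89,
    ("patient", "employee"): 85,
    ("employee", "patient"): 46,
    ("unknown affiliation", "patient"): 28,
    ("employee", "employee"): 15,
})

listed = sum(pairings.values())
print(listed)  # 263 of the 284 incidents fall into these five pairings

# The gender breakdowns reported above are internally consistent:
assert 46 + 42 + 1 == pairings[("patient", "patient")]
assert 83 + 2 == pairings[("patient", "employee")]
```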
Specifically, VISN and VHA Central Office officials did not receive reports of all sexual assault incidents reported to VA police in VA medical facilities within the four VISNs we reviewed. In addition, the VA OIG did not receive reports of all sexual assault incidents that were potential felonies as required by VA regulation, specifically those involving rape allegations. VISNs and VHA Central Office leadership officials are not fully aware of many sexual assaults reported at VA medical facilities. For the four VISNs we spoke with, we reviewed all documented incidents reported to VA police from medical facilities within each network and compared these reports with the issue briefs received through the management reporting stream by VISN officials. Based on this analysis, we determined that VISN officials in these four networks were not informed of most sexual assault incidents that occurred within their network medical facilities. Moreover, we also found that one VISN did not report all of the cases they received to VHA Central Office (see table 2). To examine whether VA medical facilities were accurately reporting sexual assault incidents involving rape allegations to the VA OIG, we reviewed both the 67 rape allegations reported to the VA police from January 2007 through July 2010 and all investigation documentation provided by the VA OIG for the same period. We found no evidence that about two-thirds (42) of these rape allegations had been reported to the VA OIG. The remaining 25 had matching VA OIG investigation documentation, indicating that they were correctly reported to both the VA police and the VA OIG. By regulation, VA requires that: (1) all criminal matters involving felonies that occur in VA medical facilities be immediately referred to the VA OIG and (2) responsibility for the prompt referral of any possible criminal matters involving felonies lies with VA management officials when they are informed of such matters. 
This regulation includes rape in the list of felonies provided as examples and also requires VA medical facilities to report other sexual assault incidents that meet the criteria for felonies to the VA OIG. However, the regulation does not include criteria for how VA medical facilities and management officials should determine whether or not a criminal matter meets the felony reporting threshold. We found that all 67 of these rape allegations were potential felonies because if substantiated, sexual assault incidents involving rape fall within federal sexual offenses that are punishable by imprisonment of more than 1 year. In addition, we provided the VA OIG the opportunity to review summaries of the 42 rape allegations we could not confirm were reported to them by the VA police. To conduct this review, several VA OIG senior-level investigators determined whether or not each of these rape allegations should have been reported to them based on what a reasonable law enforcement officer would consider a felony. According to these investigators, a reasonable law enforcement officer would look for several elements to make this determination, including (1) an identifiable and reasonable suspect, (2) observations by a witness, (3) physical evidence, or (4) an allegation that appeared credible. These investigators based their determinations on their experience as federal law enforcement agents. Following their review, these investigators also found that several of these rape allegations were not appropriately reported to the VA OIG as required by federal regulation. Specifically, the VA OIG investigators reported that they would have expected approximately 33 percent of the 42 rape allegations to have been reported to them based on the incident summary containing information on these four elements. 
The investigators noted that they would not have expected approximately 55 percent of the 42 rape allegations to have been reported to them due to either the incident summary failing to contain these same four elements or the presence of inconsistent statements made by the alleged victims. For the approximately 12 percent remaining, the investigators noted that the need for notification was unclear because there was not enough information in the incident summary to make a determination about whether or not the rape allegation should have been reported to the VA OIG. There are several factors that may contribute to the underreporting of sexual assault incidents to VISNs, VHA Central Office, and the VA OIG— including VHA’s lack of a consistent sexual assault definition for reporting purposes; limited and unclear expectations for sexual assault incident reporting at the VHA Central Office, VISN, and VA medical facility levels; and deficiencies in VHA Central Office oversight of sexual assault incidents. VHA leadership officials may not receive reports of all sexual assault incidents that occur at VA medical facilities because VHA does not have a VHA-wide definition of sexual assault used for incident reporting. We found that VHA lacks a consistent definition for the reporting of sexual assaults through the management reporting stream at the medical facility, VISN, and VHA Central Office levels. At the medical facility level, we found that the medical facilities we visited had a variety of definitions of sexual assault targeted primarily to the assessment and management of victims of recent sexual assaults. Specifically, facilities varied in the level of detail provided by their policies, ranging from one facility that did not include a definition of sexual assault in its policy at all to another facility with a policy that included a detailed definition. (See table 3.) 
At the VISN level, VISN officials within the four networks we spoke with reported that they did not have definitions of sexual assault in VISN policies. However, some VISN officials stated they used other common definitions, including those from the National Center for Victims of Crime and The Joint Commission. Finally, while the VHA Central Office does have a policy for the clinical management of sexual assaults, this policy is targeted to the treatment of victims within 72 hours of an assault and does not address sexual assault incidents that occur outside of this time frame. In addition, neither this definition of sexual assault nor any other is included in VHA Central Office reporting guidance, which specifies the types of incidents that should be reported to VHA management officials. In addition to failing to provide a consistent definition of sexual assault for incident reporting, VHA does not have clearly documented expectations about the types of sexual assault incidents that should be reported to officials at each level of the organization, which may also contribute to the underreporting of sexual assault incidents. Without clear expectations for incident reporting, there is no assurance that all sexual assault incidents are appropriately reported to officials at the VHA Central Office, VISN, and local medical facility levels. We found that expectations were not always clearly documented, resulting in either the underreporting of some sexual assault incidents or communication breakdowns at all levels. VHA Central Office. An official from VHA’s Office of the Deputy Under Secretary for Health for Operations and Management told us that this office’s expectations for reporting sexual assault incidents were documented in its guidance for the submission of issue briefs. However, we found that this guidance does not specifically reference reporting requirements for any type of sexual assault incident.
As a result, the VISNs we reviewed did not consistently report sexual assault incidents to VHA Central Office. For example, officials from one VISN reported sending VHA Central Office only 5 of the 10 issue briefs they received from medical facilities in their network, while officials from two other VISNs reported forwarding all issue briefs on sexual assault incidents they received. VISNs. The four VISNs we spoke with did not include in their reporting guidance detailed expectations about which sexual assault incidents should be reported to them, potentially resulting in medical facilities failing to report some incidents. For example, officials from one VISN told us they expect to be informed of all sexual assault incidents occurring in medical facilities within their network, but this expectation was not explicitly documented in their policy. We found several reported allegations of sexual assault incidents in medical facilities in this VISN—including three allegations of rape and one allegation of inappropriate oral sex—that were not forwarded to VISN officials. When asked about these four allegations, VISN officials told us that they would only have expected to be notified of two of them—one allegation of rape and one allegation of inappropriate oral sex—because the medical facilities where they occurred had contacted outside entities, including the VA OIG. VISN officials explained that the remaining two rape allegations were unsubstantiated and were not reported to their office; these officials also noted that unsubstantiated incidents often are not reported to them. VA medical facilities. At the medical facility level, we also found that reporting expectations may be unclear. In particular, we identified cases in which the VA police had not been informed of incidents that were reported to medical facility staff.
For example, we identified VA police files from one facility we visited in which officers noted that the alleged perpetrator had previously been involved in other sexual assault incidents that were not reported to the VA police by medical facility staff. In these police files, officers noted that staff working in the alleged perpetrators’ units had not reported the previous incidents because they believed these behaviors were a manifestation of the veterans’ clinical conditions. We also observed cases of communication breakdowns during our discussions with medical facility officials and clinicians. For example, VA police at one medical facility reported that they had not been immediately informed of an alleged sexual assault incident involving two male patients in the dementia ward that occurred the evening before our arrival. As a result, VA police were unable to immediately begin their investigation because staff from the unit had completed their shifts and left the ward. At another medical facility we visited, quality management staff identified five sexual assault incidents that had not been reported to VA police at the medical facility, even though these incidents had been reported to their office. The VHA Central Office also had deficiencies in several necessary oversight elements that could contribute to the underreporting of sexual assault incidents to VHA management—including information-sharing practices and systems to monitor sexual assault incidents reported through the management reporting stream. Specifically, the VHA Central Office has limited information-sharing practices for distributing information about reported sexual assault incidents among VHA Central Office officials and has not instituted a centralized tracking mechanism for these incidents. Currently, the VHA Central Office relies primarily on e-mail messages to transfer information about sexual assault incidents among its offices and staff (see fig. 2).
Under this system, the VHA Central Office is notified of sexual assault incidents through issue briefs submitted by VISNs via e-mail to one of three VISN support teams within the VHA Office of the Deputy Under Secretary for Health for Operations and Management. These issue briefs are then forwarded to the Director for Network Support within this office for review and follow-up with VA medical facilities if needed. Following review, the Director for Network Support forwards issue briefs to the Office of the Principal Deputy Under Secretary for Health for distribution to other VHA offices on a case-by-case basis, including the program offices responsible for residential programs and inpatient mental health units. Program offices are sometimes asked to follow up on incidents in their area of responsibility. We found that this system did not effectively communicate information about sexual assault incidents to the VHA Central Office officials who have programmatic responsibility for the locations in which these incidents occurred. For example, VHA program officials responsible for both residential programs and inpatient mental health units reported that they do not receive regular reports of sexual assault incidents that occur within their programs or units at VA medical facilities and were not aware of any incidents that had occurred in these programs or units. However, during our review of VA police files we identified at least 18 sexual assault incidents that occurred from January 2007 through July 2010 in the residential programs or inpatient mental health units of the five VA medical facilities we reviewed. Had the management reporting stream been functioning properly, these program officials would have been notified of these incidents and of any others that occurred in other VA medical facilities’ residential programs and inpatient mental health units.
Without the regular exchange of information on sexual assault incidents that occur within their areas of programmatic responsibility, VHA program officials cannot effectively address the risks of such incidents in their programs and units and do not have the opportunity to identify ways to prevent incidents from occurring in the future. In early 2011, VHA leadership officials told us that initial efforts, including sharing information about sexual assault incidents with the Women Veterans Health Strategic Health Care Group and VHA program offices, were under way to improve how information on sexual assault incidents is communicated to program officials. However, these improvements have not been formalized within VHA or published in guidance or policies and are currently performed only on an informal, ad hoc basis, according to VHA officials. In addition to deficiencies in information sharing, we identified deficiencies in the monitoring of sexual assault incidents within the VHA Central Office. VHA’s Office of the Deputy Under Secretary for Health for Operations and Management, the first VHA office to receive all issue briefs related to sexual assault incidents, does not currently have a system that allows VHA Central Office staff to systematically review or analyze reports of sexual assault incidents received from VA medical facilities through the management reporting stream. Specifically, we found that this office does not have a central database to store the issue briefs that it receives and instead relies on individual staff to save issue briefs submitted to them by e-mail to electronic folders for each VISN. In addition, officials within this office said they do not know the total number of issue briefs submitted for sexual assault incidents because they do not have access to all former staff members’ files.
As a result of these issues, staff from the Office of the Deputy Under Secretary for Health for Operations and Management could not provide us with a complete set of issue briefs on sexual assault incidents that occurred in all VA medical facilities without first contacting VISN officials to resubmit these issue briefs. Such a limited archive system for reports of sexual assault incidents received through the management reporting stream leaves VHA unable to track and trend sexual assault incidents over time. While VHA has, through its National Center for Patient Safety (NCPS), developed systems for routinely monitoring and tracking patient safety incidents that occur in VA medical facilities, these systems do not track sexual assaults and similar safety incidents. Without a system to track and trend sexual assaults and other safety incidents over time, the VHA Central Office cannot identify and address serious problems that jeopardize the safety of veterans in its medical facilities. VA does not have risk assessment tools specifically designed to examine sexual assault-related risks that some veterans may pose while they are being treated at VA medical facilities. Instead, VA clinicians working in the residential programs and inpatient mental health units at medical facilities we visited said they rely mainly on information about veterans’ legal histories, including any history of violence, which is examined as part of a multidisciplinary admission assessment process to assess the risks veterans pose to themselves and others. Clinicians also reported that they generally rely on veterans’ self-reported information, though this information is not always complete or accurate. Finally, we found that VHA’s guidance on the collection of legal history information in residential programs and inpatient mental health units does not specify the type of legal history information that should be collected and documented.
VHA officials and clinicians working in the residential programs and inpatient mental health units at medical facilities we visited told us that VHA does not have risk assessment tools specifically designed to examine sexual assault-related risks that some veterans may pose while being treated at VA medical facilities. However, these officials and clinicians noted that such risks are assessed and managed by clinical staff. VHA officials told us that since no evidence-based risk assessment tool for sexual assault and other types of violence exists, VHA relies on the professional judgment of clinicians to identify and manage risks through appropriate interventions. To do this, VA clinicians generally assess the overall risks veterans pose to themselves or others in the VA population by reviewing veterans’ medical records and conducting various interdisciplinary assessments. Specifically, clinicians said that they review medical records for information about veterans’ potential for violence and medical conditions. In addition, the interdisciplinary assessments clinicians are required to conduct include biopsychosocial assessments, nursing assessments, suicide risk assessments, and other program-specific assessments. In residential programs and inpatient mental health units, biopsychosocial assessments are a standard part of the admissions process and capture several types of information clinicians can use to assess risks veterans may pose. This information includes inquiries about veterans’ legal histories; any violence they may have experienced as either a victim or perpetrator, including physical or sexual abuse; childhood abuse and neglect; and military history and trauma. The examination of legal history information is an important part of clinicians’ assessments of sexual assault risks veterans may pose. 
Clinicians from all five medical facilities we visited explained that such legal history information is primarily obtained through veterans voluntarily self-reporting these issues during the biopsychosocial assessment process. Clinicians also cited other sources of information that could be used to learn about veterans’ legal issues, including family members, the court system, probation and parole officers, VHA justice outreach staff, and Internet searches of public registries containing criminal justice information. However, clinicians reported limitations in the use of several of these sources. In some cases, veterans must authorize the disclosure of their criminal or medical information before it can be released to a VA medical facility—although clinicians noted that veterans who have a legal restriction on where they may reside or need to meet probation or parole requirements while in treatment are often willing to release information. In addition, clinicians reported challenges in contacting veterans’ families to obtain information because many veterans, particularly those who were homeless before entering treatment, have no family support system. Further, VA’s Office of General Counsel and VHA Central Office officials told us that VHA staff cannot conduct background checks on veterans applying for VA health care services, including Internet searches of public sources of criminal justice information, because VHA lacks legal authority to collect or maintain this information. VA clinicians from residential programs and inpatient mental health units at the five medical facilities we visited said that although they inquire about veterans’ past legal issues, they do not always obtain timely, complete, or reliable information on these issues from veterans. These clinicians noted that although many veterans are eventually forthcoming about their legal history, some may not disclose this information during the admission assessment or ongoing reassessment processes.
For example, clinicians told us that they sometimes learned about particular legal issues, such as an arrest warrant or parole requirements, after veterans had been admitted to the program or when they were being discharged. They explained that sometimes veterans are uncomfortable discussing legal or sexual abuse issues during their admission interviews, but may share this information over time as they become comfortable with their treatment team. However, these clinicians noted that sometimes these issues do not come to light until veterans are beginning their transitions into community housing during the discharge process. Nevertheless, clinicians reported that they try to encourage veterans to disclose their full legal histories because it helps them to identify and address mental health problems that may have contributed to veterans’ encounters with the legal system and to aid the transition to independent community living. To determine whether legal history information in veterans’ medical records was complete, we reviewed the biopsychosocial assessments of seven veterans at our selected medical facilities who were registered sex offenders. While nearly all of these assessments documented that medical facility clinicians had inquired about the veterans’ legal issues, the issues themselves were not consistently included in the assessments. The extent to which legal history was documented for these seven veterans varied—from assessments containing detailed information about current and past criminal convictions, including the veterans’ sex offense violations and conviction dates, to assessments that did not contain any information about their past or current legal history.
Specifically, four of these seven assessments contained detailed descriptions of the veterans’ legal histories, including information on sex offense violations; two contained limited descriptions of the veterans’ legal histories; and one contained no information on the veteran’s legal history. In addition, an eighth veteran who was a patient at one of our selected medical facilities was also listed in the state’s publicly available sex offender registry, but we could not review a biopsychosocial assessment for this veteran because the medical facility had not conducted one, as required by policy. Incomplete or missing information about veterans’ legal histories and histories of violence can hinder clinicians’ abilities to effectively assess risks, provide appropriate treatment options, and ensure the safety of all veterans. In particular, some clinicians noted that insufficient information about veterans’ legal backgrounds can affect their ability to make appropriate program residency placement decisions and to assist veterans in developing appropriate housing and employment plans for their reintegration into the community. For example, clinicians reported that they face challenges in assisting some homeless veterans in finding jobs or housing partly because outside entities often conduct background checks before accepting veterans into their programs, and VA staff cannot always effectively help veterans navigate those issues if they lack relevant or timely information about veterans’ legal histories. Clinicians also said that knowledge about legal issues—such as pending court appearances, criminal charges, or sentencing requirements—is useful because such issues, if not adequately addressed, can interrupt or delay rehabilitation treatment services at VA or prevent veterans from using certain community resources when they are discharged.
Finally, clinicians said that insufficient information about these issues affects their ability to identify actions to manage risks and make informed resource allocation decisions, such as increasing patient supervision, altering clinical staff assignments, or requesting VA police assistance. VHA’s assessment of veterans in its mental health programs for sexual assault-related risks is limited by a lack of specific guidance. Although VA clinicians are required to conduct comprehensive assessments that include the collection of veterans’ legal histories, VHA has limited guidance on how such information should be collected and documented in residential programs and inpatient mental health units. Residential programs. Current VHA policy for residential programs requires that information about veterans’ legal histories and current pending legal matters be included in biopsychosocial assessments, but does not specify the extent to which such information should be documented in veterans’ medical records or delineate sources that may be used to address this requirement. Specifically, this VHA policy does not include descriptions of the type of legal history information clinicians should document in the biopsychosocial assessment portion of veterans’ medical records. For example, there are no specific requirements for clinicians to document past incarcerations or convictions and the dates when these events occurred. Currently, VHA delegates the responsibility for developing specific admission policies and procedures to the VA medical facility residential program managers, who may in turn delegate this responsibility to appropriate staff members.
We found that medical facility level policies and procedures for the medical facilities we visited generally mirrored VHA’s broad guidance in this area, although some medical facilities had procedures that outlined the specific information that clinicians should collect related to veterans’ legal backgrounds—such as the type and date of convictions, description of pending legal charges or warrants, and time spent in jail or prison. Inpatient mental health units. VHA officials responsible for inpatient mental health units reported that broad VHA guidance requires inpatient mental health clinicians to conduct biopsychosocial assessments for patients admitted to these units. However, unlike residential programs, there is currently no VHA policy that specifically defines how inpatient mental health units should collect this legal history information. The broad guidance VHA officials cited, such as the VA/DOD Clinical Practice Guidelines for Post-Traumatic Stress and The Joint Commission standards, requires the collection of legal history information as part of the initial assessment, but does not fully specify the type of legal history information that must be included in veterans’ medical records. A VHA official responsible for inpatient mental health units throughout VA confirmed that guidance has not been issued regarding the legal history information that may or may not be collected by clinicians in inpatient mental health units or how information obtained from veterans should be documented. Without clear guidance on what legal history information should be collected and how this information should be documented in veterans’ medical records, there is no assurance that clinicians are comprehensively identifying and analyzing sexual assault-related risks or that legal history information is collected and documented consistently during biopsychosocial assessments. 
The residential programs and inpatient mental health units at the five VA medical facilities we visited reported using several types of patient-oriented and physical precautions to prevent safety incidents, such as sexual assaults, from occurring in their programs. Patient-oriented precautions included the use of flags on veterans’ electronic medical records to notify staff of individuals who may pose threats to the safety of others, and increased levels of observation for veterans who clinicians believe may pose risks to others. Physical precautions in medical facilities we visited included monitoring precautions used to observe patients, security precautions used to physically secure facilities and alert staff of problems, and staff awareness and preparedness precautions used to educate staff about security issues and provide police assistance. However, at the facilities we visited, we found serious deficiencies in the use and implementation of certain physical security precautions, such as malfunctioning alarm systems and unmonitored security cameras. Staff from the residential programs and inpatient mental health units at the five VA medical facilities we visited reported using several types of patient-oriented precautions—techniques that focus on the patients themselves as opposed to the physical features of clinical areas—to prevent safety incidents from occurring in their programs. Generally, these precautions were not specifically geared toward preventing sexual assaults, but were used to prevent a broad range of safety incidents, including sexual assaults. We found that some precautions were used by staff in both residential programs and inpatient mental health units, while other precautions were specific to only one of these settings. Some of the patient-oriented precautions we noted during our site visits included the following: Using patient medical record flags.
Staff in residential programs and inpatient mental health units reported that they can request that an electronic flag be placed on a veteran’s medical record when they have concerns about the individual’s behavior and reported that they use these flags to help inform their interactions with veterans. Relocating or separating veterans. Staff in residential programs and inpatient mental health units noted that they may move or separate patients who have the potential for conflict with other veterans to help prevent incidents from occurring. For example, at one medical facility we visited, such relocations involved moving veterans whom clinical staff determine to be safety risks to rooms closer to the nurses’ station where they can be monitored more closely. Staff from some of the medical facilities we visited reported that veterans who pose a threat to others may also be moved to areas where they have restricted contact with others in the unit. Setting expectations and using patient contracts. Residential program staff reported using several contract or patient education mechanisms to reinforce both what is expected of veterans in these programs and what behaviors are prohibited during their stay. For example, at one medical facility we visited, veterans signed treatment agreements noting that actual violence, threats of violence, sexual harassment, and other actions were not permitted and could result in discharge from the program. At another medical facility we visited, patients signed a form agreeing to the program’s policy that any form of physical contact, such as grabbing, hugging, or kissing another person, was grounds for discharge from the program. Increasing direct patient observation. Staff in inpatient mental health units we visited reported using increased levels of direct patient observation to help prevent safety incidents.
For example, two medical facilities we visited used graduated levels of observation for veterans who they felt posed safety risks or who were particularly vulnerable. These medical facilities included all women veterans on the unit in these more frequent staff check-ins to help ensure their safety and prevent incidents from occurring. In addition, staff from one inpatient mental health unit we visited placed a long-term mental health patient with a tendency to inappropriately touch staff and patients on permanent one-to-one observation status after several sexual assault incidents occurred. VA medical facilities we visited employed a variety of physical security precautions to prevent safety incidents in their residential programs and inpatient mental health units. Typically, medical facilities had discretion to implement these precautions based on the needs of their local medical facility within broad VA guidelines. As a result, the types of physical security precautions used in the five medical facilities we visited varied. In general, physical security precautions were used to prevent a broad range of safety incidents, including sexual assaults, but were not targeted toward the prevention of sexual assaults only. We classified these precautions into three broad categories: monitoring precautions, security precautions, and staff awareness and preparedness precautions (see table 4). Monitoring precautions were those designed to observe and track patients and activities in residential and inpatient settings. For example, at some VA medical facilities we visited, closed-circuit surveillance cameras were installed to allow VA staff to monitor areas and to help detect potentially threatening behavior or safety incidents as they occur. Cameras were also used to passively document any incidents that occurred.
Staff in all the units we visited also conducted periodic rounds of the unit, which involved staff walking through the program areas to monitor patients and activities, either at regular intervals or on an as-needed basis. Security precautions were those designed to maintain a secure environment for patients and staff within residential programs and inpatient mental health units and to allow staff to call for help in case of any problems. For example, the units we visited regularly used locks and alarms at entrance and exit access points, as well as locks and alarms for some patient bedrooms. Another security precaution we observed was the use of stationary, computer-based, and portable personal panic alarms for staff. Finally, we observed that some of the programs we visited had established separate bedrooms, bathrooms, or other areas for women veterans, or had placed women veterans in designated locations within the units for security purposes. Staff awareness and preparedness precautions were those designed both to educate residential program and inpatient mental health unit staff about, and prepare them to deal with, security issues and to provide police support and assistance when needed. For example, the medical facilities we visited regularly required training for staff on the prevention and management of disruptive behavior. Another preparedness precaution in use in some units was the establishment of a regular VA police presence through activities such as police conducting rounds or holding educational meetings with patients. Finally, all medical facilities we visited had a functioning police command and control center, which program staff could contact for police support when needed. We found that the VA medical facilities we visited implemented physical security precautions in a variety of ways. These precautions varied not only by medical facility, but also among residential and inpatient settings.
Using broad VA guidelines, the medical facilities we visited generally determined which type of physical precautions would best meet the needs of their units and populations. As a result, we found that some precautions were used by all five medical facilities we visited, while others were in place in only some of these medical facilities. Inpatient mental health units. Physical security precautions in place at all five medical facilities we visited included the use of regular staff rounds to observe patients and clinical areas, locked unit entrances to prevent entry by unauthorized individuals, and stationary or computer-based panic alarm systems. Further, all units we visited used some combination of stationary or computer-based panic alarms, safety whistles staff could carry with them while on duty, and mandatory training on preventing and managing disruptive behavior. Some of these precautions used at all five medical facilities’ inpatient mental health units were implemented in different ways across those units. For example, while all inpatient mental health units used some type of panic alarm system, the specific system in use within each unit varied; some units used stationary panic alarm buttons fixed to walls or desks, while others used a computer-based system in which staff would press two keys simultaneously on their computers to trigger the alarm. The inpatient mental health units also varied with respect to where their stationary panic alarms sounded. At three medical facilities, the inpatient units’ stationary or computer-based panic alarms sounded at the medical facility’s police command and control center. At another medical facility, two types of panic alarms were used. 
The stationary panic alarms used by this facility’s inpatient mental health units sounded both at the police command and control center and on the inpatient unit itself to instantly alert unit staff members if a panic alarm was depressed, while the computer-based panic alarms used at the nursing stations sounded only at the police command and control center. Alarms in use at the fifth medical facility we visited sounded at the units’ nursing stations. Finally, while all five units had locked entrances, four of the units used physical keys to open the locks on the entrance doors, while the unit at the fifth medical facility used a keyless entry approach in which staff used their badges to electronically enter the units and relied on physical keys only if the keyless system was not functioning. Other precautions were present in only some of the inpatient mental health units we visited. For example, three medical facilities used closed-circuit surveillance cameras on their inpatient units to varying degrees. Cameras in place at one of these medical facilities could be monitored at the unit’s nursing station and were used to monitor the entrance doors, common areas, and seclusion rooms used for veterans who needed to be isolated from others. At another medical facility, cameras were used in a similar fashion, except that this unit did not use cameras to monitor veterans in seclusion rooms. Cameras in place at the remaining medical facility were part of a passive system that was not actively monitored by staff at the unit’s nursing station and was used only to record incidents at the entrance doors and common areas. One of these medical facilities also used alarms on bedroom doors that sounded when the door was opened. These door alarms were installed on all bedrooms used by women and, on an as-needed basis, on bedrooms used by other veterans.
The ability to instantly alert staff of either unexpected entries or exits from these rooms could potentially minimize response time if an incident occurred. This latter medical facility also used a community policing approach, with one VA police officer dedicated to meeting regularly with inpatient mental health unit staff and patients to build relationships and help address any issues or concerns that arose. Residential programs. Physical security precautions in place at all five medical facilities’ non-CWT/TR residential programs included the use of regular staff rounds to observe patients, staff training on the prevention and management of disruptive behavior, the use of surveillance cameras to monitor program areas, and the placement of women veterans in designated areas of the residential facility. Some of these commonly used precautions were implemented in different ways across the five medical facilities. For example, some medical facilities placed women veterans in separate bedrooms located closest to the nursing stations, while others placed only women veterans in a separate wing of the facility. Medical facilities’ residential programs also varied with respect to where their closed-circuit camera feeds could be viewed. At four of the five medical facilities we visited, the camera feeds could be viewed by staff at the programs’ nursing stations or security desks, but at two medical facilities, cameras at the domiciliary could also be viewed by staff at VA police command and control centers. At all medical facilities, the camera systems were passive and not actively monitored by staff. Other precautions were used only in some of the five medical facilities’ non-CWT/TR residential programs. For example, residential programs in four of five medical facilities used stationary or computer-based panic alarms to alert others in case of emergency; the remaining medical facility did not use any form of stationary or computer-based panic alarm system. 
The four medical facilities' stationary alarms varied with respect to where they sounded. In addition, only one medical facility we visited provided portable personal panic alarms with GPS capability to its residential program staff. A dedicated VA police presence was also used in two of the five medical facilities we visited. One of these medical facilities permanently staffed VA police officers at a residential program located off the medical facility's main campus, while the other medical facility's community policing officer met regularly with residential program staff and patients to facilitate more direct communications between the programs and VA police at the medical facility. CWT/TR residential programs. The three CWT/TR residential programs we visited used several types of physical security precautions. For example, two of the three CWT/TR programs we visited used closed-circuit surveillance cameras; one medical facility used surveillance cameras to record activity at entrances and exits, while another medical facility used surveillance cameras to record the parking lot areas. Neither of these locations actively monitored the camera feeds. In addition, one medical facility reported using regular rounds and conducting bed checks. Another medical facility had individual locks on bedroom doors; other sites did not. Only one of the three CWT/TR programs we visited accepted women; its apartment-style structure allowed women veterans to be placed in separate apartments. The other two CWT/TRs did not provide services for women veterans due to safety and privacy concerns stemming from their single-family home structures. During our review of the physical security precautions in use at the five VA medical facilities we visited, we observed seven weaknesses in three areas.
These weaknesses included malfunctions in stationary and portable personal panic alarm systems, inadequate monitoring of security cameras, and insufficient staffing of police and security personnel (see table 5). Inadequate monitoring of closed-circuit surveillance cameras. We observed that VA staff in the police command and control center were not continuously monitoring closed-circuit surveillance cameras at all five VA medical facilities we visited. For example, at one medical facility, the system used by the residential programs at that medical facility could not be monitored by the police command and control center staff because it was incompatible with systems installed in other parts of the medical facility. According to this medical facility's VA police, the residential program staff did not consult with VA police before installing their own system. At another medical facility where staff in the police office monitor cameras covering the residential programs' grounds and parking area, we found that the police office was unattended part of the time. In addition, at the remaining three medical facilities we visited, staff in the police command and control centers assigned to monitor medical facility surveillance cameras had other duties that prevented them from continuously monitoring the camera feeds. Specifically, they were also responsible for serving as telephone operators and police/emergency dispatchers for the entire VA medical facility. During our direct observations of their activities, we noted that they were not monitoring the camera feeds continuously. Although effective use of surveillance camera systems cannot necessarily prevent safety incidents from occurring, lapses in monitoring by security staff compromise the effectiveness of these systems in place to help prevent or lessen the severity of safety incidents. Alarm malfunctions. At least one form of alarm failed to work properly when tested at four of the five medical facilities we visited.
For example, at one medical facility, we tested the portable personal panic alarms used by residential program staff and found that the police command and control center could not always properly pinpoint the location of the tester when an alarm was activated. When we tested this alarm inside a building at this campus, it functioned properly; however, when we tested it outside, the location identified as the site of the alarm was at least 100 feet away from the location where we set off the alarm. Further, when we tested an emergency call box located outside the entrance to the residential program buildings at this same medical facility, the call went to a central telephone operator at the VA medical facility switchboard—not the VA police command and control center—and the system improperly identified our tester as calling from an elevator rather than from our location outside the residential program building. At another medical facility that used stationary panic alarms in inpatient mental health units, residential programs, and other clinical settings (i.e., staff offices, nursing stations, and common rooms), almost 20 percent of these alarms throughout the medical facility were inoperable. Many of these alarms were inoperable because of ongoing construction of new units at the medical facility, but others were located in parts of the medical facility still in use. It is unclear whether staff in these other areas were aware that these alarms were inoperable and could not be used to call for help if they needed it. At an inpatient mental health unit in a third medical facility, our tests of the computer-based panic alarm system detected multiple alarm failures. Specifically, three of the alarms we tested failed to properly pinpoint the location of our tester because the medical facility's computers had been moved to different locations and were not properly reconfigured.
Finally, at a fourth medical facility, alarms we tested in the inpatient mental health unit sounded properly, but staff in the unit and VA police responsible for testing these alarms did not know how to turn them off after they were activated. In each of the cases where alarms malfunctioned, VA staff were not aware the alarms were not functioning properly until we informed them. Deficiencies like these at VA medical facilities could lead to delayed response times and seriously erode efforts to prevent or mitigate sexual assaults and other safety incidents. Inadequate documentation or review of alarm system testing. We found that one of the five sites we visited failed to properly document tests of its residential programs' alarm systems, although testing of alarms is a required element in VA's Environment of Care Checklist. Testing of alarm systems is important to ensure that systems function properly, and not having complete documentation of alarm system testing is an indication that periodic testing may not be occurring. In addition, three medical facilities reported using computer-based panic alarms that are designed to be self-monitoring to identify cases where computers equipped with the system fail to connect with the servers monitoring the alarms. All three of these medical facilities stated that due to the self-monitoring nature of these alarms, they did not maintain alarm test logs of these systems. However, we found that at two of these three medical facilities these alarms failed to properly alert VA police when tested. Such alarm system failures suggest that the self-monitoring systems may not be effectively alerting medical facility staff of alarm malfunctions when they occur, indicating the need for these systems to be periodically tested by VA police. Alarms failed to alert both police and unit staff.
In inpatient mental health units at all five medical facilities we visited, stationary and computer-based panic alarm systems we tested did not alert staff in both the VA police command and control center and the inpatient mental health unit where the alarm was triggered. Alerting both locations is important to better ensure that timely and proper assistance is provided. At four of these medical facilities, the inpatient mental health units’ stationary or computer-based panic alarms notified the police command and control centers but not staff at the nursing stations of the units where the alarms originated. Had these alarms been used in real emergencies, response times may have been delayed because staff in the police command and control center would have had to inform the inpatient mental health unit that an alarm had been activated by someone within their unit. At the fifth medical facility, the stationary panic alarms only notified staff in the unit nursing station, making it necessary to separately notify the VA police. Finally, none of the stationary or computer-based panic alarms used by residential programs notified both the police command and control centers and staff within the residential program buildings when tested. Limited use of portable personal panic alarms. Electronic portable personal panic alarms were not available for the staff at any of the inpatient mental health units we visited and were available to staff at only one residential program we reviewed. In two of the inpatient mental health units we visited, staff were given safety whistles they could use to signal others in cases of emergency, personal distress, or concern about veteran or staff safety. However, relying on whistles to signal such incidents may not be effective, especially when staff members are the victims of assault. 
For example, a nurse at one medical facility we visited was involved in an incident in which a patient grabbed her by the throat and she was unable to use her whistle to summon assistance. Some inpatient mental health unit staff we spoke with indicated an interest in having portable personal panic alarms to better protect them in situations like these. VA police staffing and workload challenges. At most medical facilities we visited, VA police forces and police command and control centers were understaffed, according to medical facility officials. For example, during our visit to one medical facility, VA police officials reported being able to staff just two officers per 12-hour shift to patrol and respond to incidents at both the medical facility and at a nearby 675-acre veterans cemetery. While this staffing ratio met the minimum standards for VA police staffing, having only two police officers to cover such a large area could potentially increase response times should a panic alarm activate or other security incident occur on medical facility grounds. We also found that this medical facility had too few officers and staff to effectively police the facility and maintain a productive police force. The medical facility had a total of nine police officers at the time of our visit; according to VA staffing guidance, the minimum staffing level for this medical center should have been 19 officers. Similarly, at another medical facility, the police force was short 14 active police officers because some officers were either on military leave or awaiting the completion of pending background checks. During our visit to this medical facility, we also noted a shortage of officers at one of the medical facility's police offices responsible for the inpatient mental health units. Because of this, there were periods of time when this police office was unattended. Not all medical facilities we visited had staffing problems.
At one medical facility, the VA police appeared to be well staffed and were even able to designate staff to monitor off-site residential programs and community-based outpatient clinics. Lack of stakeholder involvement in unit redesign. As medical facilities undergo remodeling, it is important that stakeholders are consulted in the design process to better ensure that new or remodeled areas are both functional and safe. Involving the VA police, security specialists, computer experts, and staff in the affected units would better ensure that proper security precautions are built into redesign projects. We found that such stakeholder involvement on remodeling projects had not occurred at one of the medical facilities we visited. At this medical facility, some clinicians said that a lack of stakeholder involvement in the redesign of the inpatient mental health units had created several safety concerns and that postconstruction changes had to be made to the unit to ensure the safety of veterans and unit staff. Specifically, clinical and VA police personnel were not consulted about a redesign project for the inpatient mental health unit. The new unit initially included a single open nursing station that could not prevent patient access when necessary. After the unit was reopened following the renovation, there were a number of assaults, including an incident in which a veteran reached over the counter of the unit's nursing station and physically assaulted a nurse by stabbing her in the neck, shoulder, and leg with a pen. Had staff been consulted on the redesign of this unit, their experience managing veterans in an inpatient mental health unit environment would have been helpful in developing several safety aspects of this new unit, including the design of the nursing station. Less than a year after opening this unit, medical facility leadership called for a review of the unit's design following several reported incidents.
As a result of this review, the unit was split into two separate units with different veteran populations, an additional nursing station was installed, and changes were planned for the structure of both the original and newly created nursing stations—including the installation of a new shoulder-height plexiglass barricade on both nursing station counters. VA management has not remedied problems relating to the reporting of sexual assault incidents, the assessment of sexual assault-related risks, and the precautions used to prevent sexual assaults and other safety incidents in VA medical facilities. This has led to a disorganized incident reporting structure and has left VA vulnerable to the continued occurrence of such incidents and unable to take systematic action on needed improvements to prevent future incidents in all VA medical facilities. To mitigate the occurrence of sexual assaults and other safety incidents in its medical facilities and better ensure the safety of both veterans and staff, VA needs to address several areas—including the processes for reporting sexual assault incidents, the underreporting of sexual assault incidents, the assessment of risks certain veterans may pose to the safety of others, and the implementation of physical security precautions. Failure to act decisively in all of these areas would likely continue to place veterans and medical facility staff in some locations in harm’s way. To begin addressing these concerns, VA must ensure that both management and law enforcement officials are aware of the volume and specific types of sexual assault incidents that are reported through the law enforcement stream. Such awareness would help both management and law enforcement officials address safety concerns that emerge for both patients and staff throughout VA’s health care system. 
Medical facility staff remain uncertain about what types of incidents should be reported to VHA leadership and VA law enforcement officials, and prevention and remediation efforts are eroded by failing to tap the expertise of these officials. These officials can offer valuable suggestions for preventing and mitigating future sexual assault incidents and help address broader safety concerns through systemwide improvements across the VA health care system. Leaving reporting decisions to local VA medical facilities—rather than allowing VHA management and VA OIG officials to determine what types of incidents should be reported based on the consistent application of known criteria—increases the risk that some sexual assault incidents may go unreported. Moreover, uncertainty about sexual assault incident reporting is compounded by VA not having: (1) established a consistent definition of sexual assault, (2) set clear expectations for the types of sexual assault incidents that should be reported to VISN and VHA Central Office leadership officials, and (3) maintained proper oversight of sexual assault incidents that occurred in VA medical facilities. Unless these three key features are in place, VHA will not be able to ensure that all sexual assault incidents will be consistently reported throughout the VA health care system. Specifically, the absence of a centralized tracking system to monitor sexual assault incidents across VA medical facilities may seriously limit efforts to both prevent such incidents in the short and long term and maintain a working knowledge of past incidents and efforts to address them when staff transitions occur. Maintaining veterans' access to care is a priority in VA, but in those cases where veterans have a history of sexual assault or other violent acts, VA must be vigilant in identifying the risks that such veterans may pose to the safety of others at its medical facilities.
Risk assessment tools can be valuable mechanisms for identifying those veterans who pose risks to others while being treated at VA medical facilities. However, VA does not currently have a risk assessment tool specific to sexual assault and instead relies on clinicians' professional judgments. These judgments are largely informed by the assessment of veterans' legal histories, which depend heavily on self-reported data that must be accurately documented by clinicians in veterans' medical records. Moreover, current VA guidance is not specific about the extent to which current and past legal issues—such as the type or date of convictions—should be documented in veterans' medical records—a factor that further complicates the ability of VA clinicians both to compile complete legal histories on veterans and to make informed decisions about risks certain veterans may pose to other veterans and VA staff. Ensuring that medical facilities maintain a safe and secure environment for veterans and staff in residential programs and inpatient mental health units is critical and requires commitment from all levels of VA. Currently, the five VA medical facilities we visited are not adequately monitoring surveillance camera systems, maintaining the integrity of alarm systems, or ensuring an adequate police presence. Closer oversight by both VISNs and VA and VHA Central Office staff is needed to provide a safe and secure environment throughout all VA medical facilities. To improve VA's reporting and monitoring of allegations of sexual assault, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following four actions: Ensure that a consistent definition of sexual assault is used for reporting purposes by all medical facilities throughout the system to ensure that consistent information on these incidents is reported from medical facilities through VISNs to VHA Central Office leadership.
Clarify expectations about what information related to sexual assault incidents should be reported to and communicated within VISN and VHA Central Office leadership teams, such as officials responsible for residential programs and inpatient mental health units. Implement a centralized tracking mechanism that would allow sexual assault incidents to be consistently monitored by VHA Central Office staff. Develop an automated mechanism within the centralized VA police reporting system that signals VA police officers to refer cases involving potential felonies, such as rape allegations, to the VA OIG to facilitate increased communication and partnership between these two entities. To help identify risks and address vulnerabilities in physical security precautions at VA medical facilities, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following four actions: Establish guidance specifying what should be included in legal history discussions with veterans and how this information should be documented in veterans' biopsychosocial assessments. Ensure medical centers determine whether existing stationary, computer-based, and portable personal panic alarm systems operate effectively through mandatory regular testing. Ensure that alarm systems effectively notify relevant staff in both medical facilities' VA police command and control centers and unit nursing stations. Require relevant medical center stakeholders to coordinate and consult on (1) plans for new and renovated units, and (2) any changes to physical security features, such as closed-circuit television cameras. VA provided written comments on a draft of this report, which we have reprinted in appendix III. In its comments, VA generally agreed with our conclusions, concurred with our recommendations, and described the agency's plans to implement each of our recommendations. VA also provided technical comments, which we have incorporated as appropriate.
Specifically, VA outlined its plan to create a multidisciplinary workgroup that will undertake efforts to respond to seven of our eight recommendations—including developing definitions of sexual assault and other safety incidents, reviewing existing data sources and communication mechanisms, developing a centralized mechanism for monitoring sexual assaults and other safety incidents, and developing risk assessment and management guidance. The workgroup will be co-chaired by the Acting Assistant Deputy Under Secretary for Health for Clinical Operations and the Chief Consultant for the Women Veterans Health Strategic Health Care Group. Participants will include representatives from VA field operations and the following offices: (1) the VHA Deputy Under Secretary for Health for Operations and Management; (2) the VHA Deputy Under Secretary for Health for Policy and Services; (3) the VHA Principal Deputy Under Secretary for Health; (4) the VA Office of Security and Law Enforcement; and (5) other offices as needed, including the VA Office of General Counsel. As outlined by VA, the workgroup will review current data sources, the organization and structure of VHA's methods for reporting sexual assaults and other safety incidents, and the agency's current response to sexual assault incidents. In addition, the workgroup will review and evaluate risks and efforts to prevent sexual assaults. Finally, the workgroup will assess the status of current policies within VHA and address which organizational initiatives and policies should be updated. According to VA's comments, the workgroup will provide the Under Secretary for Health and his Deputies with monthly verbal updates on its progress, as well as an initial action plan by July 15, 2011, and a final report by September 30, 2011.
In addition, VA stated in its comments that the Office of the Deputy Under Secretary for Health for Operations and Management will work in conjunction with this multidisciplinary workgroup on a number of initiatives to address panic alarm system testing and coordination on renovation and construction at VA medical facilities. Initiatives described in VA’s comments specifically included efforts to: (1) re-emphasize the need for routine testing of panic alarm systems; (2) examine existing VHA policy to determine if revisions are needed to ensure that regular testing of alarm systems is required and preventative maintenance is performed on these systems; (3) re-emphasize the importance of coordination at the local level to ensure that safety and security are considered during construction and renovation processes at local levels; and (4) determine how such coordination can be formalized as part of the planning and design processes for all construction processes in conjunction with the VA Office of Construction. Finally, to address our remaining recommendation, the VA OSLE will develop a mechanism that will directly prompt VA police officers to report potential felonies, including rape, to the VA OIG when these offenses are recorded in the centralized police reporting system. In its comments, VA stated that this system will also send a message to a specialized mailbox alerting VA OIG investigators that a potential felony has been recorded in the centralized police reporting system. We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. This appendix describes the information and methods we used to examine: (1) VA's processes for reporting sexual assault incidents and the volume of these incidents reported in recent years; (2) the extent to which sexual assault incidents are fully reported and what factors may contribute to any observed underreporting; (3) how medical facility staff determine sexual assault-related risks veterans may pose in residential and inpatient mental health settings; and (4) the precautions in place in residential and inpatient mental health settings to prevent sexual assaults and other safety incidents and any weaknesses in these precautions. Specifically, we discuss our methods for selecting VA medical facilities for site visits; identifying appropriate Department of Veterans Affairs (VA) and Veterans Health Administration (VHA) Central Office officials to interview; assessing the extent to which sexual assault incidents are fully reported; determining what legal history information is captured in veterans' medical records; and examining the physical security precautions in use in selected residential programs and inpatient mental health units. In addition to the methods described below, we also reviewed relevant VA and VHA policies, handbooks, directives, and other guidance documents to inform our overall review of these issues whenever possible. We conducted five site visits to VA medical facilities to obtain the perspectives of medical facility-level officials and clinicians working in residential programs and inpatient mental health units and to observe the types of physical security precautions used within these medical facilities.
To identify VA medical facilities for our site visits, we examined available VA and medical facility-level information to ensure our sample included medical facilities with the following characteristics: Presence of both residential programs and inpatient mental health units. We identified medical facilities that had both types of programs by consulting VA documentation of residential programs and inpatient mental health units. Presence of a variety of residential program specialties. We identified medical facilities that (1) had at least one residential program—including domiciliaries and residential rehabilitation treatment programs (RRTP)—and (2) had a compensated work therapy/transitional residence (CWT/TR) program wherever possible. In addition, we selected medical facilities that had a variety of RRTP program specialties designed to treat particular mental health issues, such as post-traumatic stress disorder (PTSD) and substance abuse. Various levels of experience reporting sexual assault incidents. Using sexual assault case files provided by the VA Office of Inspector General (OIG) Office of Investigations—Criminal Investigations Division—we identified VA medical facilities with a wide variety of experiences reporting sexual assault incidents, including one medical facility with no reported sexual assault incidents and several others that had reported a number of sexual assault incidents that occurred within their residential programs or inpatient mental health programs. This ensured that the VA medical facilities we visited captured a range of perspectives on the reporting of sexual assault incidents. Various medical facility sizes. We identified medical facilities with different campus sizes and types of on-site programs by determining whether each medical facility was a single or multisite medical facility and considering several other aspects of medical facility design, such as the presence of on-site day care centers.
Using these criteria, we judgmentally selected five VA medical facilities to visit during our fieldwork. During our site visits to these locations, we interviewed each medical facility's leadership team; residential program and inpatient mental health unit managers and staff; VA police; quality and patient safety managers; disruptive behavior committee members; women veterans program manager; military sexual trauma program coordinator; and veterans justice outreach program coordinator. We spoke with these officials about a variety of topics, including incident reporting, risk assessment practices, and precautions used to prevent safety incidents, including sexual assaults. In addition, we spoke with officials from the four Veterans Integrated Service Networks (VISN) responsible for managing these medical facilities to discuss their expectations, policies, and procedures for reporting sexual assault incidents. We also spoke with each VISN's Health Care for Re-entry Veterans program managers to gain additional insight on these programs. Information obtained from our visits to selected VA medical facilities and interviews with selected VISNs cannot be generalized to all VISNs and VA medical facilities throughout the nation. We also interviewed VA and VHA Central Office officials responsible for incident reporting; law enforcement oversight; mental health programs; women veterans; risk assessment; patient privacy; and legal issues. We spoke with the following offices at the department level within VA: (1) the Office of Security and Law Enforcement (OSLE); (2) the Integrated Operations Center (IOC); (3) the Office of General Counsel; and (4) the OIG's Office of Investigations—Criminal Investigations Division.
We also interviewed officials from the following offices within VHA Central Office: (1) the Office of the Deputy Under Secretary for Health for Operations and Management; (2) the Office of the Principal Deputy Under Secretary for Health; (3) the Office of Mental Health Services; (4) the Women Veterans Health Strategic Health Care Group; and (5) the Information Access and Privacy Office. To assess the effectiveness of the reporting of sexual assault incidents, we reviewed documentation of sexual assault incidents from VHA management officials and VA law enforcement entities. To analyze the reporting process for sexual assault incidents, we requested documentation of these incidents from our selected VISNs; VHA’s Office of the Deputy Under Secretary for Health for Operations and Management; VA OSLE; and VA OIG. For all information we requested, we asked VHA or VA officials to send us either issue briefs or investigation documentation that fell within the definition of sexual assault used for the purposes of this report. To review reports submitted through VHA’s management reporting stream, we requested copies of issue briefs on sexual assault incidents sent to our selected VISNs and the VHA Office of the Deputy Under Secretary for Health for Operations and Management. We also asked our selected VISNs to identify which of these issue briefs were sent to the VHA Central Office for further review. The four VISNs responded that in total they received 16 issue briefs and forwarded 11 of these documents to the VHA Central Office. Due to limitations in how information is archived within VHA’s Office of the Deputy Under Secretary for Health for Operations and Management, we could not determine how many issue briefs this office received through the management reporting stream across all VA medical facilities. 
To review reports submitted through VA’s law enforcement reporting stream, we requested documentation of sexual assault incidents reported to the VA police through the VA OSLE and documentation of incidents referred to the VA OIG for investigation. From the VA OSLE, we requested and received police files submitted by any VA medical facility related to sexual assault incidents that occurred since January 2005. We then limited the police files we reviewed to only those incidents that occurred between January 2007 and July 2010 due to a records schedule that requires the VA police to destroy files more than 3 years old. As a result of this requirement, our review of sexual assaults reported to the VA police during 2007 was limited to only those cases retained by VA police. Additionally, due to the lack of a centralized VA police reporting system prior to fiscal year 2009, VA medical facility police manually transmitted all reports to the VA OSLE for inclusion in our review, which means that only those reports received by VA OSLE were included in our analysis. We received a total of 520 VA police case files for the period January 2007 through July 2010, including both open and closed investigations, from the VA OSLE. In addition, we requested copies of VA OIG investigation documentation of sexual assault incidents that occurred in all VA medical facilities from January 2005 through July 2010. However, we limited our review of VA OIG investigation documentation to only those incidents that occurred between January 2007 and July 2010 to ensure that our reviews of VA police cases and VA OIG investigations covered the same time frame. We received investigation documentation on 106 closed sexual assault incidents that occurred during this time frame from the VA OIG.
Additionally, the VA OIG reported that 9 incidents were under investigation at the time of our review; we did not request documentation on these cases due to the sensitive nature of these ongoing investigations. To determine whether each of the incidents provided by the VA police and the VA OIG should be included in our analysis of sexual assault incidents that occurred in VA medical facilities between January 2007 and July 2010, we reviewed whether each incident met the definition of sexual assault used for this engagement. To complete this assessment, two analysts worked independently to make an initial determination on whether each incident met this definition, and a third analyst reviewed these initial judgments to arbitrate a final decision using predetermined decision rules. Of the 520 documents received from the VA police during the specified time frame, 284 incidents were included in our analysis, 222 were determined to be out of the scope of our review, and the remaining 14 did not have enough information in the police files to determine whether or not these cases fell within the scope of our review. We repeated this process for the 106 VA OIG investigation documents for closed investigations: 96 were included in our analysis, 7 were determined to be outside the scope of our review, and the remaining 3 did not contain enough information to determine whether or not they fell within the scope of our review. Our analyses of sexual assault incidents reported to the VA police and the VA OIG were limited to only those incidents that were reported and cannot be used to project the volume of sexual assault incident reports that may occur in future years.
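The two-analyst review with third-analyst arbitration described above can be sketched in code. This is only an illustration: the category labels, the toy judgments, and the arbitration rule are hypothetical, and the team's actual predetermined decision rules are not reproduced here.

```python
from collections import Counter

# Hypothetical scope labels for one case file.
IN_SCOPE, OUT_OF_SCOPE, INSUFFICIENT = "in scope", "out of scope", "insufficient"

def final_decision(first, second, arbitrate):
    """Two analysts judge independently; the third analyst's predetermined
    rules (modeled here as a callable) resolve any disagreement."""
    return first if first == second else arbitrate(first, second)

# Toy case files with made-up initial judgments.
judgments = [
    (IN_SCOPE, IN_SCOPE),          # agreement stands
    (IN_SCOPE, OUT_OF_SCOPE),      # disagreement goes to the arbiter
    (INSUFFICIENT, INSUFFICIENT),  # file lacked enough detail either way
]

# One possible arbitration rule: treat disagreements as insufficient.
conservative = lambda a, b: INSUFFICIENT
tally = Counter(final_decision(a, b, conservative) for a, b in judgments)
```

Tallying final decisions over all 520 police files in this way would yield counts like the 284 / 222 / 14 split reported above.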
Following verification that police and VA OIG incidents met our definition of sexual assault and comparisons of the two entities’ reported sexual assault incidents, we found data derived from these reports to be sufficiently reliable for our purposes. For our analysis of the 284 incidents reported to the VA police determined to be within the scope of our review, we identified several key data points in each case file, including the gender of the perpetrator and victim, the relationship the perpetrator and victim had to VA, and the medical facility location and VISN where the incident originated. In addition, we also placed these incidents into one of five categories to analyze the volume of several types of sexual assault incidents that occurred throughout VA medical facilities:
- Inappropriate touch—included any case involving only allegations of touching, fondling, grabbing, brushing, kissing, rubbing, or other like terms.
- Forced or inappropriate oral sex—included any case involving only allegations of forced or inappropriate oral sex.
- Forceful examination—included any case alleging only a medical examination that was painful, uncomfortable, or seemingly inappropriate to the patient.
- Rape—included any case involving rape allegations, which we defined as vaginal or anal penetration by any body part or object without consent. We deemed a file as containing a rape allegation if any of the following were noted within the file: (1) either the victim or VA staff used the term rape in their descriptions of the incident; (2) a rape kit was requested or administered; (3) allegations that sex occurred without consent, whether or not penetration was described; or (4) allegations of attempted vaginal or anal penetration without consent. In addition, cases where VA staff deemed that one or more of the victims involved were mentally incapable of giving consent for sexual activities, or that a victim’s ability to consent was otherwise impaired, were included in this category.
- Other—included any case that did not fit into the categories described above or if the incident described in the police file was unclear. In addition, cases involving consensual sexual activities between two individuals who were in a mental health or geriatric unit where both parties were found to be capable of giving consent were included in this category.
To examine the discrepancies between the number of sexual assault incidents reported to VA police and the number referred to the VA OIG, we reviewed the 67 rape allegations that were reported to VA police to determine which of these reports were referred to the VA OIG. We selected rape allegations for this additional review due to the severity of these allegations and the likelihood they would be considered potential felonies that must be reported to the VA OIG. To complete this analysis, we matched the VA police files containing rape allegations to a VA OIG investigation document wherever possible. A police file and VA OIG investigation document were considered a match when both documents discussed the same incident details—including information such as discussion of the same perpetrator and victim, medical facility, and incident date. Of the 67 rape allegations reported to the VA police, 25 had a matching VA OIG investigation document, while the remaining 42 did not. In addition, we reviewed federal statutes related to sexual offenses and sentencing classification for felonies to verify that all rape allegations included in our review met the statutory criteria for felonies under federal law. Finally, investigators from the VA OIG reviewed summaries of the 42 rape allegations that did not match VA OIG investigation documentation previously provided to determine whether or not they would have expected such cases to be reported to their office. These case summaries did not contain identifying information about the suspects, victims, or VA medical facilities involved in these incidents.
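The matching step described above, pairing a police file with an OIG investigation document when key incident details agree, can be sketched as follows. The field names, record type, and toy data are hypothetical illustrations, not the actual matching procedure or records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen dataclasses are hashable, so keys can go in a set
class IncidentKey:
    """Hypothetical key details identifying one incident across both
    reporting streams: perpetrator, victim, facility, and date."""
    perpetrator: str
    victim: str
    facility: str
    incident_date: str

def split_matches(police_files, oig_documents):
    """Partition police files into (matched, unmatched) against OIG records."""
    oig_keys = set(oig_documents)
    matched = [f for f in police_files if f in oig_keys]
    unmatched = [f for f in police_files if f not in oig_keys]
    return matched, unmatched

# Toy data: two police files, one with a matching OIG document.
police = [
    IncidentKey("suspect A", "victim A", "facility 1", "2008-06-02"),
    IncidentKey("suspect B", "victim B", "facility 2", "2009-11-20"),
]
oig = [IncidentKey("suspect A", "victim A", "facility 1", "2008-06-02")]
matched, unmatched = split_matches(police, oig)
```

Applied to the real case files, this kind of partition is what produced the 25 matched and 42 unmatched rape allegations reported above.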
Four VA OIG investigators reviewed these summaries and based their determinations on several key factors developed from their experience as law enforcement officers. We reviewed the biopsychosocial assessment sections of selected veterans’ medical records to better understand how legal history information contained in these documents could be used to inform clinicians’ assessments of sexual assault-related risks veterans may pose while they are being treated at VA medical facilities. We reviewed these assessments for all veterans who were registered sex offenders residing in the residential programs or inpatient mental health units of our selected medical facilities. To determine if registered sex offenders were residing at the medical facilities we visited, we searched the Web sites of each medical facility’s corresponding publicly available state sex offender registry and included any individual registered under the address of the selected medical facility’s residential programs or inpatient mental health units in our sample. The addresses used for these searches were provided by each medical facility. Our corresponding sample included eight veterans from three of the five medical facilities we visited. VA medical facility staff provided biopsychosocial assessments for seven of these veterans and noted that the eighth assessment was never completed by the medical facility. We analyzed the contents of these seven veterans’ biopsychosocial assessments to determine the extent to which these records contained information about these veterans’ current and past legal issues, including documentation of convictions and parole or probation status. We also reviewed information contained in these assessments regarding these veterans’ histories of sexual abuse. Our review of veterans’ biopsychosocial assessments was limited to only those veterans meeting these criteria and cannot be generalized to broader VA patient populations. 
To examine the physical security precautions in place in residential programs and inpatient mental health units, physical security experts from our Forensic Audits and Investigative Services team conducted an independent assessment of physical security measures in place at the medical facilities we visited. To conduct this assessment, these experts examined the physical security precautions in place at each of the five medical facilities we visited and identified any weaknesses they observed in these systems, using criteria based on generally recognized security standards and selected VA security requirements. These reviews included the testing of some physical security precautions, such as panic alarm systems, and interviews with staff working in the residential programs and inpatient mental health units that were reviewed. Our review of these precautions was limited to the medical facilities we visited and does not represent results from all VA medical facilities nationwide. We conducted our performance audit from May 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with standards prescribed by the Council of Inspectors General on Integrity and Efficiency.

This appendix provides additional results from our analysis of VA police reports of sexual assault incidents from January 2007 through July 2010. Cases not reported to the VA police are not included in our analysis of sexual assault incidents.
Figure 3 shows the number of sexual assault incidents reported at VA medical facilities to VA police by Veterans Integrated Service Network (VISN) from January 2007 through July 2010. This count ranged from 34 incidents reported in VISNs C and D to no incidents reported in VISN E. Table 6 shows the total number of sexual assault incidents alleging rape by gender of the perpetrator and victim from January 2007 through July 2010. Table 7 shows the total number of sexual assault incidents alleging rape by the perpetrator and victim relationship to VA from January 2007 through July 2010. Table 8 shows the total number of patient-on-patient assault incidents and patient-on-employee assault incidents by the type of sexual assault incident from January 2007 through July 2010. In addition to the contact named above, Marcia A. Mann, Assistant Director; Gary A. Bianchi; Robin Burke; Emily Goodman; Katherine Nicole Laubacher; Lisa Motley; Andy O’Connell; George Ogilvie; Carmen Rivera-Lowitt; and Cassandra Yarbrough made key contributions to this report.

Changes in patient demographics present unique challenges for the Department of Veterans Affairs (VA) in providing safe environments for all veterans treated in VA facilities. GAO was asked to examine whether or not sexual assault incidents are fully reported and what factors may contribute to any observed underreporting, how facility staff determine sexual assault-related risks veterans may pose in residential and inpatient mental health settings, and precautions facilities take to prevent sexual assaults and other safety incidents. GAO reviewed relevant laws, VA policies, and sexual assault incident documentation from January 2007 through July 2010 provided by VA officials and the VA Office of the Inspector General (OIG).
In addition, GAO visited and reviewed portions of selected veterans' medical records at five judgmentally selected VA medical facilities chosen to ensure the residential and inpatient mental health units at the facilities varied in size and complexity. Finally, GAO spoke with the four Veterans Integrated Service Networks (VISN) that oversee these VA medical facilities. GAO found that many of the nearly 300 sexual assault incidents reported to the VA police were not reported to VA leadership officials and the VA OIG. Specifically, for the four VISNs GAO spoke with, VISN and VA Central Office officials did not receive reports of most sexual assault incidents reported to the VA police. Also, nearly two-thirds of sexual assault incidents involving rape allegations originating in VA facilities were not reported to the VA OIG, as required by VA regulation. In addition, GAO identified several factors that may contribute to the underreporting of sexual assault incidents, including unclear guidance and deficiencies in VA's oversight. VA does not have risk assessment tools designed to examine sexual assault-related risks veterans may pose. Instead, VA staff at the residential programs and inpatient mental health units GAO visited said they examine information about veterans' legal histories along with other personal information as part of a multidisciplinary assessment process. VA clinicians reported that they obtain legal history information directly from veterans, but these self-reported data are not always complete or accurate. In reviewing selected veterans' medical records, GAO found that complete legal history information was not always documented. In addition, VA has not provided clear guidance on how such legal history information should be collected or documented.
VA facilities GAO visited used a variety of precautions intended to prevent sexual assaults and other safety incidents; however, GAO found some of these measures were deficient, compromising facilities' efforts to prevent sexual assaults and other safety incidents. For example, facilities often used patient-oriented precautions, such as placing electronic flags on high-risk veterans' medical records or increasing staff observation of veterans who posed risks to others. These VA facilities also used physical security precautions, such as closed-circuit surveillance cameras to actively monitor units, locks and alarms to secure key areas, and police assistance when incidents occurred. These physical precautions were intended to prevent a broad range of safety incidents, including sexual assaults, through monitoring patients and activities, securing residential programs and inpatient mental health units, and educating staff about security issues and ways to deal with them. However, GAO found significant weaknesses in the implementation of these physical security precautions at these VA facilities, including poor monitoring of surveillance cameras, alarm system malfunctions, and the failure of alarms to alert both VA police and clinical staff when triggered. Inadequate system installation and testing procedures contributed to these weaknesses. Further, facility officials at most of the locations GAO visited said the VA police were understaffed. Such weaknesses could lead to delayed response times to incidents and seriously erode efforts to prevent or mitigate sexual assaults and other safety incidents. GAO recommends that VA improve both the reporting and monitoring of sexual assault incidents and the tools used to identify risks and address vulnerabilities at VA facilities. VA concurred with GAO's recommendations and provided an action plan to address them.
The term “day trading” has various definitions. In 1999, day trading was commonly described as a trading strategy that involved making multiple purchases and sales of the same securities throughout the day in an attempt to profit from short-term price movements. Since that time, the definition has evolved. For example, NASDR and NYSE use two definitions of day trading in the recent amendments to their margin rules. First, NYSE Rule 431(f)(8)(B)(I) and NASDR Rule 2520(f)(8)(b) generally define day trading as “the purchasing and selling or the selling and purchasing of the same security in the same day in a margin account.” Second, both NYSE and NASD define a “pattern” day trader as a customer who executes four or more day trades within 5 business days, unless the number of day trades does not exceed 6 percent of their total trading activity for that period. Additionally, NASDR’s rule on approval procedures for day trading accounts defines a day trading strategy as “an overall trading strategy characterized by the regular transmission by a customer of intra-day orders to effect both purchase and sale transactions in the same security or securities.” In this report, we define day trading as consistently both buying and selling the same securities intraday via direct access technology to take advantage of short-term price movements. Day trading firms use sophisticated order routing and execution systems technology that allows traders to monitor and access the market on a real-time basis. This technology allows traders direct access to stock markets through Nasdaq Level II screens that display real-time best bid (buy) and ask (sell) quotes for any Nasdaq or over-the-counter security, including quotes between market makers trading for their own inventories. Day traders also conduct transactions through electronic communications networks (ECNs), which allow customers’ orders to be displayed to other customers and allow customers’ orders to be paired.
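As a rough illustration, the pattern day trader test described above reduces to a simple check on one 5-business-day window. This sketch is a simplification of the rule for illustration only, not the complete regulatory definition (which, among other things, specifies how day trades are counted within the window).

```python
def is_pattern_day_trader(day_trades: int, total_trades: int) -> bool:
    """Simplified four-trade / 6-percent test for one 5-business-day window."""
    if day_trades < 4:
        return False  # fewer than four day trades never triggers the rule
    # Exception: if day trades are no more than 6 percent of total trading
    # activity for the period, the customer is not a pattern day trader.
    return day_trades > 0.06 * total_trades

print(is_pattern_day_trader(4, 10))   # four day trades, 40% of activity: True
print(is_pattern_day_trader(4, 100))  # only 4% of activity, exception applies: False
```

The exception matters mainly for very active accounts, where a handful of day trades is incidental to a much larger volume of ordinary trading.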
As a result of this technology, day traders have the tools to trade from their own accounts without an intermediary such as a stock broker and can employ techniques that were previously available only to market makers and professional traders. Both rules define day trading as the purchasing and the selling or the selling and purchasing of the same security on the same day in a margin account. There are two exceptions to this definition: first, a long security position held overnight and sold the next day prior to any new purchase of the same security; and second, a short security position held overnight and purchased the next day prior to any new sale of the same security. Day trading firms register with SEC and become members of one of the SROs, such as NASD or the Philadelphia Stock Exchange; they are therefore subject to regulation by SEC and an SRO. As registered broker-dealers, day trading firms are required to comply with all pertinent federal securities laws and SRO rules. SROs generally examine every broker-dealer on a cycle ranging from annually to once every 4 years, depending on the type of firm. Day trading firms are also subject to the securities laws and oversight of the states in which they are registered. In 1999, state and federal regulators began to identify concerns about certain day trading firms’ activities. In 1999, state regulators examined and initiated disciplinary action against several day trading firms and identified several areas of concern. SEC completed an examination sweep of 47 day trading firms in 1999 and subsequently issued a report. According to SEC’s report, the examinations did not reveal widespread fraud, but examiners found indications of serious violations of securities laws related to net capital, margin, and customer lending. However, most of the examinations revealed less serious violations and concluded that many firms needed to take steps to improve their compliance with net capital, short sale, and supervision rules.
NASDR also initiated a series of focused examinations of day trading firms that covered the firms’ advertising and risk disclosures, among other areas. SEC and NASDR also initiated several enforcement actions against day trading firms and individuals in early 2000. In our 2000 report, we found that day trading among less-experienced traders was an evolving segment of the securities industry. Day traders represented less than one-tenth of 1 percent, or about 1 out of 1,000, of all individuals who bought or sold securities. However, day trading was estimated by some to account for about 15 percent of Nasdaq’s trading volume. Although no firm estimates exist for the number of active day traders, many regulatory and industry officials we spoke with generally thought 5,000 was a reasonable estimate and believed the number was stable or had gone down slightly. However, the number of open accounts at day trading firms is likely much higher. We also noted in our 2000 report that before 1997, day traders submitted most of their orders through the Small Order Execution System (SOES). We concluded that the effects of day trading in an environment that depends less on SOES and more on ECNs are uncertain. Because of these findings and our work in this area, we recommended that after decimal trading is implemented, SEC should evaluate the implications of day traders’ growing use of ECNs on the integrity of the markets. We also recommended that SEC do an additional cycle of targeted examinations of day trading firms to ensure that the firms take the necessary corrective actions proposed in response to previous examination findings.
In general, the recommendations suggested changes to NASDR’s disclosure rules and margin rule amendments and summarized comments Permanent Subcommittee Members had submitted to SEC when those rules were published for comment in the Federal Register. In addition, the Permanent Subcommittee recommended that NASDR prohibit firms from arranging loans between customers to meet margin requirements and that firms be required to develop policies to ensure that individual day traders acting as investment advisors are properly registered. Since 1999, day traders as a group and firms that offer day trading capability have continued to evolve. Most regulators and industry officials we spoke with said that day traders are generally more experienced and that fewer customers are quitting their jobs to become day traders. We also found that many day trading firms now market to institutional customers, such as hedge funds and money market managers, rather than focusing on retail customers. In addition, more day trading firms are likely to engage in proprietary trading through professional traders who trade the firms’ capital rather than their own and earn a percent of the profits. Finally, we found that traditional and on-line brokers and other entities that want to offer their customers direct access to securities markets are acquiring day trading firms. A concern raised in 1999 was that day trading firms were marketing to inexperienced traders who did not fully understand the risks of day trading and therefore lost substantial amounts of money. Some industry and regulatory officials said the combination of intense regulatory scrutiny and adverse market conditions in late 2000 and into 2001 have driven many unsophisticated traders out of day trading. Traders currently engaged in day trading are more likely to be experienced and to have a greater knowledge of the risks involved than traders in 1999. 
Industry officials said that many traders gained their experience by day trading for several years, while others were professional traders who became day traders. During our first review, regulatory and government officials were particularly concerned that day trading firms were attracting customers who were ill-suited for day trading because they lacked either the capital or the knowledge to engage in such a risky activity. Since 1999, day trading firms have begun to focus on institutional as well as retail customers, including hedge funds and small investment management companies. According to press reports, All-Tech Direct, Inc., a day trading firm, announced in August 2001 that it planned to get out of the retail business completely and was severing its relationship with all of its retail branches. Overall, institutional investors are increasingly interested in the kind of high-speed order execution that day traders get from direct access systems and the relatively low fees day traders pay to execute trades. In addition, some day trading firms that focused solely on retail customers in 1999 have since hired professional traders who trade the firms’ capital (proprietary traders). For some, this move reflects a departure from their retail customer focus. A few officials said many of their retail customers started as proprietary traders and learned to trade by using the firm’s capital rather than their own. Another change involves the growth in the number of day trading firms being acquired by other brokerages and other market participants that want the direct access technology. For example, since 1999 on-line brokers Charles Schwab and Ameritrade have purchased CyberCorp (now CyberTrader) and Tradecast, respectively. Likewise, in August 2001 T.D. Waterhouse Group Inc. announced plans to purchase one of the smaller day trading firms, R.J. Thompson Holdings.
In addition, Instinet, an ECN, purchased ProTrader as a way to offer direct access technology to its customers. Moreover, financial conglomerates are also moving toward offering fully integrated services, which include all aspects of a securities purchase, from direct access to securities markets to clearing capabilities. In September 2000, Goldman Sachs announced its planned acquisition of Spear Leeds & Kellogg, which offers such fully integrated services. Other firms with fully integrated capabilities include on-line brokerages such as Ameritrade and Datek, as well as an ECN, Instinet. Some regulatory and industry officials said that they expect traditional and discount brokerages to continue to acquire day trading firms, as these brokerages face increased pressure to provide direct market access to their more active traders (estimated at between 50,000 and 75,000). Some analysts also said that the growing trend toward direct access has been driven not only by competitive pressure but also by SEC’s new disclosure rules on order handling and trade execution, which require ECNs, market makers, and specialists to report execution data on order sizes, speed, and unfilled orders. In addition, by the end of November 2001 brokers are required to disclose the identity of the market centers to which they route a significant percentage of their orders and the nature of the broker’s relationships with these market centers, including any payment for order flow. By offering customers direct access to markets, the customer rather than the broker determines where trades are executed. Since our 2000 review, SEC and the SROs have taken various actions involving day trading activities. Specifically, NASDR has adopted rules that require firms to provide customers with a risk disclosure statement and to approve the customer’s account for day trading. 
In addition, NASDR and NYSE have amended their margin rules for day traders to impose more restrictive requirements for pattern day traders. NASDR’s margin rule amendments became effective on September 28, 2001, and NYSE’s became effective on August 27, 2001. SEC and the SROs have also continued to monitor and examine day trading firms and their activities to ensure compliance with securities laws. Finally, SEC and NASDR have settled several pending enforcement cases involving day trading securities firms and their principals. In 2000 and 2001, the SROs adopted day trading rules related to improved risk disclosure and stricter margin requirements. On July 10, 2000, SEC approved NASDR Rule 2360, Approval Procedures for Day-Trading Accounts, which requires firms that promote a day trading strategy to either (1) approve the customer’s account for a day trading strategy or (2) obtain from the customer a written agreement that the customer does not intend to use the account for day trading purposes. SEC also approved NASDR Rule 2361, Day-Trading Risk Disclosure Statement, which requires firms that promote a day trading strategy to furnish a risk-disclosure statement that discusses the unique risks of day trading to customers prior to opening an account. The new rules became effective on October 16, 2000. NASDR Rule 2361 provides a disclosure statement that, among other things, warns investors that day trading can be risky and is generally not appropriate for someone with limited resources, little investment or trading experience, or tolerance for risk (see table 1). The statement further maintains that evidence suggests that an investment of less than $50,000 significantly affects the ability of a day trader to make a profit. The disclosure statement contained in NASDR Rule 2361 incorporated many of the recommendations the Permanent Subcommittee Members made in a comment letter to SEC and subsequently summarized in its July 27, 2000, report.
The italicized text in table 1 generally represents the Permanent Subcommittee’s recommended changes that NASDR adopted. Although many of the Permanent Subcommittee’s recommendations were incorporated into the final disclosure statement, NASDR did not adopt all of them. For example, NASDR did not directly adopt the Permanent Subcommittee’s recommendations that firms presume that day trading is generally inappropriate for customers who open accounts with less than $50,000 or that firms be required to prepare and maintain records setting forth the reasons why customers with less than $50,000 are considered appropriate for day trading. Instead, NASDR incorporated the Permanent Subcommittee’s concern about the significance of the $50,000 threshold into the disclosure statement. NASDR decided not to directly incorporate these recommendations for several reasons. First, it believed that a $50,000 threshold might make sense for some investors but could be too high or too low for others. Second, NASDR was concerned that such a requirement could encourage investors to inflate the value of their assets. Lastly, NASDR’s rule (as proposed) already required a firm to document the basis on which it approved an account for day trading. In February 2001, SEC approved substantially similar amendments to NASDR and NYSE rules proposing more restrictive margin requirements for day traders. Prior to the adoption of the NASDR and NYSE amendments, margin requirements were calculated on the basis of a customer’s open positions at the end of the trading day. A day trader often has no open positions at the end of the day on which a margin calculation can be based. However, the day trader and the firm are at financial risk throughout the day if credit is extended.
To address that risk, the NASDR and NYSE rule amendments require “pattern day traders” to demonstrate that they have the ability to meet a special maintenance margin requirement for at least their largest open position during the day. Customers who meet the definition of pattern day trader under the rules must generally deposit 25 percent of the largest open position into their accounts. Both rule amendments require customers who meet the definition of a pattern day trader to have minimum equity of $25,000 in their accounts. Funds deposited into these accounts to meet the minimum equity requirement must remain there for a minimum of 2 business days following the close of business on the day a deposit was required. In addition, the rule amendments permit day trading buying power of up to four times excess margin and impose a day trading margin call on customers who exceed their day trading buying power. Further, until the margin call is met, day trading accounts are restricted to day trading buying power of two times excess margin, calculated on the basis of the cost of all day trades made during the day. If the margin call is not met by the fifth business day, day traders are limited to trading on a cash-available basis for 90 days or until the call is met. Funds deposited in an account to meet a day trading margin call must also remain in the account for 2 business days. The rule amendments also prohibit cross-guarantees to meet day trading minimum equity requirements or day trading margin calls. These more stringent margin requirements respond to concerns raised about the risks day trading can pose to traders, firms, and securities markets in general. The amendments as finalized do not fully incorporate the Permanent Subcommittee’s recommendation that the minimum equity requirement be raised from $2,000 to $50,000. Instead, SEC approved a $25,000 minimum.
NASDR believes that a $25,000 minimum equity requirement will provide “protection against continued losses in day trading accounts, while refraining from excessive restrictions on day traders with limited capital.” Moreover, both NASDR and NYSE said that broker-dealers have the option of increasing the minimum requirement based on their own policies and procedures. The Permanent Subcommittee also recommended that the margin ratio not be increased to four times excess equity from its previous level of two times. NASDR and NYSE disagreed with this proposed change, because allowing day traders to trade at a 4:1 ratio brings day trading accounts into parity with ordinary NASDR and NYSE maintenance margin account requirements, which are 25 percent, or 4:1. Moreover, officials said the change was appropriate when considered in conjunction with the other changes to the margin rules, such as the increased minimum equity requirement, the immediate consequences imposed if day trading buying power is exceeded, and the 2-day holding period for funds used to meet day trading margin requirements. The Permanent Subcommittee also recommended that NASDR propose a rule prohibiting firms from arranging loans between customers to meet margin calls. NASDR is continuing to review this issue but has not proposed rules that directly address firms’ involvement in arranging such loans. However, industry officials believe that the new margin rules indirectly address this issue because the amendments will make such lending arrangements less attractive to lenders. For example, as mentioned previously, funds deposited to meet a margin call must be left in a trader’s account for two full business days following the close of business on any day when a deposit is required, substantially increasing the risks to the lender. Previously, funds could be held in an account overnight to meet the margin call requirement. 
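The margin mechanics described above reduce to simple arithmetic. The following sketch illustrates them; the 25 percent special maintenance requirement, the $25,000 minimum equity, and the 4:1 and 2:1 buying power ratios come from the rule amendments as described in this report, while the function names and the sample account values are hypothetical.

```python
# Illustrative sketch of the pattern day trader margin arithmetic.
# Percentages and thresholds follow the NASDR/NYSE rule amendments as
# described above; function names and sample figures are hypothetical.

MIN_EQUITY = 25_000           # minimum equity for a pattern day trader
SPECIAL_MAINTENANCE = 0.25    # 25% of the largest open position during the day
NORMAL_LEVERAGE = 4           # up to 4x excess margin (parity with 25% maintenance)
RESTRICTED_LEVERAGE = 2       # 2x excess margin while a day trading call is unmet

def special_maintenance_requirement(largest_open_position):
    """Deposit generally required: 25 percent of the largest open position."""
    return SPECIAL_MAINTENANCE * largest_open_position

def day_trading_buying_power(equity, excess_margin, call_unmet=False):
    """Buying power permitted under the amended rules; zero if the account
    falls below the pattern day trader minimum equity."""
    if equity < MIN_EQUITY:
        return 0
    ratio = RESTRICTED_LEVERAGE if call_unmet else NORMAL_LEVERAGE
    return ratio * excess_margin

# A hypothetical account with $40,000 equity and $40,000 excess margin:
print(special_maintenance_requirement(100_000))        # 25000.0
print(day_trading_buying_power(40_000, 40_000))        # 160000
print(day_trading_buying_power(40_000, 40_000, True))  # 80000
```

The example shows why the 4:1 ratio mirrors the ordinary 25 percent maintenance requirement: a trader posting one dollar of excess margin can carry four dollars of positions in either framing, and an unmet day trading call immediately halves that capacity.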
Consistent with our 2000 report recommendation, SEC has continued to examine the activities of day trading firms. Specifically, since SEC’s initial sweep of 47 day trading firms from October 1998 to September 1999 and subsequent report, SEC, NASDR, and Philadelphia Stock Exchange staff have conducted examinations of all 133 day trading firms that were identified in 2000. In addition, SEC and the SROs have done follow-up examinations to determine whether the previous violations have been corrected. Moreover, NASDR officials said they prepared a special examination module for these follow-up examinations that focused on identified problem areas. According to SEC, in 2001 and 2002, SRO staff will continue to conduct routine examinations of existing day trading firms and of newly registered firms to determine compliance with applicable rules. For example, NASDR officials said that they are no longer prioritizing day trading firms for review; instead, these firms are now examined during the routine broker-dealer examination cycle or when they first register. As of August 2001, NASDR had completed about 62 such examinations. In addition, SEC said that it would continue to initiate cause examinations when appropriate. From late 1999 to early 2001, almost half of the day trading firm examinations completed by SEC were cause examinations. According to SEC and NASDR officials, day trading firms’ overall compliance with rules has improved since the 1999 sweep. Officials said that while the examinations revealed margin rule violations, short sale rule violations, misleading advertisements, and net capital deficiencies, these types of violations were occurring less frequently. SEC also identified violations of SRO and SEC rules related to supervision, maintenance of books and records, and the net capital calculation. SEC and NASDR officials said that net capital and supervision violations are not uncommon among broker-dealers in general.
We reviewed 42 SEC and 62 NASDR examination reports completed between the end of the 1999 sweep and August 2001 that looked at broker-dealers and their branches offering day trading as a strategy. Overall, written supervisory procedure failures were the most frequent violation, followed by net capital rule miscalculations. Table 2 shows the number of examinations that included violations in each area. However, many of the violations cited in the examination reports were violations that are often cited at all types of broker-dealers and were not directly related to the firm’s day trading activity, which in some cases was a small part of the firm’s overall operation. Common supervisory procedure violations involved failure to have adequate written procedures that reflect the types of business in which the firm engages. For example, some broker-dealers had added day trading to their offered services but had not changed their written supervisory procedures to address this new activity. Other firms were cited for failure to follow their internal supervisory procedures. Many of the net capital rule violations involved calculation and reporting errors. Compared with the written supervisory procedure and net capital rule violations, fewer examinations had short sale, advertising, and margin and customer-lending rule violations. The short sale rule violations included failing to properly indicate trades as “short” (sale) or “long” (purchase), effecting short sales below the price at which the last sale was reported or on a zero-minus tick, and improperly marking short orders as long without first making an affirmative determination that the securities were in the trader’s account or ready to be delivered prior to settlement.
For example, firms were cited for failure to document advertising approvals and make required submissions to NASDR. The customer lending and margin violations involved failure to secure additional funds to cover margin calls and allowing traders to trade when the Regulation T margin requirement had not been met. Numerous other deficiencies were also cited, including failure to inform customers who access SelectNet that NASD monitors trading activity and that the customers can be subject to prosecution for violations of securities laws, improper registration issues such as failure to properly register branches, and improper registration of traders. Of the SEC examinations reviewed, 34 resulted in deficiency or violation letters, 3 indicated that no violations had been found, and 7 resulted in a referral to an SRO or to SEC’s Division of Enforcement. Of the NASDR examinations we reviewed, 39 resulted in a letter of caution, 5 resulted in a compliance conference, 12 were filed without action, and at least 2 resulted in formal complaints or referrals to SEC or NASDR Enforcement. Since the enforcement actions announced in February 2000, NASDR and SEC have settled several disciplinary actions against day trading firms and their principals, including fines, civil money penalties, censures, and the expulsion of one firm from the business. SEC brought several enforcement actions related to day trading in June 2001. First, SEC instituted and settled proceedings against JPR Capital Corporation and several of the firm’s current and former executives. SEC found that the firm had violated federal margin lending rules, among other things. All of the respondents to the proceedings consented to SEC’s order without admitting or denying the allegations, agreed to pay civil money penalties, and consented to other relief. 
The firm was censured and ordered to pay a $55,000 civil penalty, to cease and desist from committing or causing any violations of specified laws and rules, and to comply with initiatives designed to improve its own compliance department. Second, SEC settled its previously instituted proceeding against All-Tech Direct, Inc. and certain of its employees for extending loans to customers in excess of limits allowed under federal margin rules. SEC censured All-Tech Direct and ordered the firm to cease and desist from committing or causing any violations of the federal margin lending rules, to pay a $225,000 civil penalty, and to retain an independent consultant selected by SEC to review and recommend improvements to All-Tech Direct’s margin lending practices. As shown in table 3, NASDR also announced enforcement actions in June 2001 against six firms and several individuals that addressed violations of federal securities laws and NASDR rules in the following areas: advertising, registration, improper loans to customers, improper sharing of commissions, short sale rules, trade reporting, and deficient supervisory procedures. Without admitting or denying the allegations, the firms and individuals agreed to the sanctions, which included censures, the expulsion of one firm, suspensions, and fines against the firms and individuals ranging from $5,000 to $250,000. According to NASDR officials, these settlements resulted from violations that occurred in prior years. While any violation is a serious issue, regulatory officials said that many of these issues have been addressed and that compliance among day trading firms is generally improving. For example, NASDR officials said that they are seeing far fewer misleading advertisements than in 1999. In August 2001, All-Tech Direct also lost an arbitration proceeding involving allegations of misleading advertising.
Four traders filed arbitration proceedings against All-Tech Direct for losses incurred in their day trading accounts. Although firm officials said that the traders lost money when they held open positions overnight—a practice day trading firms usually do not recommend—the arbitration panel ruled in favor of the plaintiffs and awarded them a total of over $456,000. All-Tech Direct officials said they plan to appeal the ruling. As mentioned previously, All-Tech Direct has announced plans to sever its relationship with all of its retail branches. In October 2001, All-Tech Direct filed the necessary paperwork to withdraw its registration as a broker-dealer. In addition to the ongoing changes in day trading and in regulatory oversight of the activity, many day trading firms have responded to changing market conditions and regulatory scrutiny. According to some industry and regulatory officials, day trading firms are generally viewed as more knowledgeable and sophisticated in terms of regulatory compliance and management than they were in 1999. We found that most Web sites of day trading firms prominently highlighted the risks associated with day trading or provided easy-to-access risk disclosures or disclaimers. In addition, the sites focused on the speed of trade executions and lower fees rather than on profits. We interviewed officials from seven day trading firms and found that many of these firms no longer actively advertise for retail customers, relying instead on personal referrals. However, other day trading firms continue to advertise, and many allow customers to open an account online via their Web site. Day trading firms have adjusted the way they operate in response to changing market conditions and regulatory scrutiny. Firm management is generally viewed as more seasoned and sophisticated than it was in 1999. Industry officials said that in general most firms have matured and provide more vigorous oversight than in the past. 
In addition to the downturn in the securities markets, particularly in the technology sector, day traders and the firms in which they trade have had to adjust to certain market changes. The first of these was decimalization, which resulted in smaller spreads between bid and ask prices. Some industry officials said that the change has made it more difficult for day traders to make profits. As a result, these officials said that they have advised their traders to trade less frequently and in smaller lot sizes. The second change, the movement to SuperSoes and ultimately SuperMontage, is also expected to result in changes to how day traders operate. However, SuperMontage is not expected to be fully implemented until 2002. Given these ongoing changes in markets, SEC has not evaluated the effect of day traders’ growing use of ECNs on the integrity of the markets. Regulators and industry officials also said that firms now have more sophisticated monitoring systems, an area of concern identified by regulators in 1999. The firms we visited all had systems that allowed them to monitor the activity of each of their traders (retail and proprietary). In addition, many had set preestablished loss limits for traders. For example, one firm halted trading for customers who lost 30 percent of their equity in a single day. Further, some had systems that allowed them to prevent short sale violations by keeping traders from shorting ineligible stocks. These firms also had compliance departments that were responsible for monitoring the activities of the traders, and some provided regular reports to traders that detailed each trader’s daily activity and positions. Consistent with the findings of SEC and the SROs, we found that the Web sites of firms identified as offering day trading services provided prominent, easy-to-find risk disclosures or disclaimers about day trading.
Specifically, 122 of 133 or about 92 percent of the Web sites we were able to access between July and November 2000 had risk disclosures or disclaimers. Many of the firms (and branches) used the NASDR risk disclosure statement or some similar variation. In addition, some provided links to SEC and NASDR Web sites for additional information about the risks of day trading. Rather than claims of easy profitability, many of the sites now focus on trade execution speed and low fees and commissions. Of the 125 firms accepting customers, some 57 firms and their branches allowed customers to file applications online, while 67 required that account applications be faxed or mailed. Some 40 offered training opportunities or links to other providers, and 20 had employment opportunities for traders. Since 1999, day trading has continued to evolve. In general, today’s day traders appear to be more experienced and knowledgeable about securities markets than many day traders in the late 1990s. Likewise, many day trading firms have begun to focus on institutional traders as well as retail customers, and more firms are likely to engage in proprietary trading. Finally, other market participants are seeking the direct access technology offered by day trading firms in order to be able to offer fully integrated services. Regulators have taken various actions in response to concerns raised about day trading. Implementation of disclosure rules and amendments to margin rules have directly or indirectly addressed many of the concerns raised by the Permanent Subcommittee. Moreover, SEC and the SROs have continued to scrutinize the activities of day trading firms since our 2000 report. We recommended that SEC conduct another sweep of day trading firms, given their growing portion of Nasdaq trading volume and the fact that day trading is an evolving part of the industry. 
SEC addressed this recommendation through follow-up examinations of the firms included in the previous day trading sweep and ongoing examinations of day trading firms. The SROs have performed and plan to continue to perform routine examinations of broker-dealers offering day trading as a strategy. Moreover, SEC plans to continue to conduct cause examinations as needed to maintain a certain degree of scrutiny of these firms’ activities. Given the recent move to decimals and ongoing changes in the securities markets, SEC has not yet formally evaluated day trading’s effect on markets, but officials generally believe that many of the initial problems surrounding these firms have been addressed. Finally, the firms themselves have adjusted their behavior in response to market changes and regulatory scrutiny. The most noticeable changes appear in their advertising and Web site information, which in many cases now generally highlight the risks associated with day trading and the fact that day trading is not for everyone. Changes in market conditions appear to have driven many unsophisticated traders out of day trading, and increased disclosure about risks and continued regulatory oversight should help deter such traders from being lured into day trading by prospects of easy profits when market conditions improve. We requested comments on a draft of this report from the Chairman, SEC, and the President, NASDR. The Director, Office of Compliance Inspections and Examinations, SEC, and the President, NASDR, responded in writing and agreed with the report’s findings and conclusions. We also received technical comments and suggestions from SEC and NASDR that have been incorporated where appropriate. To determine how day traders and day trading firms’ operations have changed since 1999, we collected data from day trading firms, SEC, NASDR, and other relevant parties.
To determine what types of changes have occurred in day trading, we reviewed available research on the subject and interviewed state and federal regulators, as well as several knowledgeable industry officials from seven of the larger day trading firms (including six of the seven we had interviewed previously). We compared these responses with the information we obtained in our 2000 report. Specifically, we obtained insights from regulatory and industry officials on overall changes in day trading and in the number of day traders. We discussed changes in the markets, such as decimalization, and how the move to decimals has impacted day traders. We also discussed common trends among day traders and day trading firms. In addition, we collected information on changes specific to individual firm operations. Finally, we also discussed the concerns raised and recommendations made by the Permanent Subcommittee and GAO in the respective 2000 reports. To identify the actions regulators have taken to address the Permanent Subcommittee’s concerns about day trading and our report recommendations, we met with officials from SEC and NASDR to discuss their actions involving day trading oversight. We also reviewed 104 examination reports that had been completed since 1999. We determined the frequency of the violations and the actions taken by SEC and NASDR in response to those violations. We spoke with a state regulatory official from Massachusetts and an official of the North American Securities Administrators Association about day trading and state regulatory oversight activities. Finally, we reviewed newly implemented or amended rules affecting day trading to determine whether they addressed the Permanent Subcommittee’s recommendations. 
To identify any actions taken by day trading firms in response to concerns raised about day trading, we interviewed officials from six of the seven day trading firms we identified in our 2000 report and from one additional firm about the initiatives the firms were taking pertaining to issues raised by the regulators and Congress. These issues included advertising, risk disclosure, margin issues, and determinations of appropriateness. We also discussed how the firms’ operations had changed over the previous 2 years. In addition, we reviewed the Web sites of over 200 firms that we identified as day trading firms (some were actually branches of other firms). We reviewed the sites and obtained information on the account opening process, training offers, proprietary trading opportunities, and risk disclosures, among other things. We conducted our work in Jersey City and Montvale, NJ; New York, NY; Austin and Houston, TX; and Washington, D.C., between April and November 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. At that time, we will send copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing and Urban Affairs; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and Permanent Subcommittee on Investigations; Chairmen of the House Committee on Financial Services and its Subcommittee on Capital Markets, Insurance and Government Sponsored Enterprises; Chairmen of the House Energy and Commerce Committee and its Subcommittees on Commerce, Trade and Consumer Protection and on Telecommunications and the Internet; and other congressional committees. We will also send copies to the Chairman of SEC, the Presidents of NASDR and NYSE. Copies will also be made available to others upon request. 
If you or your staff have any questions regarding this report, please contact Orice M. Williams or me at (202) 512-8678. Key contributors to this report were Toayoa Aldridge, Robert F. Pollard, and Sindy Udell. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO E-mail this list to you every afternoon, go to our home page and complete the easy-to-use electronic order form found under “To Order GAO Products.” Web site: www.gao.gov/fraudnet/fraudnet.htm, E-mail: fraudnet@gao.gov, or 1-800-424-5454 (automated answering system).